I am a Research Scientist in Human-Computer Interaction and Engineering.
I have five years of experience conducting both quantitative and qualitative research, with a focus on developing new digital fabrication methods for novel technologies such as shape-changing displays and wearables. My work has been published at venues in Human-Computer Interaction, Interactive Systems Design, and Robotics.
– Shape-Changing Displays
– Wearable Technologies
– Tangibles and Data Physicalization
My expertise in Human-Computer Interaction (HCI) includes the design and development of shape-changing displays, data physicalization, and wearable technologies. Through a range of quantitative and qualitative user studies, I have built a deeper understanding of how people use and interact with these emerging tangible technologies.
– Computer Aided Design (CAD)
– Collaborative Design
– Iterative Prototyping
I have a strong background in design research, specializing not only in the aesthetics and functionality of interactive hardware systems and interfaces, but also in developing theoretical design principles, particularly to establish new design spaces for the next generation of fabrication methods (e.g. functional fabrication) and technologies.
– Mechanical and Soft Robotics
– Material Science (e.g. meta-materials and auxetics)
– Acoustics (e.g. particle levitation and manipulation)
My expertise in hardware and mechanical engineering for actuation has translated to robotics, and my work is now moving towards soft robotic actuation solutions. My research also draws on material science and computational geometry (via MATLAB) to design auxetic structures that can be utilized for shape-morphing interfaces.
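As a minimal illustration of the kind of parametric geometry this involves (the research itself uses MATLAB; this Python sketch and its `reentrant_cell` function are purely hypothetical, not code from any of the projects described here), a re-entrant honeycomb unit cell, a classic auxetic building block, can be generated from a few rib parameters:

```python
import math

def reentrant_cell(width=10.0, height=10.0, theta_deg=-30.0):
    """Return the six (x, y) joint vertices of one re-entrant honeycomb cell.

    A negative rib angle theta pulls the mid-height rib tips inward,
    the geometric feature associated with auxetic (negative Poisson's
    ratio) behaviour when the lattice is stretched.
    """
    theta = math.radians(theta_deg)
    rib = height / 2.0 / math.cos(theta)   # length of each diagonal rib
    dx = rib * math.sin(theta)             # horizontal offset of rib tips
    return [
        (0.0, 0.0),                        # bottom-left wall joint
        (width / 2.0 + dx, height / 2.0),  # left rib tip (re-entrant)
        (0.0, height),                     # top-left wall joint
        (width, height),                   # top-right wall joint
        (width / 2.0 - dx, height / 2.0),  # right rib tip (re-entrant)
        (width, 0.0),                      # bottom-right wall joint
    ]

# Tiling copies of this cell (offset by width/height) yields a lattice
# whose rib angle can then be swept to tune the shape-morphing response.
```

Sweeping `theta_deg` from negative (re-entrant, auxetic) to positive (conventional honeycomb) values is one simple way to explore how cell geometry drives the macroscopic deformation of such a surface.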
– Stereolithography (SLA)
– Multi-material fused filament fabrication (FDM)
– Laser Cutting Techniques
I specialize in various additive manufacturing methods, including SLA and multi-material FDM 3D printing. Over the last five years I have published numerous research papers on digital fabrication approaches that have been adopted by other researchers. Currently, I am developing new fabrication methods using acoustic technologies such as ultrasound.
MorpheesPlug: A Toolkit for Prototyping Shape-Changing UIs
Hyunyoung Kim, Aluna Everitt, Carlos Tejada, Mengyu Zhong, and Daniel Ashbrook. 2021. MorpheesPlug: A Toolkit for Prototyping Shape-Changing Interfaces. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 101, 1–13. DOI:https://doi.org/10.1145/3411764.3445786
Toolkits for shape-changing interfaces (SCIs) enable designers and researchers to easily explore the broad design space of SCIs. However, despite their utility, existing approaches are often limited in the number of shape-change features they can express. This paper introduces MorpheesPlug, a toolkit for creating SCIs that covers seven of the eleven shape-change features identified in the literature. MorpheesPlug comprises (1) a set of six standardized widgets that express the shape-change features with user-definable parameters; (2) software for 3D-modeling the widgets to create 3D-printable pneumatic SCIs; and (3) a hardware platform to control the widgets. To evaluate MorpheesPlug we carried out ten open-ended interviews with novice and expert designers who were asked to design an SCI using our software. Participants highlighted the ease of use and expressivity of MorpheesPlug.
3D Printing Deformable Surfaces for Shape-Changing Displays
Aluna Everitt and Jason Alexander. 2019. 3D Printed Deformable Surfaces for Shape-Changing Displays. Frontiers in Robotics and AI 6: 80. DOI:https://doi.org/10.3389/frobt.2019.00080
We use interlinked 3D printed panels to fabricate deformable surfaces that are specifically designed for shape-changing displays. Our exploration of 3D printed deformable surfaces, as a fabrication technique for shape-changing displays, shows new and diverse forms of shape output, visualizations, and interaction capabilities. This article describes our general design and fabrication approach, the impact of varying surface design parameters, and a demonstration of possible application examples. We conclude by discussing current limitations and future directions for this work.
Laser Cutting Deformable Surfaces for Shape-Changing Displays
Aluna Everitt and Jason Alexander. 2017. PolySurface: A Design Approach for Rapid Prototyping of Shape-Changing Displays Using Semi-Solid Surfaces. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ’17). Association for Computing Machinery, New York, NY, USA, 1283–1294. DOI:https://doi.org/10.1145/3064663.3064677
We present a design approach for rapid fabrication of high-fidelity interactive shape-changing displays using bespoke semi-solid surfaces. This is achieved by segmenting virtual representations of the given data and mapping them to a dynamic physical polygonal surface. First, we establish the design and fabrication approach for generating semi-solid reconfigurable surfaces. Secondly, we demonstrate the generalizability of this approach by presenting design sessions using datasets provided by experts from a diverse range of domains. Thirdly, we evaluate user engagement with the prototype hardware systems that are built. We learned that all participants, none of whom had previously interacted with shape-changing displays, were able to successfully design interactive hardware systems that physically represent data specific to their work. Finally, we reflect on the content generated to understand whether our approach is effective at representing intended output based on a set of user-defined functionality requirements.
Exploring Shape-Changing Content Generation by the Public
Aluna Everitt, Faisal Taher, and Jason Alexander. 2016. ShapeCanvas: An Exploration of Shape-Changing Content Generation by Members of the Public. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, New York, NY, USA, 2778–2782. DOI:https://doi.org/10.1145/2858036.2858316
Shape-changing displays – visual output surfaces with physically-reconfigurable geometry – provide new challenges for content generation. Content design must incorporate visual elements and physical surface shape, react to user input, and adapt these parameters over time. The addition of the ‘shape channel’ significantly increases the complexity of content design, but provides a powerful platform for novel physical design, animations, and physicalizations. In this work we use ShapeCanvas, a 4×4 grid of large actuated pixels, combined with simple interactions, to explore novice user behavior and interactions for shape-change content design. We deployed ShapeCanvas in a café for two and a half days and observed users generate 21 physical animations. These were grouped into seven categories, with eight directly derived from people’s personal interests. This paper describes these experiences and the generated animations, and provides initial insights into shape-changing content design.