
NEWS & EVENTS

Past News

2023 - 2024

  • Several members of the MAT community are presenting their work at the upcoming ACM CHI conference on Human-Computer Interaction in Hamburg, Germany, April 23-28.

    Some of the works presented are:

    The Impact of Navigation Aids on Search Performance and Object Recall in Wide-Area Augmented Reality (Paper). You-Jin Kim (MAT PhD student) and Radha Kumaran (CS PhD student).

    Abstract

    Head-worn augmented reality (AR) is a hotly pursued and increasingly feasible contender paradigm for replacing or complementing smartphones and watches for continual information consumption. Here, we compare three different AR navigation aids (on-screen compass, on-screen radar and in-world vertical arrows) in a wide-area outdoor user study (n=24) where participants search for hidden virtual target items amongst physical and virtual objects. We analyzed participants’ search task performance, movements, eye-gaze, survey responses and object recall. There were two key findings. First, all navigational aids enhanced search performance relative to a control condition, with some benefit and strongest user preference for in-world arrows. Second, users recalled fewer physical objects than virtual objects in the environment, suggesting reduced awareness of the physical environment. Together, these findings suggest that while navigational aids presented in AR can enhance search task performance, users may pay less attention to the physical environment, which could have undesirable side-effects.


    Comparing Zealous and Restrained AI Recommendations in a Real-World Human-AI Collaboration Task. Chengyuan Xu (MAT PhD student), Kuo-Chin Lien (Appen), Tobias Höllerer (MAT, CS Professor).

    Abstract

    When designing an AI-assisted decision-making system, there is often a tradeoff between precision and recall in the AI’s recommendations. We argue that careful exploitation of this tradeoff can harness the complementary strengths in the human-AI collaboration to significantly improve team performance. We investigate a real-world video anonymization task for which recall is paramount and more costly to improve. We analyze the performance of 78 professional annotators working with a) no AI assistance, b) a high-precision "restrained" AI, and c) a high-recall "zealous" AI in over 3,466 person-hours of annotation work. In comparison, the zealous AI helps human teammates achieve significantly shorter task completion time and higher recall. In a follow-up study, we remove AI assistance for everyone and find negative training effects on annotators trained with the restrained AI. These findings and our analysis point to important implications for the design of AI assistance in recall-demanding scenarios.
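The precision-recall tradeoff at the heart of this abstract can be illustrated with a toy detector whose single confidence threshold is tuned up or down. This is not the paper's system; the scores and ground-truth labels below are fabricated purely for illustration.

```python
# Illustrative only: how one detection threshold trades precision for recall.
# Scores and labels are made-up toy data, not from the study.

def precision_recall(scores, labels, threshold):
    """Compute (precision, recall) for predictions scoring at or above `threshold`."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy detector scores (higher = more confident a person appears in the frame).
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    1,    0,    1,    0,    1,    0]

# A low threshold gives a high-recall "zealous" configuration;
# a high threshold gives a high-precision "restrained" one.
zealous = precision_recall(scores, labels, 0.25)
restrained = precision_recall(scores, labels, 0.75)
```

Lowering the threshold catches every true positive at the cost of extra false positives, which suits a task like anonymization where missing a face is far costlier than flagging one too many.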


    PunchPrint: Creating Composite Fiber-Filament Craft Artifacts by Integrating Punch Needle Embroidery and 3D Printing (Paper). Ashley Del Valle (MAT PhD student), Mert Toka (MAT PhD student), Alejandro Aponte (MAT PhD student), Jennifer Jacobs (MAT Assistant Professor).

    Abstract

    New printing strategies have enabled 3D-printed materials that imitate traditional textiles. These filament-based textiles are easy to fabricate but lack the look and feel of fiber textiles. We seek to augment 3D-printed textiles with needlecraft to produce composite materials that integrate the programmability of additive fabrication with the richness of traditional textile craft. We present PunchPrint: a technique for integrating fiber and filament in a textile by combining punch needle embroidery and 3D printing. Using a toolpath that imitates textile weave structure, we print a flexible fabric that provides a substrate for punch needle production. We evaluate our material’s robustness through tensile strength and needle compatibility tests. We integrate our technique into a parametric design tool and produce functional artifacts that show how PunchPrint broadens punch needle craft by reducing labor in small, detailed artifacts, enabling the integration of openings and multiple yarn weights, and scaffolding soft 3D structures.

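The abstract mentions a toolpath that imitates textile weave structure. The paper's actual generator is not reproduced here; as a loose, hypothetical sketch, a serpentine (back-and-forth) raster of the kind commonly used to lay filament in rows can be expressed as a list of waypoints. All dimensions below are illustrative.

```python
# Hypothetical sketch: a serpentine print toolpath over a rectangular patch,
# of the general kind used to deposit filament in alternating rows.
# Not PunchPrint's published toolpath generator; parameters are made up.

def serpentine_toolpath(width, height, row_spacing):
    """Return (x, y) waypoints for a back-and-forth raster over a rectangle."""
    points = []
    y = 0.0
    left_to_right = True
    while y <= height:
        if left_to_right:
            points.append((0.0, y))      # start at the left edge
            points.append((width, y))    # sweep to the right edge
        else:
            points.append((width, y))    # start at the right edge
            points.append((0.0, y))      # sweep back to the left edge
        left_to_right = not left_to_right
        y += row_spacing
    return points

# A 40 x 10 mm patch rastered at 2 mm row spacing (illustrative units).
path = serpentine_toolpath(width=40.0, height=10.0, row_spacing=2.0)
```

Alternating the sweep direction each row keeps travel moves short, which is why serpentine fills are a common default in slicers.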


    Fencing Hallucination: An Interactive Installation for Fencing with AI and Synthesizing Chronophotographs. Weihao Qiu (MAT PhD student) and George Legrady (MAT Professor).

    Abstract

    Fencing Hallucination is a multi-screen interactive installation that enables real-time human-AI interaction in the form of a Fencing game and generates a chronophotograph based on the audience’s movement. It mitigates the conflicts between interactivity, modality variety, and computational limitation in creative AI tools. Fencing Hallucination captures the audience’s pose data as an input to the Multilayer Perceptron (MLP), which generates the virtual AI Fencer’s pose data. It also uses the audience’s pose to synthesize the chronophotograph. The system first represents pose data as stick figures. Then it uses a diffusion model to perform image-to-image translations, converting the stick figures into a series of realistic fencing images. Finally, it combines all images with an additive effect into one image as the result. This multi-step process overcomes the challenge of preserving both the overall motion patterns and fine details when synthesizing a chronophotograph.
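The final step the abstract describes, combining all the diffusion-generated frames "with an additive effect into one image," can be sketched per pixel. A common way to realize such an effect is a lighten blend (per-pixel maximum across frames); the installation's actual compositing is not published in this excerpt, and the tiny grayscale frames below are made-up data.

```python
# Illustrative sketch of merging a frame sequence into one chronophotograph
# via a per-pixel maximum (lighten blend). One plausible reading of the
# "additive effect" described in the abstract, not the installation's code.

def additive_combine(frames):
    """Merge same-sized grayscale frames by taking the brightest value per pixel."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [
        [max(frame[r][c] for frame in frames) for c in range(cols)]
        for r in range(rows)
    ]

# Three 2x3 toy "frames": a bright spot moving left to right on a dark field.
frames = [
    [[255, 0, 0], [0, 0, 0]],
    [[0, 255, 0], [0, 0, 0]],
    [[0, 0, 255], [0, 0, 0]],
]

# All three positions of the moving spot appear in the single merged image,
# which is exactly the superimposed-motion look of a chronophotograph.
chrono = additive_combine(frames)
```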


    chi2023.acm.org

  • Professor JoAnn Kuchera-Morin gave a talk titled "The Multi-Modality of Complex Systems - Spatial-Temporal Exploration" at ACM SIGGRAPH DAC SPARKS - The Art of Scientific Visualization: Perceiving the Imperceptible, on April 28, 2023.
    Abstract

    Complex systems in nature unfold over many spatial and temporal dimensions. The systems we perceive easily, as the world around us, are limited to what we can see, hear, and interact with. But what about complex systems that we cannot perceive, those that exist at the atomic or sub-atomic scale? Can we bring these systems to human scale and view their data just as we do when viewing real-world phenomena? For a composer working with sound across many spatial and temporal dimensions, shape and form come to life through sound transformation. What seems visually imperceptible becomes real and visually perceptible in the composer’s mind. As media artists, we can now take these transformational structures from the auditory to the visual and interactive domain through frequency transformation. Can we apply these transformations to complex, imperceptible scientific models to see, hear, and interact with these systems, bringing them to human scale?

    About the SPARKS session:

    Our understanding of the world is limited by the capacity of our senses to ingest information and also by our brain’s ability to interpret it. Through the use of technology, we know that the universe we live in is far more complex and rich with information than what can be perceived by humanity. From microscopic to cosmic, information that transcends our lived experiences is difficult to comprehend. Our ability to augment our senses with technology has resulted in an accumulation of vast amounts of data, often in a form that needs to be translated to be understood. This SPARKS session explores the conceptual and creative aspects of scientific visualization.
