Subscribe to the MAT Announcements email list

Subscribe to MAT Announcements

 

Human?

Yes
No 

MAT End of Year Show 2025

Dates:
UCSB Elings Hall - June 3rd
SBCAST - June 5th

www.mat.ucsb.edu/eoys2025


Develop your technical literacy and creative design skills

For more information, visit:
UCSB Summer Sessions website

Media Arts and Technology

Graduate Program

University of California Santa Barbara

Events

The Media Arts and Technology Program at UCSB presents Deep Cuts, our 2025 End of Year Show:

  • UCSB | Tuesday, June 3 | 5-8PM
    California NanoSystems Institute, Elings Hall (2nd Floor)

    Research, exhibitions, demos

    Explore the latest work of MAT’s research labs in their full splendor and experience the unique and justly famous AlloSphere.

Paid parking is available in lot 10 (adjacent to Elings Hall).


  • SBCAST | Thursday, June 5 | 6-10PM (Live performances begin 8PM)
    Santa Barbara Center for Art, Science and Technology (531 Garden Street)

    Installations, performances, urbanXR

Enjoy installations, performances, and urbanXR in a unique and festive setting at Santa Barbara’s extraordinary SBCAST compound.

This second event takes place as part of Santa Barbara’s “First Thursday,” highlighting the collaboration of the MAT Alloplex Studio/Lab@SBCAST with SBCAST and the Santa Barbara creative community. Live music performances (from 8 to 10 p.m.) will feature the MAT Create Ensemble. Large-scale projection mapping will be presented after dark.

    Paid parking is available in city lots 10 and 11.


    sbcast.org

More about the exhibition (PDF)

The National Taiwan Museum of Fine Arts

Abstract

Visual artificial intelligence systems increasingly shape how images are perceived, interpreted, and created, prompting new questions about the design, transparency, and experiential nature of machine vision. This thesis investigates experiential AI, an approach grounded in perceptual experience, interpretability, and interaction. Drawing from machine learning, human-computer interaction, and computational media arts, it explores how visual AI can be made not only technically capable but also perceptually meaningful and socially intelligible.

At the core of this inquiry is the development of design strategies that reveal and structure the internal processes of image analysis and synthesis. These strategies treat visual computation as a site of creative and conceptual engagement, emphasizing interpretability through multimodal and interactive systems.

Through two case studies, the thesis demonstrates how users can engage directly with the perceptual mechanisms of AI. The first focuses on visualizing the representational structures within neural networks as spatial environments, facilitating intuitive understanding of how visual features evolve. The second introduces gaze-informed conditioning techniques for generative systems, using perceptual signals to guide and evaluate image synthesis.
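
As a rough illustration of the idea behind the first case study, the sketch below projects one CNN layer's channel activations into 3-D coordinates that could seed such a spatial environment. It assumes PyTorch, torchvision, and scikit-learn; the model (ResNet-18), the hooked layer, and the PCA projection are illustrative stand-ins, not the thesis's actual interactive system.

```python
# Sketch: give each channel of a CNN layer a 3-D position so that
# features with similar activation patterns land near one another.
# ResNet-18, the "layer3" hook, and PCA are assumptions for this demo.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}
def save_activation(_module, _input, output):
    activations["layer3"] = output.detach()

model.layer3.register_forward_hook(save_activation)

image = torch.rand(1, 3, 224, 224)  # stand-in for a real input image
with torch.no_grad():
    model(image)

# Treat each channel as one visual "feature": flatten its spatial map.
feats = activations["layer3"][0]              # (channels, H, W)
vectors = feats.flatten(start_dim=1).numpy()  # (channels, H*W)

# PCA to 3-D: each channel gets an (x, y, z) position in a layout that
# a viewer could navigate as a spatial environment.
coords = PCA(n_components=3).fit_transform(vectors)
for i, (x, y, z) in enumerate(coords[:5]):
    print(f"feature {i}: ({x:.2f}, {y:.2f}, {z:.2f})")
```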

Together, these investigations propose a broader research direction for interactive visual computation, one that foregrounds transparency, user interpretation, and perceptual alignment. This work contributes to the growing field of explainable and human-centered AI by demonstrating that visual models can be both expressive and interpretable. It proposes new possibilities for designing visual AI systems that are not only high-performing, but also capable of reasoning about their outputs in ways that support creativity, insight, and trust.

Abstract

The integration of Artificial Intelligence (AI) into mass cultural production has brought its underlying limitations—stereotypical representations, distorted perspectives, and misinformation—into the spotlight. Many of the risks of AI disproportionately impact minority cultural groups, including Black people. Now that AI is embedded in institutions like education, journalism, and museums, it is transforming how we access and reproduce sociocultural knowledge, while continuing to exclude or misrepresent many aspects of Black culture, if they are included at all. To reimagine AI systems as tools for meaningful and inclusive creative production, we must look beyond conventional technical disciplines and embrace a radical, decolonized imagination. I aim to use Afrofuturism—a methodological framework rooted in Afrodiasporic visions of the future and grounded in critical engagement with race, class, and power—as a rich foundation for creating more equitable and imaginative technological innovations.

As a starting point, I build on existing calls to develop technology that does more than mitigate harm—a space I define as liberatory technology. Within this space, I identify liberatory collections—community-led repositories that amplify Black voices—as powerful models of data curation that empower communities historically marginalized by traditional AI and archival systems. My survey of fourteen such collections reveals innovative, culturally rooted approaches to preserving and sharing knowledge. I use these findings to argue for consent-driven training models, sustained funding for community-based initiatives, and the meaningful integration of Black histories and cultures into AI systems.

Expanding on this foundation, I have conducted preliminary interviews with Afrofuturist data stewards—Black technologists who carefully collect, curate, store, and use data related to Black speculative projects. My early analysis shows that these creators approach AI with a strong sense of cultural responsibility—not only to critique it, but to retool it in service of Black life. They navigate this historically fraught technological space by reclaiming tools once used for harm and repurposing them toward Black joy, rest, and healing. Through Afrofuturist perspectives, they reimagine historical data and artifacts, envisioning futures grounded in possibility rather than oppression.

Abstract

Large language models (LLMs) have revolutionized digital content production, automating text generation at unprecedented scale and fundamentally restructuring media markets.

Language agents are AI planning systems that break down complex problems using natural language reasoning. However, they face persistent challenges with hallucinations and memory retention in extended conversations—core technical barriers driving current research. Most critically, LLMs—the backbone of language agents—struggle with a deceptively simple creative challenge: producing realistic, long-running dialogue that captures authentic conflict, tension, and the unpredictable dynamics of human interaction.

This project applies language agents to narrative planning for synthetic entertainment. It builds upon foundational work in computational narratology, computer planning, and agent-based AI. It addresses key challenges in narrative AI: maintaining long-term narrative structure, demonstrating character agency, and generating contextually appropriate story progressions that adhere to causal logic. Specifically, it seeks to generate an agent-based drama – generative reality television and entertainment featuring natural, believable characters with internalities and actions consistent with their principles and psyche.

My software models a conversation as a turn-based game, akin to chess, in which each turn ends with a character’s utterance. Characters identify themselves and one another through established television tropes, which are then used to synthesize a personal narrative for each agent. Each character’s interpretation of the conversation unfolds as a rich step-based “chain of thought” that drives the character's motivations and actions. The system aims to improve the quality of LLM output by combining the structural benefits of traditional formal computer planning techniques and data pipelines with richer “reasoning” and greater believability in the agents.
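
As a loose sketch of this turn-based structure (not the project's actual implementation), the following Python models trope-seeded personas, per-agent narratives, and a private chain-of-thought step before each utterance. The `llm` function is a placeholder for any text-generation call, and the character names and tropes are invented for the example.

```python
# Sketch: a conversation as a turn-based game, one utterance per turn.
# Each character reflects privately (chain of thought) before speaking.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned reply for the demo."""
    return f"(model output for: {prompt[:40]}...)"

@dataclass
class Character:
    name: str
    tropes: list[str]                     # TV tropes that seed the persona
    narrative: str = ""                   # synthesized personal backstory
    thoughts: list[str] = field(default_factory=list)

    def reflect(self, transcript: list[str]) -> str:
        """Chain-of-thought step: privately interpret the conversation."""
        thought = llm(
            f"You are {self.name} ({', '.join(self.tropes)}). "
            f"Backstory: {self.narrative}\n"
            "Conversation so far:\n" + "\n".join(transcript) +
            "\nPrivately, what do you make of this, and what do you want?"
        )
        self.thoughts.append(thought)     # kept visible for transparency
        return thought

    def speak(self, transcript: list[str], thought: str) -> str:
        return llm(
            f"As {self.name}, continue the conversation:\n"
            + "\n".join(transcript)
            + f"\nGuided by this private thought: {thought}\n"
            "Reply with one line of dialogue."
        )

# Synthesize a personal narrative for each agent from its tropes.
cast = [Character("Ava", ["The Perfectionist"]),
        Character("Max", ["The Charmer"])]
for c in cast:
    c.narrative = llm(f"Write a one-paragraph backstory for {c.name} "
                      f"built from the tropes: {', '.join(c.tropes)}.")

# Each turn, like a move in chess, ends with exactly one utterance.
transcript: list[str] = []
for turn in range(4):
    speaker = cast[turn % len(cast)]
    thought = speaker.reflect(transcript)
    line = speaker.speak(transcript, thought)
    transcript.append(f"{speaker.name}: {line}")
```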

This project also takes a position on the dire need for transparency and explainability in AI. The sudden explosion of language agents in particular has created a still-nascent market of agent-based software products, with a disorienting amount of vaporware that could inflate a bubble which, if burst, risks ushering in a new AI winter.

By visualizing each step of the pipeline, as well as the natural-language text flowing through the recursive workflow, this project exposes its inner workings at every layer rather than further obfuscating an under-recognized technology inside yet another black box. The result is emergent storytelling through a transparent pipeline of machine cognition.
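
As a loose illustration of this transparency principle, the sketch below wraps each pipeline stage so that its input and output text are recorded for later inspection or visualization. The stage names and the trivial stand-in functions are invented for the example, not taken from the project.

```python
# Sketch: every stage of a recursive text pipeline logs its input and
# output, so the whole inner working stays legible instead of hidden.
import json
from typing import Callable

class TransparentPipeline:
    def __init__(self):
        self.trace: list[dict] = []       # full record of every step

    def run_stage(self, name: str, fn: Callable[[str], str], text: str) -> str:
        out = fn(text)
        self.trace.append({"stage": name, "input": text, "output": out})
        return out

pipe = TransparentPipeline()
draft = pipe.run_stage("plan_scene", lambda t: f"PLAN({t})", "pilot premise")
draft = pipe.run_stage("write_dialogue", lambda t: f"DIALOGUE({t})", draft)
print(json.dumps(pipe.trace, indent=2))   # the entire workflow, inspectable
```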

Abstract

Most of the world’s megacities are located in the Low Elevation Coastal Zone (LECZ), which represents 2% of the world’s total land area and 11% of the global population. The number of people living in the LECZ increased by 200 million from 1990 to 2015 and is projected to reach 1 billion by 2050. These areas are especially vulnerable to the effects of coastal processes such as sea level rise, coastal erosion, and flooding, all exacerbated by a warming global climate. The goal of the coastal sciences is to characterize these processes to inform coastal management projects and other applied use cases. Coastal environments are exceptionally dynamic: numerous marine and land processes, operating across a range of spatio-temporal scales, influence the hydrological and morphological profile to varying extents across discrete coastal sections. The spatio-temporal sampling requirements for characterizing coastal processes, coupled with the volatile nature of the area, make in-situ sampling difficult and traditional remote sensing techniques ineffective. Bespoke remote sensing solutions are therefore required.

Our goal is to aggregate knowledge and review modern practices that pertain to remote sensing of coastal environments for oceanographic, morphological, and ecological field research in order to provide a framework for developing low-cost integrated remote sensing systems for coastal survey and monitoring.

A multidisciplinary approach that combines biological, physical, and chemical data gathered through remote sensing, ground-truth observations, and numerical models subject to data assimilation techniques is currently the optimal method for characterizing coastal phenomena. Data fusion algorithms that integrate measurements from disparate sensor types deployed in conjunction produce more accurate and detailed information, as sketched below. An international network of remote sensing systems deploys a variety of sensors (e.g., radar, sonar, and multispectral image sensors) from a range of platforms (e.g., spaceborne, airborne, shipborne, and land-based) to compile robust open-source datasets that facilitate coastal research. However, models are currently limited by a lack of data on particular environmental parameters and by the limited extent and regularity of high-resolution data collection projects. The scientific coastal monitoring and survey network must be expanded to address this need. Recent technological advancements, including the heightened accuracy and decreased footprint of sensors and microprocessors, the increased coverage of GNSS and internet services, and the implementation of machine learning techniques for data processing, have enabled the development of scalable remote sensing solutions that can expand the global network of environmental survey and monitoring systems.
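
As a toy illustration of the data fusion idea, the sketch below combines two noisy estimates of the same coastal quantity by inverse-variance weighting, a standard baseline for fusing disparate sensors. The sensor types and numbers are invented for the example; operational fusion pipelines are far more sophisticated.

```python
# Sketch: fuse two noisy estimates of the same quantity (e.g., shoreline
# position from radar vs. multispectral imagery) by inverse-variance
# weighting. All values here are illustrative, not real measurements.
import numpy as np

def fuse(estimates: np.ndarray, variances: np.ndarray):
    """Inverse-variance weighted mean and its resulting variance."""
    w = 1.0 / variances
    fused = np.sum(w * estimates) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Shoreline position (m from a reference line) from two sensor types:
radar_est, radar_var = 42.0, 4.0        # coarser, but all-weather
optical_est, optical_var = 40.5, 1.0    # finer, but weather-limited

pos, var = fuse(np.array([radar_est, optical_est]),
                np.array([radar_var, optical_var]))
print(f"fused shoreline position: {pos:.2f} m (variance {var:.2f})")
# The fused variance (0.80) is lower than either sensor's alone.
```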

Past Events