Sound Synthesis by Computer: Models for the Interaction of the Player with Strings
Models for the interaction of the player with the instrument are fundamental to the accurate synthesis of sound based on physically inspired models. Depending on the musical instrument, the palette of possible interactions is generally very broad and includes the coupling of body parts, mechanical objects, and/or devices with various components of the instrument. In this talk we focus on the interaction of the player with strings, whose simulation requires accurate models of the fingers, dynamic models of the bow and the plectrum, and friction models for objects such as bottlenecks. We also consider collisions and imperfect pressure on the fingerboard as important side effects and playing styles. Our models do not depend on the specific numerical implementation but are illustrated here in the digital waveguide scheme.
Gianpaolo Evangelista is professor of Music Informatics at the University of Music and Performing Arts Vienna, Austria. Previously he was professor of Sound Technology at Linköping University, Sweden; researcher and assistant professor at the University “Federico II” of Naples, Italy; and adjunct professor at the Polytechnic of Lausanne (EPFL), Switzerland. He received the Laurea in Physics from the University “Federico II” of Naples and the Master's and PhD degrees in Electrical and Computer Engineering from the University of California, Irvine. He has collaborated with several musicians, including Iannis Xenakis (Paris) and Curtis Roads. His interests span all applications of signal processing, physics, and mathematics to sound and music, particularly the analysis, synthesis, special effects, and separation of sound sources.
Suleyman will also give a lecture for the UCSB Arts and Lectures series in Campbell Hall on October 5th, 2023 at 7:30pm.
Mustafa Suleyman CBE is a British artificial intelligence researcher and entrepreneur who is the co-founder and former head of applied AI at DeepMind, an artificial intelligence company acquired by Google and now owned by Alphabet. He is currently the CEO of Inflection AI, and author of the book "The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma."
Suleyman was recently interviewed on MSNBC, where he discussed the future of AI and its impact on humanity.
Dynamic Theater: Location-Based Immersive Dance Theater, Investigating User Guidance and Experience
Dynamic Theater explores the use of augmented reality (AR) in immersive theater as a platform for digital dance performances. The project presents a locomotion-based experience that allows for full spatial exploration. A large indoor AR theater space was designed to allow users to freely explore the augmented environment. The curated wide-area experience employs various guidance mechanisms to direct users to the main content zones. Results from our 20-person user study show how users experience the performance piece while using a guidance system. The study highlights the importance of stage layout, guidance system, and dancer placement in immersive theater experiences, as these cater to user preferences while enhancing the overall reception of digital content in wide-area AR. Observations from working with dancers and choreographers, as well as their experience and feedback, are also discussed. Co-authors are Joshua Lu and Tobias Höllerer.
Reality Distortion Room: A Study of User Locomotion Responses to Spatial Augmented Reality Effects
Reality Distortion Room (RDR) is a proof-of-concept augmented reality system using projection mapping and unencumbered interaction with the Microsoft RoomAlive system to study a user’s locomotive response to visual effects that seemingly transform the physical room the user is in. This study presents five effects that augment the appearance of a physical room to subtly encourage user motion. Our experiment demonstrates users’ reactions to the different distortion and augmentation effects in a standard living room, with the distortion effects projected as wall grids, furniture holograms, and small particles in the air. The augmented living room can give the impression of becoming elongated, wrapped, shifted, elevated, and enlarged. The study results support the implementation of AR experiences in limited physical spaces by providing an initial understanding of how users can be subtly encouraged to move throughout a room. Co-authors are Andrew D. Wilson, Jennifer Jacobs, and Tobias Höllerer.
Synaptic Time Tunnel, SIGGRAPH 2023.
Sponsored by Autodesk, the Synaptic Time Tunnel was a tribute to 50 years of innovation and achievement in the field of computer graphics and interactive techniques that have been presented at the SIGGRAPH conferences.
An international audience of more than 14,275 attendees from 78 countries enjoyed the conference and its Mobile and Virtual Access component.
Marcos Novak - MAT Chair and transLAB Director, UCSB
Graham Wakefield - York University, UCSB
Haru Ji - York University, UCSB
Nefeli Manoudaki - transLAB, MAT/UCSB
Iason Paterakis - transLAB, MAT/UCSB
Diarmid Flatley - transLAB, MAT/UCSB
Ryan Millet - transLAB, MAT/UCSB
Kon Hyong Kim - AlloSphere Research Group, MAT/UCSB
Gustavo Rincon - AlloSphere Research Group, MAT/UCSB
Weihao Qiu - Experimental Visualization Lab, MAT/UCSB
Pau Rosello Diaz - transLAB, MAT/UCSB
Alan Macy - BIOPAC Systems Inc.
JoAnn Kuchera-Morin - AlloSphere Research Group, MAT/UCSB
Devon Frost - MAT/UCSB
Alysia James - Department of Theater and Dance/UCSB
More information about the Synaptic Time Tunnel can be found in the following news articles:
Complex systems in nature unfold over many spatial and temporal dimensions. The systems we perceive easily, as the world around us, are limited by what we can see, hear, and interact with. But what about complex systems that we cannot perceive, those that exist at the atomic or sub-atomic scale? Can we bring these systems to human scale and view this data just as we view real-world phenomena? As a composer working with sound across many spatial and temporal dimensions, I find that shape and form come to life through sound transformation. What seems visually imperceptible becomes real and visually perceptible in the composer’s mind. As media artists, we can now take these transformational structures from the auditory to the visual and interactive domain through frequency transformation. Can we apply these transformations to complex, imperceptible scientific models to see, hear, and interact with these systems, bringing them to human scale?
About the SPARKS session:
Our understanding of the world is limited by the capacity of our senses to ingest information and also by our brain’s ability to interpret it. Through the use of technology, we know that the universe we live in is far more complex and rich with information than what can be perceived by humanity. From microscopic to cosmic, information that transcends our lived experiences is difficult to comprehend. Our ability to augment our senses with technology has resulted in an accumulation of vast amounts of data, often in a form that needs to be translated to be understood. This SPARKS session explores the conceptual and creative aspects of scientific visualization.
Released in March 2023, Xenos is a virtual instrument plug-in that implements and extends the Dynamic Stochastic Synthesis (DSS) algorithm invented by Iannis Xenakis and notably employed in the 1991 composition GENDY3. DSS produces a wave of variable periodicity through regular stochastic variation of its wave cycle, resulting in emergent pitch and timbral features. While high-level parametric control of the algorithm enables a variety of musical behaviors, composing with DSS is difficult because its parameters lack basis in perceptual qualities.
Xenos thus implements DSS with modifications and extensions that enhance its suitability for general composition. Written in C++ using the JUCE framework, Xenos offers DSS in a convenient, efficient, and widely compatible polyphonic synthesizer that facilitates composition and performance through host-software features, including MIDI input and parameter automation. Xenos also introduces a pitch-quantization feature that tunes each period of the wave to the nearest frequency in an arbitrary scale. Custom scales can be loaded via the Scala tuning standard, enabling both xenharmonic composition at the mesostructural level and investigation of the timbral effects of microtonal pitch sets on the microsound timescale.
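To make the idea behind DSS concrete, here is a minimal sketch of its core mechanism: a wave cycle defined by breakpoints whose durations and amplitudes each take a bounded random walk from period to period, with values reflecting off barriers. This is an illustration of the general technique only, not the Xenos source code; all function names, parameters, and bounds below are illustrative assumptions.

```python
import random

def dss_period(points, amp_step=0.05, dur_step=1,
               amp_bounds=(-1.0, 1.0), dur_bounds=(2, 40)):
    """One stochastic update of a DSS wave cycle.

    points: list of (duration_in_samples, amplitude) breakpoints.
    Each breakpoint takes a bounded random walk; out-of-range values
    are folded back by reflecting barriers, as in Xenakis's GENDY.
    """
    def reflect(x, lo, hi):
        # Fold x back into [lo, hi] (reflecting barriers).
        while x < lo or x > hi:
            x = 2 * lo - x if x < lo else 2 * hi - x
        return x

    new_points = []
    for dur, amp in points:
        dur = reflect(dur + random.choice([-dur_step, 0, dur_step]), *dur_bounds)
        amp = reflect(amp + random.uniform(-amp_step, amp_step), *amp_bounds)
        new_points.append((dur, amp))
    return new_points

def render_period(points):
    """Linearly interpolate the breakpoints into one wave cycle."""
    samples = []
    for i, (dur, amp) in enumerate(points):
        next_amp = points[(i + 1) % len(points)][1]
        for n in range(int(dur)):
            samples.append(amp + (next_amp - amp) * n / dur)
    return samples
```

Calling `dss_period` repeatedly and concatenating the rendered cycles yields the evolving waveform: the total cycle length (the sum of the durations) drifts stochastically, which is what produces the emergent, wandering pitch characteristic of DSS. A pitch-quantization step like the one in Xenos would snap that total length to the nearest period of a scale frequency before rendering.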
A good review of Xenos can be found at Music Radar: www.musicradar.com/news/fantastic-free-synths-xenos.
Xenos GitHub page: github.com/raphaelradna/xenos.
There is also an introductory YouTube video:
Raphael completed his Master's degree in Media Arts and Technology in Fall 2022 and is currently pursuing a PhD in Music Composition at UCSB.
ACM SIGGRAPH is the premier conference and exhibition on computer graphics and interactive techniques. This year they celebrate their 50th conference and reflect on half a century of discovery and advancement while charting a course for the bold and limitless future ahead.
Burbano is a native of Pasto, Colombia and an associate professor in Universidad de los Andes’s School of Architecture and Design. As a contributor to the conference, Burbano has presented research within the Art Papers program (in 2017), and as a volunteer, has served on the SIGGRAPH 2018, 2020, and 2021 conference committees. Most recently, Burbano served as the first-ever chair of the Retrospective Program in 2021, which honored the history of computer graphics and interactive techniques. Andres received his PhD from Media Arts and Technology in 2013.
The next ACM SIGGRAPH conference will be held in August 2023 in Los Angeles, California: s2023.siggraph.org.
EmissionControl2 is a granular sound synthesizer. The theory of granular synthesis is described in the book Microsound (Curtis Roads, 2001, MIT Press).
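The core idea of granular synthesis, as described in Microsound, is to build sound from many short, enveloped "grains" of a source signal scattered in time. The following is a minimal sketch of that idea, not the EmissionControl2 implementation; the function name, parameters, and envelope choice are illustrative assumptions.

```python
import math
import random

def granulate(source, num_grains=200, grain_dur=441, out_len=44100):
    """Scatter enveloped grains from `source` into an output buffer.

    Each grain copies `grain_dur` samples from a random source offset,
    shapes them with a Hann envelope, and mixes them at a random output
    position -- the basic operation of granular synthesis.
    """
    out = [0.0] * out_len
    # Hann window: fades each grain in and out to avoid clicks.
    env = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_dur - 1))
           for n in range(grain_dur)]
    for _ in range(num_grains):
        src_pos = random.randrange(len(source) - grain_dur)
        out_pos = random.randrange(out_len - grain_dur)
        for n in range(grain_dur):
            out[out_pos + n] += source[src_pos + n] * env[n]
    return out
```

In a full granulator such as EmissionControl2, parameters like grain duration, density, pitch, and envelope shape are continuously controllable and can themselves be modulated, which is where the expressive range of the technique comes from.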
Released in October 2020, the new app was developed by a team consisting of Professor Curtis Roads acting as project manager, with software developers Jack Kilgore and Rodney Duplessis. Kilgore is a computer science major at UCSB. Duplessis is a PhD student in music composition at UCSB and is also pursuing a Master's degree in the Media Arts and Technology graduate program.
EmissionControl2 is free and open-source software available at: github.com/jackkilgore/EmissionControl2/releases/latest
The project was supported by a Faculty Research Grant from the UCSB Academic Senate.
Media Arts and Technology (MAT) at UCSB is a transdisciplinary graduate program that fuses emergent media, computer science, engineering, electronic music and digital art research, practice, production, and theory. Created by faculty in both the College of Engineering and the College of Letters and Science, MAT offers an unparalleled opportunity for working at the frontiers of art, science, and technology, where new art forms are born and new expressive media are invented.
In MAT, we seek to define and to create the future of media art and media technology. Our research explores the limits of what is possible in technologically sophisticated art and media, both from an artistic and an engineering viewpoint. Combining art, science, engineering, and theory, MAT graduate studies provide students with a combination of critical and technical tools that prepare them for leadership roles in artistic, engineering, production/direction, educational, and research contexts.
The program offers Master of Science and Ph.D. degrees in Media Arts and Technology. MAT students may focus on an area of emphasis (multimedia engineering, electronic music and sound design, or visual and spatial arts), but all students should strive to transcend traditional disciplinary boundaries and work with other students and faculty in collaborative, multidisciplinary research projects and courses.