Kurt Kaminsky, MS MAT (2017)
The convergence of GPUs and spatial sensors invites the exploration of novel interactive experiences. Next-generation audiovisual synthesis instruments benefit greatly from these technologies, since their components demand significant computing resources and robust input methods. Physical modeling is one technique that shares these requirements, yet the expressive potential of real-time physical modeling remains rarely exploited in the domain of visual performance.
Melange is an audiovisual instrument that maps gestural input to a highly evocative real-time fluid dynamics model for image and sound synthesis. The system consists of a small depth camera, which translates physical gestures into velocity input, and a MIDI controller for parameter adjustment and chord selection. Audio is generated by the displacement of a spring-mass system that acts as a set of virtual strings resting on top of, and influenced by, the fluid simulation. By oscillating over the strings at different rates, and by modulating properties of the fluid and the springs, the performer can produce a rich visual and sonic palette.
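The abstract does not give implementation details for this fluid-to-string coupling, but the idea can be sketched: at each step, sample the fluid's velocity beneath each mass of a damped spring-mass chain, apply it as a driving force, and read the displacement of a pickup point as the audio signal. Everything below (the VirtualString class, the grid size, the coupling and damping constants) is a hypothetical NumPy illustration under those assumptions, not the instrument's actual code.

```python
import numpy as np

def sample_field(f, x, y):
    """Bilinearly sample a 2-D field f at continuous grid coordinates (x, y)."""
    h, w = f.shape
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    fx, fy = np.clip(x - x0, 0.0, 1.0), np.clip(y - y0, 0.0, 1.0)
    return ((1 - fx) * (1 - fy) * f[y0, x0] + fx * (1 - fy) * f[y0, x0 + 1]
            + (1 - fx) * fy * f[y0 + 1, x0] + fx * fy * f[y0 + 1, x0 + 1])

class VirtualString:
    """A damped spring-mass chain resting on one row of the fluid grid.
    The fluid's local velocity drives each mass; the displacement of a
    pickup point near the string's center is taken as the audio signal."""
    def __init__(self, n=64, row=32.0, grid=64,
                 tension=0.3, damping=0.002, couple=0.05):
        self.xs = np.linspace(1.0, grid - 2.0, n)  # string spans the grid
        self.row = np.full(n, row)                 # grid row it rests on
        self.u = np.zeros(n)                       # transverse displacement
        self.v = np.zeros(n)                       # transverse velocity
        self.tension, self.damping, self.couple = tension, damping, couple

    def step(self, fluid_v, dt=1.0):
        # The fluid pushes each mass in proportion to the local velocity.
        drive = self.couple * sample_field(fluid_v, self.xs, self.row)
        # A discrete Laplacian supplies the spring (tension) restoring force.
        lap = np.zeros_like(self.u)
        lap[1:-1] = self.u[:-2] - 2.0 * self.u[1:-1] + self.u[2:]
        self.v += dt * (self.tension * lap + drive) - self.damping * self.v
        self.u += dt * self.v
        self.u[0] = self.u[-1] = 0.0               # fixed endpoints
        return self.u[len(self.u) // 2]            # pickup sample

# Stand-in velocity field; in the instrument this would come from the solver.
fluid_v = 0.1 * np.random.randn(64, 64)
string = VirtualString()
audio = np.array([string.step(fluid_v) for _ in range(512)])
```

Pitch would follow from tension and string length, with chord selection on the MIDI controller presumably retuning several such strings; the static random field here merely demonstrates the data flow.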
Melange, named for a mixture of often incongruous elements, combines several concepts and techniques to study dynamic fluid simulation as a material for artistic instrument design. More broadly, the project belongs to at least three centuries of experiments in fusing abstract image and sound, sometimes called visual music or color music. While real fluids have been used in performance settings before, such as the liquid light shows of the 1960s, their connection to sound and music has by necessity been interpretive or premeditated. A meaningful association between real fluids and music is difficult to establish because one of the defining properties of a fluid, its velocity, is difficult to measure with appreciable granularity or speed. Now, however, because high-fidelity simulations can be performed in real time, it is possible to apply the velocity field directly to instrument design.
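The passage above leans on the feasibility of real-time fluid simulation without naming a method. A common choice for interactive work (an assumption here, not a claim about Melange's solver) is Jos Stam's stable-fluids scheme, whose core is a semi-Lagrangian advection step: trace each grid cell backward along the velocity field and sample the field at the departure point. A minimal sketch:

```python
import numpy as np

def advect(f, u, v, dt):
    """Semi-Lagrangian advection (the core of Stam's stable-fluids method):
    trace each cell backward through the velocity field (u, v) and
    bilinearly sample f at the departure point. Unconditionally stable."""
    h, w = f.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x = np.clip(xs - dt * u, 0.0, w - 1.001)   # backtraced departure points,
    y = np.clip(ys - dt * v, 0.0, h - 1.001)   # clamped to the grid interior
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * f[y0, x0] + fx * (1 - fy) * f[y0, x0 + 1]
            + (1 - fx) * fy * f[y0 + 1, x0] + fx * fy * f[y0 + 1, x0 + 1])

# Gestures from the depth camera would be splatted into (u, v) each frame;
# a synthetic field stands in for them here. A full solver would also
# diffuse the field and project it to be divergence-free after this step.
h = w = 128
u, v = 0.5 * np.random.randn(h, w), 0.5 * np.random.randn(h, w)
for _ in range(10):
    u, v = advect(u, u, v, 1.0), advect(v, u, v, 1.0)
```

The velocity field this loop maintains is exactly the quantity the instrument taps, sampled by the virtual strings for sound and rendered for image.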