FM

quicktime of installation video (9.5MB)
Project Description

FM is an interactive, dual-location video installation. In FM, two separate spaces are joined by a third space created by the video projections in each location. When a participant first enters one of the installation rooms, they are presented with an image that only hints at the existence of a connected second space: only traces and outlines of movement are visible. A camera in each location determines the participant's position in that space, and the cameras are set up so that there is a one-to-one mapping of locations from one camera to the other. When participants occupy the "same" location in both spaces, a visual communication channel opens up in the video projections, corresponding precisely to the intersection of the two participants' bodies. At first, the intersecting body parts are blended together in a straightforward manner, but as the size of the intersection grows, the participants' bodies are warped in time, creating a combined spatio-temporal communication zone.

Work

For this project, I managed the production side of things, directing and integrating the various pieces of visual design and software production. I created a video filter for Jitter called xray.jit.timecube that allows an arbitrary temporal mapping of a video signal, as well as a Jitter patch that renders a video image as if the scene were viewed through ground glass. I then integrated patches designed by the group for background subtraction and motion detection into the main project patch.
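As a rough illustration of what an "arbitrary temporal mapping" of a video signal can mean, the sketch below keeps a ring buffer of recent frames and reads each output pixel from a per-pixel delay map. This is a NumPy approximation of the idea, not the xray.jit.timecube external itself (which is a Jitter object); the class name TimeCube, its depth parameter, and the push/sample methods are illustrative assumptions.

```python
import numpy as np

class TimeCube:
    """Ring buffer of recent frames sampled through a per-pixel delay map,
    giving an arbitrary temporal mapping of the video signal."""

    def __init__(self, height, width, depth=60):
        self.depth = depth
        self.buffer = np.zeros((depth, height, width, 3), dtype=np.float32)
        self.head = 0  # index of the most recently written frame

    def push(self, frame):
        """Store the newest (H, W, 3) frame, overwriting the oldest one."""
        self.head = (self.head + 1) % self.depth
        self.buffer[self.head] = frame

    def sample(self, delay_map):
        """delay_map: int array (H, W); each value says how many frames
        back in time that output pixel is read from."""
        h, w = delay_map.shape
        frame_idx = (self.head - delay_map) % self.depth
        rows = np.arange(h)[:, None]   # broadcast row indices over columns
        cols = np.arange(w)[None, :]   # broadcast column indices over rows
        return self.buffer[frame_idx, rows, cols]
```

Feeding in a delay map that varies smoothly across the image smears the picture across time, which is the kind of spatio-temporal warp the installation applies to the intersecting bodies.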
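The intersection-based communication channel described above can be sketched as a simple compositing step: blend the two camera images only where the segmented silhouettes overlap, and keep the hinting image everywhere else. This is an assumed NumPy illustration rather than the project's Jitter patch; intersection_blend and its arguments are hypothetical names.

```python
import numpy as np

def intersection_blend(frame_a, frame_b, mask_a, mask_b, hidden):
    """Open a visual channel only where the two silhouettes coincide.

    frame_a, frame_b : (H, W, 3) float images from the two cameras
    mask_a, mask_b   : (H, W) bool silhouettes of the two participants
    hidden           : (H, W, 3) traces-and-outlines image shown wherever
                       the bodies do not intersect
    """
    # One-to-one camera mapping: the same pixel set in both masks means the
    # participants occupy the "same" location in their respective rooms.
    overlap = mask_a & mask_b

    # Straightforward blend of the two bodies inside the overlap.
    blended = 0.5 * frame_a + 0.5 * frame_b
    out = np.where(overlap[..., None], blended, hidden)
    return out, overlap
```

As the overlap grows, its area could drive the delay map passed to a TimeCube-style buffer, producing the time warp of the intersecting bodies described above.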
Future

To take this project the next step from prototype to working installation, issues concerning the lighting of the installation space and the speed of the video processing software will need to be addressed. For the lighting, we need to experiment with different setups and segmentation algorithms in order to find a balance between algorithmic complexity, quality of the segmentation, and speed of the processing. Currently, we are using a simple background subtraction algorithm with thresholding along with a rudimentary lighting system. The algorithm could be improved by adding rudimentary morphological analysis so that pixels can be said to belong to a shape. This would reduce errors in borderline thresholding cases, where pixels within a thresholded form are counted as background simply because their difference in value from the background falls below the threshold. Instead, each pixel would have a sense of its neighborhood and, based on a combined metric of distance from the background and neighborhood relations, could be judged to be part of the shape or not.
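A minimal sketch of the proposed refinement, assuming grayscale frames normalized to [0, 1]: the first pass is the current per-pixel threshold, and the second pass combines each pixel's distance from the background with support from its eight neighbors, so borderline pixels inside a shape are pulled back in. The function name segment and the neighbor_weight parameter are illustrative, not part of the existing patch.

```python
import numpy as np

def segment(frame, background, thresh=0.12, neighbor_weight=0.5):
    """Background subtraction with a neighborhood-aware second pass.

    frame, background : (H, W) float grayscale images in [0, 1]
    Returns a bool mask of foreground pixels.
    """
    diff = np.abs(frame - background)

    # First pass: plain per-pixel thresholding (the current approach).
    fg = diff > thresh

    # Second pass: count how many of the 8 neighbors were called foreground.
    padded = np.pad(fg.astype(np.float32), 1)
    neighbors = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    ) / 8.0

    # Combined metric: normalized distance from the background plus
    # support from the pixel's neighborhood.
    score = diff / thresh + neighbor_weight * neighbors
    return score > 1.0
```

A full morphological pass (for example, a closing operation) or connected-component labeling could replace the neighbor count, but the combined score keeps the per-pixel cost close to the current thresholding step.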