2009 Spring
MAT 594CP Open Projects in Optical/Motion-Computational Processes
Instructor: George Legrady | TA: Andres Burbano
Elings Hall, lab 2611, CNSI Building 2nd floor
Wednesday: 2pm-6pm
Course Description
This is an open studio projects course focused on cameras, lasers, and other devices connected to a computer to realize visualizations and other time-based events.
Each student will plan, realize, and evaluate a project of their own. The course will function in the tradition of the studio critique, with students presenting work-in-progress and receiving regular feedback from faculty and course participants. Completion of the course requires a finished project, a concept statement, and documentation of the project featured on the course website.
The course will consist mostly of lab time, individual meetings, and approximately three work-in-progress student presentations. Depending on the range of participants' interests, lectures may be given on topics such as anamorphs, experiments in multiple exposure, spatial & virtual exploration, distance/presence, reflection and penetration (x-ray, infrared, etc.), medical imaging (MRI, PET), astronomy, and cameras that function as sensors, recording, and vision devices.
Mailing List Archives
Resources
Equipment available for exploration includes various XVGA-resolution FireWire cameras, a 3-color ProLaser ShowCube II laser, and an MT9 3D accelerometer.
Course References
Final Projects
Dennis Adderton | binocular modulation | EWE Exhibition
Contours | EWE Exhibition
Action Painting and Interaction | EWE Exhibition
3D Scene Reconstruction by Stereo Imaging
DigitalSoundScan
Automata MSCRI | EWE Exhibition
[4.01]...... Introduction | Previous Course Introduction PPT
[4.08]...... Individual meetings (assigned course meeting schedule with attached project schedules)
2:00 Andres Burbano TA | schedule
2:30 Karthik Malasani | schedule
3:00 Pehr Hovey | schedule
3:30 Dennis Adderton | schedule
4:00 Matt Hubert | schedule
4:30 Monica Quinlan | schedule
[4.15]...... Project Proposal & Schedule Presentations
[4.22]...... Individual meetings
[4.29]...... MidTerm Presentation
[5.06]...... Individual meetings
[5.13]...... Advanced Work-In-Progress Group Presentation
[5.20]...... Individual meetings
[5.27]...... Individual meetings
[6.03]...... Dead week: Individual meetings
[6.10]...... Final Presentation, with webpage documentation of project
Karthik Malasani | In this project, I will explore a camera framework for obtaining a 3D reconstruction from a set of images. The framework consists of a set of cameras separated in space, each taking a picture of the same object. By mapping point correspondences across the different images, and with prior knowledge of the camera placement, it is possible to extract depth information precisely. From the extracted depth information, a 3D "image" of the scene corresponding to that view can be constructed.
Furthermore, I will try to obtain a complete 3D reconstruction of the scene by taking pictures with the proposed camera framework from different angles and, if possible, determine the minimum number of views needed. | PDF MidTerm Presentation
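
As an illustration of the depth-extraction step, the following is a minimal sketch using OpenCV's block-matching stereo correspondence, assuming a calibrated and rectified image pair; the file names, focal length, and baseline are placeholders rather than values from the project.

// stereo_depth_sketch.cpp - minimal depth-from-stereo example (OpenCV C++ API).
// Assumes the two images are already rectified so that corresponding points
// lie on the same scanline; file names, focal length and baseline are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) {
        std::cerr << "could not load the image pair\n";
        return 1;
    }

    // Block-matching stereo correspondence: 64-pixel disparity range, 21x21 blocks.
    cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 21);
    cv::Mat disparity16;                       // fixed-point disparity, CV_16S
    matcher->compute(left, right, disparity16);

    // Convert to floating-point disparity (StereoBM scales values by 16).
    cv::Mat disparity;
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);

    // Depth from disparity: Z = f * B / d, with focal length f (pixels)
    // and baseline B (distance between the two cameras).
    const float f = 700.0f;                    // placeholder focal length
    const float B = 0.1f;                      // placeholder baseline, meters
    cv::Mat safeDisp = cv::max(disparity, 1.0f);   // avoid division by zero
    cv::Mat depth = f * B / safeDisp;

    // Save a normalized visualization of the disparity map.
    cv::Mat vis;
    cv::normalize(disparity16, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("disparity.png", vis);
    std::cout << "depth map computed: " << depth.cols << "x" << depth.rows << "\n";
    return 0;
}

With calibrated cameras, merging such depth maps from several viewpoints is what would give the complete reconstruction described above.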
Pehr Hovey | This quarter I am working with a ProLaser ShowCube laser projector that is capable of drawing curves on a wall in red, green, or yellow through a combination of its red and green lasers. My project, currently dubbed Laser Shadow, intends to use a webcam to track passersby and project their 'shadow' on the wall behind them so that it follows them. The first major part of the project involves the video image processing needed to extract a shadow-like curve that can be plotted with the laser. I am using OpenCV to perform the computer vision needed to process the video feed. The application is built with openFrameworks, an up-and-coming C++ framework that aims to make it simple to create multimedia applications in C++. So far it has been very useful, as it includes video capture and OpenCV support out of the box.
I have extended the default OpenCV example to start in the direction needed to interface with the laser. Currently it converts the webcam feed to grayscale and subtracts a pre-captured background image to (hopefully) isolate just the subject of interest. This does not always work perfectly if the subject (in grayscale) matches an element in the background; darker clothing works best. The resulting image is thresholded to create a straight black-or-white image to be processed. I then use blob detection to trace the outline of the person, which works well as long as they do not blend into the background. | PDF MidTerm Presentation
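
For reference, here is a minimal sketch of that chain (grayscale conversion, background subtraction, thresholding, contour tracing) written against plain OpenCV rather than the openFrameworks wrappers used in the project; the threshold value and camera index are assumptions.

// laser_shadow_sketch.cpp - background subtraction and blob outline (OpenCV).
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cam(0);                         // assumed default webcam
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray, background, diff, mask;
    cam >> frame;
    if (frame.empty()) return 1;
    cv::cvtColor(frame, background, cv::COLOR_BGR2GRAY);   // pre-captured background

    while (true) {
        cam >> frame;
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Subtract the static background, then threshold to a binary mask.
        cv::absdiff(gray, background, diff);
        cv::threshold(diff, mask, 40, 255, cv::THRESH_BINARY);   // assumed threshold

        // Trace blob outlines; keep the largest one as the subject of interest.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        int largest = -1;
        double maxArea = 0.0;
        for (int i = 0; i < (int)contours.size(); ++i) {
            double area = cv::contourArea(contours[i]);
            if (area > maxArea) { maxArea = area; largest = i; }
        }
        if (largest >= 0) {
            // This outline is what would be resampled and sent to the laser.
            cv::drawContours(frame, contours, largest, cv::Scalar(0, 255, 0), 2);
        }

        cv::imshow("laser shadow preview", frame);
        if (cv::waitKey(30) == 27) break;            // Esc to quit
    }
    return 0;
}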
Dennis Adderton | Measurement problem.
Matt Hubert | Deconstructing Action Painting. I would like to use a 3D accelerometer to control a 3D environment. Initially, we will use the accelerometer to move a cursor through the space and trace the path of movement with colored lines. As in hand painting, lines of varying thickness will follow the cursor's movement, thinning as the velocity increases. We hope to achieve an environment for "action painting", similar to Jackson Pollock's work. The environment would allow for a representation of the artist's physical movement rather than a purely conceptual one.
Time permitting, we would also like the cursor to interact with pre-existing 3D objects. Using collision detection, the cursor can move objects by applying the force to which the accelerometer was subjected. Combining this with the painting concept, the user's acts of deconstructing an environment are traced by their acts of constructing one: as they move and knock down objects, they leave behind a trail of paint. Potentially, the same environment they construct through painting can later be deconstructed through collision detection.
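
A rough sketch of the stroke-thickness idea follows, assuming a simple integration of the accelerometer signal and an arbitrary thickness-versus-speed mapping; readAccel() stands in for the actual MT9 driver call.

// action_painting_sketch.cpp - mapping accelerometer motion to stroke thickness.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Placeholder for the MT9 driver call; here it returns a synthetic
// circular motion so the sketch runs without hardware.
Vec3 readAccel() {
    static float t = 0.0f;
    t += 1.0f / 60.0f;
    return { std::cos(t), std::sin(t), 0.0f };
}

int main() {
    Vec3 vel = {0, 0, 0}, pos = {0, 0, 0};
    const float dt = 1.0f / 60.0f;       // assumed 60 Hz update rate
    const float maxThickness = 12.0f;    // thickest stroke when nearly still

    for (int frame = 0; frame < 600; ++frame) {       // ~10 seconds of strokes
        Vec3 a = readAccel();

        // Integrate acceleration -> velocity -> cursor position.
        // (A real version would filter and damp the signal to limit drift.)
        vel.x += a.x * dt;  vel.y += a.y * dt;  vel.z += a.z * dt;
        pos.x += vel.x * dt; pos.y += vel.y * dt; pos.z += vel.z * dt;

        // Thinner line at higher speed, like a fast brush stroke.
        float speed = std::sqrt(vel.x * vel.x + vel.y * vel.y + vel.z * vel.z);
        float thickness = maxThickness / (1.0f + 4.0f * speed);

        // A renderer would append (pos, thickness) to the current stroke here.
        std::printf("pos=(%.2f, %.2f, %.2f)  thickness=%.2f\n",
                    pos.x, pos.y, pos.z, thickness);
    }
    return 0;
}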
Monica Quinlan | Digital Slitscan Video. My project will consist of manipulating live digital video input to produce live slitscan videos. The slitscan process provides a surrealistic interpretation of the boundaries of space and time. The initial code will be based on the aesthetics of Zbig Rybczynski's experimental film, The Fourth Dimension. I will then experiment to produce variations in how the program records images. The ultimate goal is an interactive project in which the distortion of the scene is controlled by participants. | PDF MidTerm Presentation
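
A minimal sketch of one possible slitscan treatment follows, in which each row of the output is sampled from a progressively older frame so that vertical position maps onto time; the buffer depth and the row-to-delay mapping are illustrative choices, not the project's actual parameters.

// slitscan_sketch.cpp - time-displacement slitscan on a live feed (OpenCV).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <deque>

int main() {
    cv::VideoCapture cam(0);
    if (!cam.isOpened()) return 1;

    std::deque<cv::Mat> history;          // most recent frame at the front
    const int maxDelay = 60;              // ~2 seconds of history at 30 fps

    cv::Mat frame;
    while (true) {
        cam >> frame;
        if (frame.empty()) break;
        history.push_front(frame.clone());
        if ((int)history.size() > maxDelay) history.pop_back();

        // Build the output: the lower the row, the older the frame it comes from.
        cv::Mat out = frame.clone();
        for (int y = 0; y < out.rows; ++y) {
            int delay = y * (int)(history.size() - 1) / std::max(out.rows - 1, 1);
            history[delay].row(y).copyTo(out.row(y));
        }

        cv::imshow("slitscan", out);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}

Varying how rows (or columns) are mapped to past frames is one way to produce the recording variations described above.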
Andres Burbano (TA) | Modular Solution for Corridor Reactive Installations (MSCRI) is a sketch for a set of solutions for creating installations that use real-time video as the main input source. The work should be understood in the context of projects done in the "Computational Optics" course. The visual component of MSCRI explores light masses in real time by working with the present and the near past of the video image. The spectator's body affects the system through his/her motion, but nothing is attached to the body. In MSCRI, the real-time video image is mixed or blended with itself, but using different previous frames. Visualizations of algorithms are used as a curtain (mask) behind the real-time video and the previous frames. Additional video filters are explored in order to enhance the predominance of certain luminance conditions. Some basic robotic elements are also included in order to scan a wider area with a distance sensor. | HTML MidTerm Presentation
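
A minimal sketch of the self-blending idea, mixing the live frame with a frame from its near past using OpenCV; the delay and blend weights are assumptions, not the installation's actual settings.

// mscri_blend_sketch.cpp - blending the live frame with its own near past (OpenCV).
#include <opencv2/opencv.hpp>
#include <deque>

int main() {
    cv::VideoCapture cam(0);
    if (!cam.isOpened()) return 1;

    std::deque<cv::Mat> past;            // short history of previous frames
    const int delay = 15;                // blend with the frame ~0.5 s ago at 30 fps
    const double wNow = 0.6, wPast = 0.4;

    cv::Mat frame, blended;
    while (true) {
        cam >> frame;
        if (frame.empty()) break;

        past.push_back(frame.clone());
        if ((int)past.size() > delay) past.pop_front();

        // Self-blend: the present frame mixed with a frame from the near past.
        cv::addWeighted(frame, wNow, past.front(), wPast, 0.0, blended);

        cv::imshow("MSCRI blend", blended);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}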
Past Projects
Grading
Completion of project: 40%