In this video, I used Max 7 to turn the image/live video from my webcam into a “3D” image. First, I used a “jit.grab” object to get access to my webcam. After this I used the “ayuv2luma” object to extract the luminosity of the webcam video. The next part of the patch opens the window and maps the image across three planes, x, y, and z, and this is what gives the video its depth. Signed norm, or “snorm,” was used instead of plain “norm” because the image was being mapped in the wrong direction. All this did was allow the program to create a full matrix and map the x, y, and z planes in the proper direction and at the proper size.

I then used “sfplay~” to retrieve and play sound files along with the video, and I sampled the average amplitude of the audio 40 times a second to control how much the video moves along the z plane. Each of these parts also has a toggle switch to turn it on and off.

With all of these working together, you can see the live video being played, and when the song is playing it causes the dark parts to recede and the light parts to extrude out from the initially flat image. I demonstrated the levels when I rotated the image and placed my hand in front of me: it protruded farther because it had the most light hitting it. Also, when the close button is pressed (there is an example toward the end of the video), the window keeps whatever the last frame was. This allows the still image to be manipulated the same way the live video was.
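Since Max is a visual patching environment, here is a rough Python/numpy sketch of the same idea as an analogy, not the actual patch: a Rec. 601 luma conversion standing in for “ayuv2luma,” x/y coordinates in signed-normalized [-1, 1] range (the “snorm” mapping, versus plain “norm” in [0, 1]), and an audio amplitude sampled in 1/40-second windows driving the z displacement. All function names here are my own for illustration.

```python
import numpy as np

def rgb_to_luma(frame):
    """Per-pixel luminance (Rec. 601 weights), analogous to what the
    luma-extraction object pulls out of the webcam matrix."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def amplitude_envelope(samples, sample_rate, rate_hz=40):
    """Mean absolute amplitude over windows of 1/rate_hz seconds,
    mirroring sampling the audio level 40 times a second."""
    win = max(1, sample_rate // rate_hz)
    n = len(samples) // win
    windows = np.abs(samples[: n * win]).reshape(n, win)
    return windows.mean(axis=1)

def make_mesh(luma, amplitude, depth_scale=1.0):
    """Map each pixel to an (x, y, z) vertex: x and y are signed-
    normalized (snorm, [-1, 1], so the image is centered rather than
    sitting in one quadrant), and z scales luma by the current audio
    amplitude so bright areas extrude and dark areas stay flat."""
    h, w = luma.shape
    xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, w),
                         np.linspace(-1.0, 1.0, h))
    zs = luma * amplitude * depth_scale
    return np.stack([xs, ys, zs], axis=-1)
```

For example, a single white pixel in an otherwise black frame would get the maximum z offset for the current amplitude window, which matches the behavior in the video where the brightest regions protrude the most.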
Here’s another example of this being used: