Anaglyphic stereoscopic real-time 3D system.
3D Disco is an audio-visual performance I have been instrumental in creating, and it is now gaining momentum as a straight AV piece. From small showcases we have noted the draw and interest it has on the public, especially young people. I want to develop what we do with 3D further and create a system that is real-time and interactive.
Any 3D space can be turned into a stereoscopic view: two versions of the scene are rendered, one for each eye. The difference between the views is what tricks the brain into perceiving a 3D scene. The simplest way of producing this is anaglyphic visuals: red/cyan glasses and colour-offset graphics. This technique can be played back on normal screens and single-projection systems and needs only basic colour filters in the glasses, whereas many other systems need either dual projection or complicated shutter glasses to give the 3D effect.
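The compositing step can be sketched in plain Java (rather than a full Processing sketch; the pixel values below are purely illustrative): the left eye's render supplies the red channel and the right eye's render supplies green and blue, so the red filter passes only the left image and the cyan filter only the right.

```java
// Minimal sketch of red/cyan anaglyph compositing, assuming two
// pre-rendered views whose pixels are packed as 0xRRGGBB ints
// (the same layout as Processing's pixels[] array).
public class Anaglyph {
    static int composite(int left, int right) {
        int red  = left  & 0xFF0000;  // keep only red from the left-eye view
        int cyan = right & 0x00FFFF;  // keep green+blue from the right-eye view
        return red | cyan;            // overlay the two filtered views
    }

    public static void main(String[] args) {
        int left  = 0xAABBCC;  // hypothetical left-eye pixel
        int right = 0x112233;  // hypothetical right-eye pixel
        System.out.printf("%06X%n", composite(left, right)); // prints AA2233
    }
}
```

In a real-time renderer the same masking would be done per frame, either over the pixel buffer or with OpenGL colour masks, which is one reason the anaglyph approach stays cheap.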
There will be two main parts to the project: the viewing element and the generative element. The generative part will be fairly straightforward, as many other artists have already written code that delivers similar things, so the viewing part will be the main focus of my work.
The technique revolves around splitting the scene into the two views the eyes would see, rendering both and overlaying them as the final output. The colour filters in the glasses allow each eye to see only its own version, giving the 3D effect. There are many factors involved in getting this to work correctly:
- Eye separation – the distance between the two cameras
- Convergence – how much 'toe-in' the cameras have, which also has a bearing on...
- Depth of field – defined by the camera, though the structure of the scene affects this
- Colours within the scene – no red or cyan should appear in the scene itself, as these interfere with the glasses' filters
- Movement within the scene – the speed of animation has a strong bearing on how well the 3D effect works, because the brain takes a certain amount of time to make sense of what it is presented with. Still images give the best 3D effect for this reason.
- Distance from screen / size of screen – the position of the viewer is important as well.
The last point is a large part of the motivation for a system which produces 3D in real time. Every installation is different (the size of screen, the distance of the viewer from it, and so on), so being able to adjust the camera separation in particular, in real time, will give better results without having to go through the long process of re-rendering linear video.
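Why the separation is the key live control can be seen in a standard simplified parallax relation (my own illustration, not from any existing system): for a parallel-camera rig whose images are shifted to converge on a plane at distance c, a point at depth z appears with on-screen parallax e·(1 − c/z). The parallax, and hence how hard the effect pushes a given screen size and viewing distance, scales directly with the eye separation e.

```java
// Simplified on-screen parallax for a parallel-camera rig converged on
// a plane at distance c: zero on the convergence plane, negative in
// front of it (pops out), approaching e far behind it. Units and the
// example values are illustrative.
public class Parallax {
    static double parallax(double e, double c, double z) {
        return e * (1.0 - c / z);
    }

    public static void main(String[] args) {
        double e = 6.5, c = 100.0;
        System.out.println(parallax(e, c, 100.0)); // 0.0: on the screen plane
        System.out.println(parallax(e, c, 200.0)); // 3.25: behind the screen
        System.out.println(parallax(e, c, 50.0));  // -6.5: in front of it
    }
}
```

Halving e halves every parallax value at once, which is exactly the kind of whole-scene adjustment an installation needs when the screen or the audience moves.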
The end product would allow a performer to produce a real-time 3D show, as well as installations with interactive elements.
The two main technologies used will be Processing and OpenFrameWorks. Processing will be perfect for sketching up the project and getting a working model; that sketch will then be ported to OpenFrameWorks, as the extra performance it offers will be required for real-time work. Within Processing, OpenGL will be used for the graphical production, again for the performance possibilities. I also want to investigate ways in which other programming environments could be 'plugged' into the stereographic part, as this would give a really flexible system in which the likes of VVVV could be used.
I will also be looking to use Blender to create 3D shapes for importing into the system.
The normal method is to create pre-rendered content; packages like After Effects and 3ds Max have anaglyph plug-ins. For real time there has been some work in Flash, using the PaperVision plug-in and ActionScript. There are also some Processing and Quartz Composer patches, but the results aren't too good: the colour splits and scales seem to be 'faked' instead of using real 3D space and camera separation. Quartz Composer and VVVV could be really good software to build this project in, as they generally run very fast, but I'd prefer to concentrate on a procedural programming language, as I feel this will give me a better basis and a true understanding of what is happening. It will also be easier to port from a coded environment to a node-based patching system than the other way round.
One of the main aspirations of this project is to develop the interactive elements for installation within an immersive environment. I already have a working relationship with a company called Igloo Vision, who have a 13m domed tent with full 360° projection inside. This is used for both festival and corporate work, and we are developing a show within it. Creating interactive elements for the Igloo is an end goal; creating the 3D engine is the start of it all.
16 Mar 2009 Finish initial presentations, basic technical specifications sorted
23 Mar 2009 Research the maths involved and finalise my understanding of this side of things. Research other systems (Flash, Processing, OpenGL) that are already out there. Sketch up ideas.
30 Mar 2009 Create basic reactive / interactive graphical system in Processing
6 Apr 2009 Working prototype of 3D in Processing. Combine with the above for a basic system
13 Apr 2009 Tweak and finalise the above, start OpenFrameWorks version
20 Apr 2009 Continue working on OF version
27 Apr 2009 Finalise OF version, test, test, test
4 May 2009 All ready for the presentation on the 7th (one week early due to other commitments)