
oF – baby steps

Finally have found some time to get started with openFrameworks.  First impressions – errr, wtf!  Definitely trickier to get into than Processing, but once past the initial set-up, and working out where files have to live so they compile correctly, the similarities become obvious in the actual coding – just stick an ‘of’ in front of everything…

As I said, baby steps, here’s my first bit of graphics…next up animation…

project catch up….

Far far too long since I posted anything – summer, which traditionally is when we’d have a bit of time, has been bloody manic.  All good though, lots of 3D Discos and other projects in full flow.

Which all means that my work towards my project has been pushed to the sideline – not ideal, but that’s how things pan out.

Had a really good chat with Jamie about my project, getting down to the nitty gritty of what my aims are for the project as well as the course overall – I have always focussed on my masters being a pragmatic choice that contributes to my work in the ‘real world’.  I guess my main passion is creating audio visual works which really are AV – I get sick of seeing work where the visual element is a bolt-on afterthought.  I also have a ‘thing’ about the term ‘vjing’ – it’s a horrible word for starters – but I’ve never liked the automatic ‘art’ labelling either. Most live visuals, especially in club situations, are not much more than moving wallpaper, and that’s how it should be for a lot of these situations – the context and environment are not conducive to displaying ‘artistic’ content.  Many VJs seem to want to introduce narrative or statements into this situation where it just does not belong.

If an artistic piece is created, it needs to be displayed or performed within the correct context and environment; if there is a narrative involved, surely the audience needs to be able to absorb the piece and give it attention.  I would say the majority of the work I do is design as opposed to art – it serves a purpose – and it is important that work I produce that is artistic in nature is placed / performed in a situation that complements it.

Apologies for this ranting post – but it does set the scene for what I aim to attempt in my project.  There are two main strands:

1/ Audiovisual production of work.  Working through my thoughts and talking to Jamie has made me focus on what I really want to gain from the course – the production of audiovisual works!  Through this project I want to explore, by producing one or several pieces, how a fully cohesive audiovisual work can be made, instead of audio + visual where there is a hierarchical divide (usually the audio comes first).  Golan Levin has talked about ‘audiovisual material’ and this interests me – could there be specific elements of the audio and the visual that are very close in their make-up, and could rules be created to work from?

2/ Instrumentation – This is what I have talked about in discussions, where there is a general frustration and almost a backlash against the ‘laptop performer’ – what is demonstrably ‘live’ in modern electronic performance when the artist is behind a laptop?  Working with Jon Hook has allowed me to explore creating hardware and software that could bridge that gap and give the audience a stronger understanding of, and association with, what the artist is doing and how this is affecting parts of the performance.  Here’s a screenshot of an example that Jon has coded up – it’s software that runs on a ‘heads up’ multi-touch surface:

How it works is that the graphics in the background (at present the teapot) are coded up in OpenGL (fingers crossed OF soon) with parameters exposed to be controlled.  Each of the coloured ‘rings’ has nodes which assign a level to a parameter (so for example scale, position, rotation), and as the ‘radar pointer’ passes a node that value is assigned to the parameter.  Quite simple, but we think this will be effective for working in an AV sense – it works almost like a step sequencer, but rotational – the BPM of the audio can be set as the RPM of the ‘radar’.
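To get my head around the logic before the real graphics exist, here’s a very rough Processing sketch of the idea – a pointer rotating at a rate derived from the BPM, with nodes on a ring that push their values into a parameter (the scale of a box here) as the pointer sweeps past them. The node positions, node values and the one-bar-per-revolution mapping are my own placeholder assumptions, not Jon’s actual code.

```java
// Rough sketch of the 'radar' idea: nodes on a ring each hold a value, and when
// the rotating pointer passes a node that value is assigned to a parameter
// (here, the scale of a box standing in for the teapot). Placeholder values throughout.
float bpm = 120;
float pointerAngle = 0;
float[] nodeAngles = { 0, HALF_PI, PI, PI + HALF_PI };   // 4 nodes around the ring
float[] nodeValues = { 0.5, 1.0, 1.5, 0.75 };            // scale values they hold
float currentScale = 1.0;

void setup() {
  size(600, 600, P3D);
}

void draw() {
  background(0);

  // One bar (4 beats) per revolution, so rpm = bpm / 4
  float step = (bpm / 4.0) * TWO_PI / (60.0 * max(frameRate, 1));
  float prev = pointerAngle;
  pointerAngle = (pointerAngle + step) % TWO_PI;

  // When the pointer sweeps past a node, assign that node's value to the parameter
  for (int i = 0; i < nodeAngles.length; i++) {
    if (crossed(prev, pointerAngle, nodeAngles[i])) {
      currentScale = nodeValues[i];
    }
  }

  // The background graphics driven by the parameter
  pushMatrix();
  translate(width / 2, height / 2, 0);
  scale(currentScale);
  rotateY(frameCount * 0.01);
  noFill();
  stroke(255);
  box(100);
  popMatrix();

  // Draw the ring, its nodes and the radar pointer
  translate(width / 2, height / 2);
  noFill();
  stroke(100);
  ellipse(0, 0, 300, 300);
  for (int i = 0; i < nodeAngles.length; i++) {
    fill(255, 0, 0);
    ellipse(150 * cos(nodeAngles[i]), 150 * sin(nodeAngles[i]), 10, 10);
  }
  stroke(0, 255, 0);
  line(0, 0, 150 * cos(pointerAngle), 150 * sin(pointerAngle));
}

// True if the sweep from angle a to angle b passed over target (handles wrap-around)
boolean crossed(float a, float b, float target) {
  if (a <= b) return target > a && target <= b;
  return target > a || target <= b;   // wrapped past TWO_PI
}
```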

A long way to go with this, at present I’m starting to develop graphics for it, so have to jump into OpenGL at the deep end.  More updates to come…

Module Final Documentation

3D Disco Real Time
Basic Techniques of Interactive Media – DMS8002 – module documentation
Andrew Nixon

Introduction

Brief: Create a graphic engine which will display anaglyphic visuals which can be generated / controlled in real time.

What are anaglyph stereoscopic graphics?
Two versions of the same 3D scene are rendered, representing what each eye would see, and a colour filter is then applied to each render. Special glasses are worn which filter the content so that each eye only receives the view intended for it – the cyan side only allows the right-eye view through and vice versa. As the eyes each receive their own version of the same ‘scene’, the brain is tricked into thinking that the flat screen is actually 3D.

3D Disco is a project that I have been working on for a few years – it’s a club night AV performance, pretty populist and fun, where all the visuals are in anaglyphic 3D. We’re expanding what we do and hope to move more into full festival installations, so having elements where people can interact with the visuals is really important – this project is the first step towards that.

Technical concept

Initially I aimed to sketch this project in Processing and then port it across to openFrameworks, but I ended up keeping it within Processing, for reasons I will discuss later, whilst using MaxMSP as a control interface. My basic principle was to have 2 separate cameras within the environment which would render the scene for each eye, and then tint the output of each so that the colour filters of the glasses would produce the 3D effect. I then wanted to be able to have different ‘scenes’ accessible so that the basic idea of a live tool was in place.

Appraisal

Working through the problems that I had with Processing was invaluable for my personal development, and I would go down this route again, but I definitely want to move the project to openFrameworks for the power that it offers. Processing seems to deal with OpenGL quite strangely, and most of the problems stemmed from this. I used OpenGL because I wanted the speed and quality of visuals it provides as opposed to P3D, and glColorMask was a really good way of tinting the colour of each camera view.
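For reference, the core of the approach looks something like the stripped-back sketch below, assuming Processing 1.x and its OPENGL renderer – two camera positions, one per eye, with glColorMask restricting each pass to the red or cyan channels. The eye separation and the box scene are just placeholders, not the actual 3D Disco code.

```java
// Minimal anaglyph sketch, assuming Processing 1.x with the OPENGL renderer
// and JOGL access via beginGL()/endGL(). Eye separation and scene are illustrative.
import processing.opengl.*;
import javax.media.opengl.*;

float eyeSep = 20;   // distance between the two virtual cameras (placeholder value)
float angle  = 0;    // simple rotation so the depth effect is visible

void setup() {
  size(800, 600, OPENGL);
}

void draw() {
  background(0);
  angle += 0.01;

  PGraphicsOpenGL pgl = (PGraphicsOpenGL) g;

  // Left eye: write only the red channel
  GL gl = pgl.beginGL();
  gl.glColorMask(true, false, false, true);
  pgl.endGL();
  camera(width/2 - eyeSep/2, height/2, 500, width/2, height/2, 0, 0, 1, 0);
  drawScene();

  // Clear depth so the right-eye pass isn't occluded by the left-eye render,
  // then write only green + blue (cyan)
  gl = pgl.beginGL();
  gl.glClear(GL.GL_DEPTH_BUFFER_BIT);
  gl.glColorMask(false, true, true, true);
  pgl.endGL();
  camera(width/2 + eyeSep/2, height/2, 500, width/2, height/2, 0, 0, 1, 0);
  drawScene();

  // Restore full colour writes for anything drawn afterwards
  gl = pgl.beginGL();
  gl.glColorMask(true, true, true, true);
  pgl.endGL();
}

void drawScene() {
  pushMatrix();
  translate(width / 2, height / 2, 0);
  rotateY(angle);
  noFill();
  stroke(255);
  box(150);
  popMatrix();
}
```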

Using MaxMSP was good in 2 areas – it provided a control interface, as I couldn’t get the controlP5 library working within my sketch, and it also meant I could explore OSC (Open Sound Control) as a system for externally controlling the graphics. This worked really well and I was able to have sound-reactive and MIDI controls routed straight into Processing. This also gives a lot of possibilities for the expansion of the project, as many other interfaces could be used and parameters passed using OSC.
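The Processing side of the OSC routing is pretty minimal – something along the lines of the sketch below, assuming the oscP5 library and a made-up ‘/scale’ address coming out of the Max patch. On the Max side, a [udpsend 127.0.0.1 12000] object fed a message like ‘/scale $1’ would be enough to drive it.

```java
// Minimal OSC receiver in Processing, assuming the oscP5 library.
// The '/scale' address and port 12000 are placeholders for whatever
// the Max patch actually sends.
import oscP5.*;
import netP5.*;

OscP5 osc;
float sceneScale = 1.0;   // parameter driven from MaxMSP

void setup() {
  size(800, 600, P3D);
  osc = new OscP5(this, 12000);   // listen on port 12000
}

void oscEvent(OscMessage msg) {
  // Route incoming messages to parameters by address pattern
  if (msg.checkAddrPattern("/scale") && msg.checkTypetag("f")) {
    sceneScale = msg.get(0).floatValue();
  }
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  scale(sceneScale);
  rotateY(frameCount * 0.01);
  noFill();
  stroke(255);
  box(100);
}
```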

Conclusion

Looking at what I set myself as a brief, I have not progressed as far as I had initially hoped, but I am happy overall with what has been achieved. For it to be a useable tool much more refining has to be done, but most of the principles are in place – I can control parameters in real time, which means that this can become part of an interactive installation or have other things affect it, such as the audio-reactive part. I am happy with the skills I have developed in Processing (starting with zero knowledge), although there are many areas I haven’t touched on yet which I want to.

Future development

Importing images – this is something which proved a huge hurdle in using Processing; it basically didn’t work. I need to sort this so that flat graphics can be brought in and manipulated in 3D space – most of 3D Disco is based on this principle.
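For the record, the textbook route I’ll keep hammering at is a textured quad – load the flat graphic and map it onto vertices that can then be moved and rotated in 3D space. A bare-bones sketch of what should work, with ‘slide.png’ standing in for whatever artwork would sit in the sketch’s data folder:

```java
// Rough sketch of the image-in-3D-space idea: load a flat graphic and texture it
// onto a quad that can be moved/rotated in 3D. "slide.png" is a placeholder name.
PImage img;

void setup() {
  size(800, 600, P3D);
  img = loadImage("slide.png");
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  rotateY(frameCount * 0.01);

  // Texture the image onto a quad using pixel-based texture coordinates
  noStroke();
  beginShape(QUADS);
  texture(img);
  vertex(-200, -150, 0, 0,         0);
  vertex( 200, -150, 0, img.width, 0);
  vertex( 200,  150, 0, img.width, img.height);
  vertex(-200,  150, 0, 0,         img.height);
  endShape();
}
```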

Further work on the import of objects and drawing 3D shapes, perhaps look at more ways to animate and create scenes which can be manipulated.

And the main one – port everything across to openFrameworks. I will be able to use real OpenGL in a framework which is designed to use it, and move away from the problems of running it in Processing. It will be a lot of work – OpenGL is vastly more complicated to work with compared to the simplicity of Processing – but OF is a major part of my personal development whilst studying the Masters, and this project will be both a great learning tool and a really useful exercise in reaching its potential.

Below is a zip file of the final Processing and Max files – the above text is also in .doc and .pages formats. Please also click ‘basic techniques’ tag for all previous posts which have examples of the development.

an_Anaglyph_Final.zip

anaglyph attempt 1 – errrr…..

ok, getting somewhere, hacking bits of code about and trying combinations of different sketches….

My thinking is that if I can get something working I can learn how it works well enough to tweak it – a bit arse about face, but it seems to work for me.

Playing with OpenGL calls – finding documentation from a Processing point of view is a bit hard, might go back to P3D and play a bit more there first.

Attempt numbers one and two – ok, I am getting 2 cameras working together, showing 2 versions of the 3D object, but trying to integrate the OpenGL colour mask then sends things into meltdown. A bit more playing and I should have something basic working, I hope…

Anaglyph 1
Anaglyph 2

recognition….

:)

so, 3D Disco has been a hell of a lot of work over the last few years, but being confirmed at Glastonbury this year has finally caught the eye of the press – almost a full-page article, page 8, Independent 4/5/09…

http://www.independent.co.uk/arts-entertainment/music/news/technology-brings-a-new-dimension–to-the-glastonbury-experience-1678534.html

rock on

Oops…

so, dunno what I did, but I lost the changes anyway – Processing + Max + other shit + making code changes = OSX death…

***update – it did it again – hmmm, fresh install of OSX needed sooner rather than later…***