
Exhibition Documentation

Here is an extended documentation of my installation. Thanks to everyone involved.

Videos

 

Work

Rear Window

The main work relating to the portfolio is Rear Window, an homage to Hitchcock’s film, which aesthetically influenced my work. It is a POV experience in voyeurism: the user looks at the back yard of an apartment building, trying to look through the seemingly opaque windows. The footage behind the opaque windows is sourced from insecure CCTV cameras looking into people’s own homes, revealing the Trojan horse of using surveillance to boost security.

Scouting for a suitable location to film was time-consuming; in the end several videos were shot in different locations, chosen according to my daily routines, each a POV out of my window. As discussed in chapter 6, the need for engaging content prompted testing with several different scenes.

A continuous long-take video from a static point of view plays as the main footage, but only a relatively small crop is usually seen; users can control, with body movement, which part of the whole image is shown on screen.

A Kinect camera tracks movement on the x, y and z axes, and this data is used to control the panning, tilting and zooming of the footage in PD. Movement on the x axis controls the panning, the y axis the tilting and the z axis the zooming. When zooming close enough on the opaque windows, the data coming from the z axis controls the alpha blending with the footage underneath: the closer the user gets, the more of the footage is revealed. The PD patch uses the ‘pix_freenect’ and ‘pix_openni’ PD externals developed by Matthias Kronlachner (available at https://github.com/kronihias). The ‘pix_crop’ PD object was initially used for controlling the crop size but unfortunately had issues with layered videos of different sizes, mainly because for ‘pix_crop’ x0,y0 is the bottom-left corner whereas x0,y0 in the GEM window is the centre of the canvas; so the ‘translateXYZ’ object was preferred instead.
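Since the real implementation is a PD patch, the following is only a minimal Python sketch of the mapping logic described above; the ranges, thresholds and function names are illustrative assumptions, not values taken from the actual patch.

    # Rough sketch of the patch logic: normalised Kinect data (-1..1) drives
    # pan/tilt/zoom, and past a zoom threshold the z axis also drives the alpha
    # of the CCTV layer hidden behind the opaque window.

    def map_range(v, in_lo, in_hi, out_lo, out_hi):
        # linear rescale with clamping, roughly what 'autoscale' does in PD
        v = max(in_lo, min(in_hi, v))
        t = (v - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)

    def frame_controls(x, y, z):
        pan = map_range(x, -1.0, 1.0, -4.0, 4.0)    # x axis -> panning (translateXYZ)
        tilt = map_range(y, -1.0, 1.0, -3.0, 3.0)   # y axis -> tilting
        zoom = map_range(z, -1.0, 1.0, 1.0, 3.0)    # z axis -> zooming in
        # only once the user is close enough does z start to reveal the CCTV layer
        alpha = map_range(z, 0.2, 1.0, 0.0, 0.3) if zoom > 2.0 else 0.0
        return pan, tilt, zoom, alpha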

Three main problems needed to be overcome: optimising video for PD, smoothing and scaling the data coming from the Kinect, and implementing multi-user interaction.

Working with HD footage in PD was always going to be tricky. Finding a balance between resolution, compression and size was going to be the key to smooth playback. Various tests were carried out on all three issues. The first thing I looked into was compression, using a Mac Mini. The H.264 codec provides higher compression and smaller file sizes at the expense of CPU usage; next the Apple ProRes codec was tested, a very popular choice for non-linear editing. File sizes were significantly larger but the compression was far less CPU-demanding. Out of the several flavours of ProRes, 422 LT proved to offer the best balance between file size and playback smoothness. The PD GEM window could handle smooth playback at 720p resolution at 25 FPS. Upscaling to 1080p resulted in less smooth playback, at around 16-18 FPS. Adding the CCTV footage behind the windows led to a significant drop in playback smoothness and in the general responsiveness of the patch. M. Kronlachner suggests that for “complicated applications performance problems using PD may occur. A lower level language like openFrameworks or Cinder could be a solution for CPU intensive applications” (Kronlachner, 2013, p. 39). An additional option could be using the ‘gemframebuffer’ object to render the small videos into a buffer. A drastic measure, but one which could improve CPU performance, would be using a photograph (pix_image) rather than video.

The next problem that needed to be solved was smoothing and scaling the data stream coming from the Kinect. ‘Pix_openni’ can output real-world coordinates or values normalised from -1 to 1. The normalised values seem more compatible with the ‘autoscale’ object. Smoothing was done using the ‘line’ object.
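As a rough illustration of that smoothing step (PD’s ‘line’ object ramps linearly towards each new target over a set time), here is a small Python sketch; the ramp and tick times are arbitrary assumptions.

    # Per-tick linear ramp towards the latest target, mimicking PD's 'line' object.
    class LineRamp:
        def __init__(self, ramp_ms=100, tick_ms=20):
            self.value = 0.0
            self.step = 0.0
            self.steps_left = 0
            self.ramp_ticks = max(1, ramp_ms // tick_ms)

        def set_target(self, target):
            # recompute the per-tick increment whenever a new Kinect value arrives
            self.step = (target - self.value) / self.ramp_ticks
            self.steps_left = self.ramp_ticks

        def tick(self):
            if self.steps_left > 0:
                self.value += self.step
                self.steps_left -= 1
            return self.value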

Another problem relating to communication between PD and the Kinect was that, when a user moved out of the range of the depth camera, ‘pix_openni’ would send a 0.5 value for each axis. This was solved with the ‘change’ object, so the patch keeps outputting the last value received before the user was lost.
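The dropout handling amounts to holding the last good reading whenever the sentinel value arrives; a minimal Python sketch, assuming all three axes report exactly 0.5 when tracking is lost:

    LOST = 0.5  # value pix_openni reports on every axis when the user is lost

    class HoldLast:
        def __init__(self):
            self.last = (0.0, 0.0, 0.0)

        def filter(self, x, y, z):
            if x == LOST and y == LOST and z == LOST:
                return self.last       # user out of range: repeat the previous reading
            self.last = (x, y, z)
            return self.last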

Implementing multi-user interaction was problematic due to the way the content relates to the theoretical concerns. The original film tells the story from one point of view, and the whole set design was built around this idea. I struggled to find a rationale for adding several users other than using a multiplier to control the transparency (the z data from one user can only reach 0.3, and adding the z data from two other users brings it to 0.9, almost total transparency), thus encouraging some kind of collaboration between users in order to fully reveal the CCTV footage behind the window.
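A minimal sketch of that multiplier idea, assuming each tracked user’s z contribution is capped at 0.3 before the contributions are summed into the alpha of the hidden layer:

    def combined_alpha(user_z_values, per_user_cap=0.3):
        # each user can only push the blend so far; full reveal needs several users
        total = sum(min(max(z, 0.0), per_user_cap) for z in user_z_values)
        return min(total, 1.0)

    print(combined_alpha([0.3]))             # one user alone: 0.3
    print(combined_alpha([0.3, 0.3, 0.3]))   # three users together: ~0.9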

Video:

https://www.youtube.com/watch?v=L7hvA4DFSm0

 

Kinect User Tracking Synth

Further testing with the Kinect and PD. I used two participants, with user tracking via pix_openni and the expr object to control a synth. My approach is to start simple, test thoroughly, then try more complicated setups. Unfortunately, due to the narrow space, movement was constricted, especially on the x axis; for future tests I will look for a larger space and maybe three participants.
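The actual mapping lives in an ‘expr’ object in PD; as a loose Python sketch of the kind of expression used, assuming hand height drives pitch (the base frequency and octave range are invented for illustration):

    import math

    def hand_to_freq(y, base_hz=110.0, octaves=3.0):
        # y is the normalised hand height (-1..1); spread it across a few octaves
        t = (y + 1.0) / 2.0
        return base_hz * math.pow(2.0, t * octaves)

    print(round(hand_to_freq(-1.0)))  # lowest note, 110 Hz
    print(round(hand_to_freq(1.0)))   # highest note, 880 Hz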

Testing Kinect with PD

Starting work on the final project, right now in the testing stage with the Kinect and PD. There are two methods for getting tracking data into PD: Synapse + OSC messages, and the pix_openni PD object developed by Matthias Kronlachner.
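For the Synapse route, the joint positions arrive as OSC messages that any OSC-capable environment can read; a hedged Python sketch using the python-osc library (the port and the ‘/righthand_pos_body’ address are assumptions based on Synapse’s usual defaults and may need checking):

    from pythonosc import dispatcher, osc_server

    def on_right_hand(address, x, y, z):
        # Synapse streams each joint position as three floats per message
        print(address, x, y, z)

    disp = dispatcher.Dispatcher()
    disp.map("/righthand_pos_body", on_right_hand)

    # Synapse typically sends to localhost on port 12345 (assumption)
    server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 12345), disp)
    server.serve_forever()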

Conclusions so far: Synapse needs a calibration pose for skeleton tracking, while pix_openni works without it and is more or less as accurate. I’m inclined towards pix_openni as it’s suited to a more natural interaction. An interaction workshop is needed to research the most natural way of interacting with the work.