
Project work for ‘HSS810 Creative Digital Practice’


After so much struggle with the research part of the project, I ended up doing the idea I had always had in my head for this project (it was sort of my plan B). As I wanted to do something different from this idea (I try to avoid working with first ideas, although sometimes I do), I spent a lot of time reading books, papers and research projects about dreams and about maps from other approaches. Unfortunately, I didn't find enough useful material to produce a creative project with a logical and serious foundation.

The project is the creation of people's mental maps of Madrid (Spain), my hometown. During my Christmas break in Madrid, I asked people to draw their mental map of Madrid (just the town centre). They had to draw the picture they had of Madrid in their minds without checking Google Maps or any other kind of map. I always told them to draw the map as if it were for a stranger who had never been to Madrid and would use it as a guide to walk through Madrid's streets.

Here you can see the whole presentation.



Exhibition Documentation

Here is an extended documentation of my installation. Thanks to everyone involved.

3x CCTV live stream in PD GEM

Modified the Python script to allow multiple image capture; here's an example with three CCTVs.
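
The modification essentially just loops over a list of camera addresses; a minimal sketch with placeholder URLs (not the actual cameras used):

import time
import urllib.request

# Placeholder snapshot addresses for three cameras.
CAMERAS = [
    "http://example.com/cam1.jpg",
    "http://example.com/cam2.jpg",
    "http://example.com/cam3.jpg",
]

while True:
    # Write one file per camera; the PD patch reads each as its own texture.
    for n, url in enumerate(CAMERAS, start=1):
        urllib.request.urlretrieve(url, "frame%d.jpg" % n)
    time.sleep(1)  # refresh all three images roughly once a second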

Python scraping and PD GEM

I found a workaround for streaming webcams into PD GEM: a Python script downloads an image every second, and at the same time the PD patch refreshes the image every second. It's not a smooth framerate, but it's live.
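
For reference, a minimal sketch of the scraping side, assuming the camera exposes a static snapshot URL (the address and filename are placeholders):

import time
import urllib.request

# Placeholder address; many insecure CCTV cameras expose a static
# JPEG endpoint that returns the current frame.
SNAPSHOT_URL = "http://example.com/snapshot.jpg"

while True:
    # Overwrite the same file so the PD patch can simply re-read it
    # every time it refreshes the texture.
    urllib.request.urlretrieve(SNAPSHOT_URL, "frame.jpg")
    time.sleep(1)  # one image per second, matching the PD refresh rate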

Videos


Work

Rear Window

The main work relating to the portfolio is Rear Window, an homage to Hitchcock's film, which aesthetically influenced my work. It is a POV experience in voyeurism: the user looks at the back yard of an apartment building, trying to look through the seemingly opaque windows. The footage behind the opaque windows is sourced from insecure CCTV cameras looking into people's own homes, revealing the Trojan horse of using surveillance to boost security.

Scouting for a suitable location to film was time-consuming; in the end, several videos were shot in different locations, chosen according to my daily routines: a POV out of my window. As discussed in chapter 6, the need for engaging content prompted testing with several different scenes.

A continuous long-take video from a static point of view plays as the main footage, but only a relatively small crop is usually seen; the users can control, with body movement, which part of the whole image is shown on screen.

A Kinect camera tracks movement on the x, y and z axes, and this data is used to control the panning, tilting and zooming of the footage in PD. Movement on the x axis controls the panning, the y axis the tilting and the z axis the zooming. When zooming close enough on the opaque windows, the data coming from the z axis controls the alpha blending with the footage underneath: the closer the user gets, the more of the footage is revealed. The PD patch uses the 'pix_freenect' and 'pix_openni' PD externals developed by Matthias Kronlachner (available at https://github.com/kronihias). The 'pix_crop' PD object was used for controlling the crop size but unfortunately had issues with layered videos of different sizes, mainly because for 'pix_crop' x0,y0 is the bottom-left corner, whereas x0,y0 in the GEM window is the centre of the canvas; so the 'translateXYZ' object was preferred instead.
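
For reference, the mismatch between the two coordinate systems comes down to a simple offset; a minimal sketch of the conversion (the function and argument names are mine, not from the patch):

def gem_to_pixcrop(x_gem, y_gem, width, height):
    # GEM window coordinates have their origin at the centre of the canvas,
    # while pix_crop expects the origin at the bottom-left corner.
    return x_gem + width / 2.0, y_gem + height / 2.0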

Three main problems needed to be overcome: optimising the video for PD, smoothing and scaling the data coming from the Kinect, and implementing multi-user interaction.

Working with HD footage in PD was always going to be tricky. Finding a balance between resolution, compression and file size was going to be the key to smooth playback, and various tests were carried out on all three fronts. The first thing I looked into was compression, using a Mac Mini. The H.264 codec provides higher compression and smaller file sizes at the expense of CPU usage; next the Apple ProRes codec was tested, a very popular choice for NLE work. File sizes were significantly larger, but the compression was far less CPU-demanding. Out of the several flavours of ProRes, 422 LT proved to offer the best balance between file size and playback smoothness. The PD GEM window could handle smooth playback at 720p resolution at 25fps; upscaling to 1080p resulted in less smooth playback at around 16-18fps. Adding the CCTV footage behind the windows led to a significant drop in playback smoothness and in the general responsiveness of the patch.

M. Kronlachner suggests that for "complicated applications performance problems using PD may occur. A lower level language like openFrameworks or Cinder could be a solution for CPU intensive applications" (Kronlachner 2013, p.39). An additional option could be using the 'gemframebuffer' object to render the small videos into a buffer. A drastic measure, but one which could improve CPU performance, would be using a photograph ('pix_image') rather than video.
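
For reference, a transcode to ProRes 422 LT at 720p can be scripted along these lines; a minimal sketch assuming ffmpeg is available, with placeholder filenames (this is not the exact workflow used):

import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mov",
    "-vf", "scale=1280:720",                 # 720p played back smoothly at 25fps
    "-c:v", "prores_ks", "-profile:v", "1",  # profile 1 = ProRes 422 LT
    "output.mov",
], check=True)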

The next problem that needed solving was smoothing and scaling the data stream coming from the Kinect. 'pix_openni' can output either real-world coordinates or coordinates normalised from -1 to 1; the normalised values seem more compatible with the 'autoscale' object. Smoothing was done using the 'line' object.
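
Outside PD, the same scale-and-smooth step can be sketched in a few lines; a rough Python analogue of 'autoscale' plus a 'line'-style ramp (the ranges are made up for illustration):

def autoscale(value, in_min=-1.0, in_max=1.0, out_min=0.0, out_max=1.0):
    # Map a normalised pix_openni value onto a target range,
    # similar to what PD's 'autoscale' object does.
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def ramp(current, target, step=0.1):
    # Move a fraction of the way towards the target each frame rather
    # than jumping, a crude stand-in for the 'line' object's ramp.
    return current + (target - current) * step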

Another problem relating to the communication between PD and the Kinect was the case where a user moved out of range of the depth camera: 'pix_openni' would then send a 0.5 value for each axis. This was solved with the 'change' object, which still outputs the last value received before the user was lost.
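
In pseudocode, the fix amounts to holding on to the last good value whenever the sentinel appears; a minimal sketch (naively treating any exact 0.5 as 'user lost'):

LOST = 0.5  # value pix_openni sends on each axis when tracking is lost

def hold_last(value, last_good):
    # Keep the previous value when the sentinel arrives, roughly
    # what the 'change'-based fix achieves in the patch.
    return last_good if value == LOST else value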

Implementing multi-user interaction was problematic because of how the content relates to the theoretical concerns: the original film tells the story from one point of view, and the whole set design was built around this idea. I struggled to find a rationale for adding several users other than using them as a multiplier in controlling the transparency (the z data from one user can only take the alpha to 0.3, and adding the z data from two other users gets it to 0.9, almost total transparency), thus encouraging some kind of collaboration between users in order to fully reveal the CCTV footage behind the window.
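
The multiplier logic is simple enough to show as arithmetic; a minimal sketch of the collaborative alpha (assuming each user's z contribution is already scaled to 0-1):

def window_alpha(z_values, per_user_cap=0.3):
    # Each tracked user contributes at most 0.3 to the window's alpha,
    # so three users together reach about 0.9, almost full transparency.
    return sum(min(z, per_user_cap) for z in z_values)

# window_alpha([0.3, 0.3, 0.3]) -> 0.9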

Video:

https://www.youtube.com/watch?v=L7hvA4DFSm0


Inspiration

Constable: The Great Landscapes, X-Ray Examination, Tate Britain, August 2006
The X-Ray installation is a life-size projection of "Salisbury Cathedral from the Meadows" (1831) and consists of video of the painting layered with an X-Ray examination of the same painting. Movement of the audience in front of the painting triggers the revelation of the X-Ray under-layer. It was designed as a multiuser experience: the more people stood in front of the painting, the more of the X-Ray layer was revealed. The interaction was simple and engaging, and that was the basis of the success of the piece: "The success of the X-Ray Examination primarily arises from the intuitive form of engagement facilitated by the gestural interface." (Lehn, Hindmarsh 2007, 1486)

Inspiration


Here's a look at several media and artworks that inspired the projects in one way or another, according to the three main concerns: theoretical, aesthetic and technical.

Alex May Shadows of Light

Alex May's Shadows of Light video installation uses Kinect and Processing to create slow silhouette portraits of audience members; the work "explores the concept of 'slow interaction': rather than responding to quick movement, it requires viewers to slow down and stand still, when it will slowly take the viewers silhouette and uses it as a digital stencil" (May, 2009, online). I enjoyed the idea of slow interaction, as it denies the instant gratification that seems so pervasive in our culture.

Kyle McDonald Exhausting a Crowd

McDonald describes how his work "speaks to the potential of a perfectly automated future of surveillance, enabled by a distributed combination of machine and human intelligence. A beautiful record of the energy present in shared space, as well as a disturbing look into the potential for control in a dystopian environment." (McDonald 2015, online) It is a 12-hour-long real-time video of Piccadilly Circus in London. The user has the ability to zoom in on the footage at any point and add their own comments, as an exercise in guessing and interpreting the actions of the people in the footage. Although shot in 4K, it certainly has the look of CCTV footage, especially when zoomed in. It feels like the digital media version of John Smith's film "The Girl Chewing Gum".

A. Hitchcock Rear Window

Film critic Roger Ebert describes how the film's protagonist "is trapped in a wheelchair, and we're trapped, too–trapped inside his point of view, inside his lack of freedom and his limited options. When he passes his long days and nights by shamelessly maintaining a secret watch on his neighbours, we share his obsession." (Ebert, 2000, online)

One thing I have in common with the film's protagonist is my background in photography. Generally, the representation of photographers in Hollywood films is negative, ranging from voyeurs to murderers; BlackLab's video "Photographers" (Vimeo, online) is a collage of clips from various films, and it doesn't paint a flattering picture. In Rear Window the photographer is a voyeur, and the whole film is presented from his point of view; the audience shares it as he uses his camera's telephoto lens to spy on his neighbours. Ebert goes on to say: "It's wrong, we know, to spy on others, but after all, aren't we always voyeurs when we go to the movies? Here's a film about a man who does on the screen what we do in the audience–look through a lens at the private lives of strangers." (Ebert, 2000, online)

Weather Garden Workshop 1


I ran the Weather Garden workshop a few days ago.

Date: 15/08/2015, 3pm-6pm

Place: Magnet Court

Number of participants: 12

I invited 12 people to participate in my Weather Garden workshop, and I introduced the basic functions of this interactive installation before the visitors took part. In order to achieve a better effect, I simulated the weather phenomena: for instance, I used a flashlight in place of sunlight and water drops in place of rain drops; only in this way could the visitors clearly experience the weather changing.

I define Weather Garden as an emotional interactive installation that gives visitors a unique experience of the weather changing in another place or country. By taking advantage of wireless transmission, visitors can easily get the weather conditions and images from anywhere in the world where they have special memories.

Weather Garden Practice 2: Wind anemometer test



These days I have been busy with the selection of a wind speed sensor, because the previous one was always out of stock. I wrote to Timothy Shaw for some advice on choosing and testing a wind anemometer, and he suggested the N96GY anemometer as a backup option. I bought one from Maplin and started to test it. Firstly, I need to connect it to the Arduino and get a reading every time it spins. Secondly, I have to do some maths to work out how many times it spins in a given timeframe.

The test method is very simple: you just need a fan with three or more speed settings, and you blow the anemometer at each speed in turn.

During the anemometer tests, I only succeeded after many failures; I tried many test sketches but could not get the right spin-speed value from the Arduino serial monitor.

Version 1

void setup() {
  // Open the serial port for monitoring the readings.
  Serial.begin(115200);
}

void loop() {
  // Poll the analog pin once a second; the anemometer reads near 1023
  // most of the time, so individual spins are missed at this rate.
  int val = analogRead(A4);
  Serial.println(val, DEC);
  delay(1000);
}

Outcome:

1023

1023

1009

1023

1023

1023

974

……

Version 2

void setup() {
  Serial.begin(115200);
  delay(2000);

  // Count edges on pin 2 via external interrupt 0.
  pinMode(2, INPUT);
  attachInterrupt(0, count, CHANGE);
}

// This version never counted anything: with a plain INPUT pin and no
// pull-up resistor the input floats, and 'i' should also be declared
// volatile since it is modified inside an interrupt handler.
int i = 0;

void count() {
  i++;
}

void loop() {
  Serial.println(i);  // edges counted during the last interval
  i = 0;
  delay(5000);
}

Outcome:

0

0

0

0

0

……

Finally, I got the right spin-speed value by modifying the electric circuit (adding a 10K resistor) and the Arduino code.

Code:
uint16_t start = 0;         // set below but not actually used
volatile uint16_t rev = 0;  // edge count, updated inside the interrupt

void setup() {
  Serial.begin(115200);
  delay(2000);

  start = millis();
  // The internal pull-up (together with the external 10K resistor)
  // gives the reed switch clean edges to count on pin 2.
  pinMode(2, INPUT_PULLUP);
  attachInterrupt(0, count, CHANGE);
}

void count() {
  rev++;
}

void loop() {
  Serial.println(rev);  // edges counted in the last 5 seconds
  rev = 0;
  delay(5000);
}

Outcome:

First gear:

44

64

55

53

41

36

26

33

42

……

Second gear:

122

105

95

105

111

113

……

Third gear:

164

168

160

167

185

164

……

In the next step, I will try to work out how many times the wind sensor spins in a given timeframe.
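
For what it's worth, the maths for that step should be a one-liner; a sketch assuming the reed switch closes once per revolution, so attachInterrupt(..., CHANGE) fires twice per revolution (one falling and one rising edge); this is an assumption that would need checking against the N96GY itself:

EDGES_PER_REV = 2    # assumed: one switch closure, i.e. two edges, per revolution
SAMPLE_SECONDS = 5   # the sketch prints the count every 5000 ms

def revs_per_second(count):
    return count / EDGES_PER_REV / SAMPLE_SECONDS

# e.g. a serial reading of 120 -> 120 / 2 / 5 = 12 revolutions per second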