
Project Development

Brother

The process of a spectator spending hours in front of a single image, absorbed in its narrative, is being replaced in the digital era by a process in which the spectator expects “the stimulus of continual visual transformation” (Vaughan, 2005:5). I want to explore this new need of the viewer to be continually stimulated as inspiration for a new responsive Digital Facial Image (DFI). My hope is that this DFI will respond to the viewer’s presence and actions, creating a unique responsive interaction.

Brother will be produced using a practice-based method, utilising and reflecting upon my previous projects, Little Brother (fig.1) and Emblematic (fig.2). Both of these projects were carried out using a practice-based research method. These DFI systems explore separate interactions that the spectator can experience with a responsive artwork, but they share a common thread: Little Brother and Emblematic both explore the spectator’s relationship with computers and their environment, and invite the spectator to become a collaborator in the work.

Little Brother is a DFI system that follows the spectator around a room. The system uses a mixture of hardware and software, including Processing, a hidden webcam and a monitor. The work visualises the concerns of George Orwell’s 1984, highlighting the modern trend of CCTV and the invasion of privacy. These concerns are intensified and given new meaning by the use of a female figure as the source of confrontation. Little Brother explores the role of technology in modern society by deliberately raising questions of privacy. Art spaces traditionally allow the viewer to look at and study the artwork, not have the artwork study them. The resulting confrontation of being watched, and the role reversal within the traditional gallery space, has an unsettling effect upon the spectator: they realise that they are being watched and that they themselves are also watching.

Fig. 1 Little Brother

Whilst developing Little Brother I experimented with numerous combinations of technology. I researched and tested several sensors, including IR, sonar and light, all of which worked with differing degrees of success. It was, however, the use of a webcam as a sensor that both worked more accurately and embodied the sense of surveillance I wished to achieve with the project. During the making and display of Little Brother I gathered feedback from spectators of the system, and these insights helped develop the piece. The main feedback was that the spectator felt they needed to understand the relationship between themselves and the screen instantly; this highlighted the need to keep the themes simple and the surveillance obvious. It also raised issues with the system’s reliability. It became clear that faults occurred when more than one spectator was within the field of view: the face detection library became confused as to which spectator to prioritise, which created a fault in the process. These tests also showed that when the spectator left the webcam’s field of vision the system lost its reference point and therefore repeated the last moments. This was reported to affect the spectator’s experience of the piece, as they could still see the DFI and understood this as a fault. However, when these limitations were not experienced by the spectator, the piece successfully visualised the act of being watched and allowed the spectator to explore this relationship.

For the production of Brother I have reflected upon the feedback gathered from Little Brother. I have devoted time to researching other methods to achieve a smoother tracking mechanism that will allow Brother to locate the spectator’s position more accurately, thus improving the spectator’s experience. I wish to continue to use a webcam as the main sensor of the piece, as it visualises the concept of surveillance. It was also raised during feedback that the webcam was instantly recognisable to the majority of spectators, and that they instinctively knew they were being watched. These details have therefore pushed my research towards other face detection libraries that can be implemented with Processing, including OpenCV, JMyron and FaceDetect.
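As a starting point for testing these libraries, the following is a minimal face detection sketch, assuming the hypermedia OpenCV library for Processing and its stock frontal-face cascade; the library and settings eventually used in Brother may well differ.

import hypermedia.video.*;   // OpenCV wrapper for Processing (assumed library)
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(640, 480);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                   // the webcam remains the sensor
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // standard frontal-face cascade
}

void draw() {
  opencv.read();                                   // grab the current webcam frame
  image(opencv.image(), 0, 0);
  Rectangle[] faces = opencv.detect();             // detect all faces in the frame
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}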

Emblematic, a collaboration with Adrian Park and Northern Stage, uses a Twitter feed to ascertain how society is feeling. By monitoring this feed, the emotions being expressed in the tweets are ascertained. These emotions are then represented visually by playing videos that reflect the sentiments of the text. The videos match Ekman’s (1967) six universally recognised emotional states: anger, disgust, fear, happiness, sadness and surprise. Emblematic works using a mix of hardware and software: a tweet feed is passed through an API to rate the emotional value of the text, and this then feeds a Processing sketch that selects a relevant video from a pre-recorded library of slow-motion video clips. Emblematic uses the geo-tags of every Newcastle around the globe, of which there are 36, to filter the tweets. This means that the emotions of each Newcastle are represented, providing Emblematic with a global data source.

Fig. 2 Still from Emblematic

The resulting experience of Emblematic has several effects. By slowing the emotional expressions down, we have allowed the spectator to glimpse the nuances that we as humans make when we communicate. The scientific term for these subtle nuances is ‘emblems’, and they are universally recognised. The spectator freely interacts with the system and understands the emotional response that the DFI is displaying. These emotional reactions are innate to all humans, and therefore the viewer can be said to be looking into a mirror of themselves. Every individual makes similar movements in order to communicate emotion through their facial expressions, and so each spectator will understand them and recognise themselves within them.

During the production of Emblematic, user studies were carried out that allowed spectators to interact with the system in its early stages. These studies highlighted the need for the spectator to understand what the display was depicting, as well as to know that their contribution (tweet) had been received. These suggestions led to the development of the project, and with each user test a new version evolved. The feedback was positive, with emphasis placed on the ease with which the spectator could reflect upon their own mood, as they could recognise that of the DFI.
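To illustrate the video-selection stage of the Emblematic pipeline described above, the sketch below maps an emotion label (as returned by the sentiment-rating step) to one of six pre-recorded clips. It is only a minimal sketch: the file names are placeholders, the showEmotion() helper is hypothetical, and it uses processing.video’s Movie class where Emblematic itself used GSVideo.

import processing.video.*;

String[] emotions = { "anger", "disgust", "fear", "happiness", "sadness", "surprise" };
HashMap<String, Movie> clips = new HashMap<String, Movie>();
Movie current;

void setup() {
  size(1024, 576);
  // one slow-motion clip per Ekman emotion (file names are placeholders)
  for (String e : emotions) {
    clips.put(e, new Movie(this, e + ".mov"));
  }
  showEmotion("happiness");   // in Emblematic this label comes from the tweet-rating API
}

void showEmotion(String e) {
  if (current != null) current.stop();
  current = clips.get(e);
  current.loop();
}

void movieEvent(Movie m) {
  m.read();   // read new frames as they arrive
}

void draw() {
  background(0);
  if (current != null) image(current, 0, 0, width, height);
}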

Brother will further my research into the spectator’s interaction with the computer DFI. I want to invite the spectator to use their body to explore their relationship with these DFIs, physically demonstrating their contribution to the artwork as a collaborator. I want to achieve what Merleau-Ponty (2002) describes as our conscious experience becoming sedimented into our bodily gestures. I wish to achieve this by bringing together elements of Little Brother and Emblematic, as well as new ideas brought about through my continuing research and reflective e-journal. During my research into face detection systems I have come across several other viable libraries, and I have been experimenting with tweaking these to recognise facial emotions, such as a smile. This will require further development; OpenCV has thus far proven to be more reliable, with a broader array of options that can be personalised. The integration of face detection along with emotion recognition will allow Brother to evolve its response to the spectator’s presence: for example, if the viewer approaches and smiles at the DFI, the DFI will smile back, providing a stimulus for further dialogue to occur between spectator and machine. By using a DFI I hope to build on the idea that the viewer sees a reflection of themselves within the artwork. The spectator, as previous user tests have shown, will intuitively understand, through their cultural capital, that the DFI system is reacting to their proximity. Rokeby (1995:133) suggests that “an interactive technology is a medium through which we communicate with ourselves – a mirror”. It is this idea of interacting and exploring relationships with ourselves that I wish to explore, reinventing the childhood game of hide and seek.
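As a rough illustration of that smile-and-respond idea, the sketch below switches between two placeholder DFI clips when a smile is detected. It again assumes the hypermedia OpenCV library and OpenCV’s stock haarcascade_smile.xml cascade (the cascade path and clip names are assumptions); in practice the smile cascade would need tuning and should only be run within an already-detected face region.

import hypermedia.video.*;
import processing.video.*;
import java.awt.Rectangle;

OpenCV opencv;
Movie neutralClip, smileClip;   // placeholder DFI clips
boolean smiling = false;

void setup() {
  size(640, 480);
  opencv = new OpenCV(this);
  opencv.capture(width, height);
  opencv.cascade("haarcascade_smile.xml");   // stock OpenCV smile cascade (assumed path)
  neutralClip = new Movie(this, "dfi_neutral.mov");
  smileClip   = new Movie(this, "dfi_smile.mov");
  neutralClip.loop();
  smileClip.loop();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  opencv.read();
  Rectangle[] smiles = opencv.detect();   // crude: any detection counts as a smile
  smiling = smiles.length > 0;
  background(0);
  image(smiling ? smileClip : neutralClip, 0, 0, width, height);
}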

Brother will create responses to the spectator’s presence, allowing a narrative performance to occur between the two parties, human and machine. Brother will be able to respond emotionally to the viewer, inviting the spectator to become a collaborative performer: “the feelings the spectator has for their movements and perceptions in the performance of viewing the artwork are central to their experience” (Van de Vall, 2008:141). Brother will be able to track the spectator through 180 degrees by utilising a bespoke motor and sensor system, allowing a nostalgic performance of hide and seek, or cat and mouse, to occur. If the spectator moves beyond the webcam’s field of view, Brother will pivot and follow them. This pivot system will use servo motors and gears to control a mounted monitor; the servo motors’ position will be controlled using Processing and Arduino communicating via serial. The noise of the motors will instil a greater sense of the machine-human relationship. Utilising facial emblems and recognisable emotional states to develop an emotional story between the spectator and the system will allow me to explore how the spectator responds to emotional stimulus. The feedback from Emblematic showed that the spectator reflects upon their own emotional state and is in turn affected by the emotional state of the DFI. It is this same dialogue that I wish to achieve.
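The Processing side of that serial link could be as simple as the sketch below. This is a minimal sketch under assumptions: the Arduino is taken to be the first device in Serial.list() and to be running its own small sketch that reads one byte and passes it to a servo via the standard Servo library; the detected face’s x position stands in for the real tracking data.

import processing.serial.*;

Serial arduinoPort;
int faceX = 320;   // x position of the detected face (would come from the face detection above)

void setup() {
  size(640, 480);
  // assumes the Arduino is the first serial device listed
  arduinoPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(0);
  // map the face position across the frame to a servo angle of 0 to 180 degrees
  int angle = (int) map(faceX, 0, width, 0, 180);
  arduinoPort.write(angle);   // the Arduino end reads this byte and moves the servo accordingly
}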

The main aims of this project are to:

  • Create a striking piece of digital art that explores the spectator’s interaction with a responsive system

  • Invite the audience to reflect upon the emotional spectrum, empathy, democracy and entitlement

  • Animate a highly visible space

  • Generate greater awareness of experimental digital art amongst the general public by involving them in its creation and exhibiting it in a public space

  • Allow personal experiences of the work and encourage engagement

Other digital artists working with DFIs and related material that investigates responsiveness and emotional interest include Andy Holtin, Tiffany Holmes, Kirsten Geisler, Niklas Roy and David Rokeby. Andy Holtin’s Glance explores the relationship between two DFIs and our role as voyeur: it plays out a narrative of two DFIs being caught looking at each other. However, Glance does not include the spectator, who is left playing a passive role. Tiffany Holmes’ Nosce Te Ipsum utilises the spectator’s movement towards a screen as a tool. The screen depicts a collage, and as the spectator approaches, layers of the collage are peeled away; when all the layers are peeled back, an image of the spectator themselves is revealed at the base of the collage. This embodies the concept that the spectator is integral to the production of a piece of artwork. A limitation of this process is that it does not change between spectators: the narrative is the same, only the final image varies. Kirsten Geisler’s Dream of Beauty series looks into our relationship with the DFI and how we interact and relate with a responsive screen-based work. Niklas Roy’s My Little Piece of Privacy playfully utilises OpenCV to block passersby from peering into his studio: a curtain moves to track the passerby, creating a barrier between his studio and the spectator. Roy creates an interesting motor assembly and movement-tracking system to metaphorically embody his views on privacy. The piece is problematic in that the process attracts attention while trying to avoid it. David Rokeby’s Gathering and Sorting Daemon gathers images of people moving outside the building and then sorts these into colours and shapes. The majority of these people, however, will not know that their image has been captured and incorporated into a piece of art.

Feasibility Study

Several approaches to this project have been explored, yet it would be valuable to further research the use of:

  • Processing – with Java, OpenCV and GSVideo. Processing can also speak to Arduino microcontrollers, allowing data to be transferred between the two over serial.

  • OpenCV facial recognition library (after testing Processing’s own face detection library it became clear that it was not advanced enough, as it became confused when more than one face was present or when the face left the field of view. The OpenCV library handles these tasks much more accurately, with the ability to personalise its sensitivity and functionality).

  • GSVideo plug-in library – an alternative video library for Processing that has proven to be more stable and able to handle a higher bitrate.

  • Arduino – a microcontroller board that can control motors and read sensors. This will be used to control the position of the screen, allowing spectators to be tracked.

  • Servo or DC motors – further tests need to be carried out as to which is more effective. Servo motors can be told to move to specific positions, whereas DC motors can simply be run forwards and backwards.

  • Gear/cog system – to allow the screen to turn smoothly

  • LCD Screen/ Monitor with integrated Webcam.

  • Bespoke Perspex Frame – to hold motor and gear assembly. Hinged to allow screen to pivot.

  • Mac Mini (or equivalent hardware to run the system on) – small, with high processing power and plenty of RAM for the Processing sketch and video library.

Install Diagram


Exhibition

Thanks

A big thanks to Flora Davoll, Tom Schofield,  Adrian Park and all the CL Techs (even Rich).

Final Code.

New SetUp Sketch

After discussion with the group we’ve moved a few things around so as to allow a good flow of people around the space, as well as to accommodate all exhibitors.

As can be seen, I am now going to have the back-left space as you enter Space 4. The screen is going to be 2.4m long by 0.69m wide, to create a panorama and keep roughly to my 32:9 aspect ratio. I hope that this will mean the viewer is engulfed in a full visual spectrum, allowing them to explore the space successfully.

Audio Code for piece

/** Using a sketch by Damien Di Fede as the basis for affecting the audio volume as the viewer approaches the screen.
Working with a frequency of 50 Hz and mapping the bandwidth to the viewer's Z co-ordinate. This is to allow for a multi-sensory experience: not only will the sketch be aware of the user's location and react to them, the sound of waves will intensify as they near the screen, further bringing attention to themselves and their role within the investment and narrative of the artwork.

* Band Pass Filter
* by Damien Di Fede.
*
* This sketch demonstrates how to use the BandPass effect.
* Move the mouse left and right to change the frequency of the pass band.
* Move the mouse up and down to change the band width of the pass band.
*/

import ddf.minim.*;
import ddf.minim.effects.*;

Minim minim;
AudioPlayer groove;
BandPass bpf;

void setup()
{
size(512, 200, P2D);

minim = new Minim(this);

groove = minim.loadFile("STE-020-MP3 File.mp3");
groove.loop();
// make a band pass filter with a center frequency of 440 Hz and a bandwidth of 20 Hz
// the third argument is the sample rate of the audio that will be filtered
// it is required to correctly compute values used by the filter
bpf = new BandPass(440, 20, groove.sampleRate());
groove.addEffect(bpf);
}

void draw()
{
background(0);
stroke(255);
// draw the waveforms
// the values returned by left.get() and right.get() will be between -1 and 1,
// so we need to scale them up to see the waveform
for(int i = 0; i < groove.right.size()-1; i++)
{
float x1 = map(i, 0, groove.bufferSize(), 0, width);
float x2 = map(i+1, 0, groove.bufferSize(), 0, width);
line(x1, height/4 - groove.left.get(i)*50, x2, height/4 - groove.left.get(i+1)*50);
line(x1, 3*height/4 - groove.right.get(i)*50, x2, 3*height/4 - groove.right.get(i+1)*50);
}
// draw a rectangle to represent the pass band
noStroke();
fill(255, 0, 0, 60);
rect(mouseX - bpf.getBandWidth()/20, 0, bpf.getBandWidth()/10, height);
}

void mouseMoved()
{
// the pass band frequency is fixed at 50 Hz; only the band width follows the mouse (later the viewer's Z position)
float passBand = 50;
bpf.setFreq(passBand);
float bandWidth = map(mouseY, 0, height, 50, 500);
bpf.setBandWidth(bandWidth);
// prints the new values of the coefficients in the console
bpf.printCoeff();
}

void stop()
{
// always close Minim audio classes when you finish with them
groove.close();
// always stop Minim before exiting
minim.stop();

super.stop();
}
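To tie this to the tracking data rather than the mouse, the band width can be driven by the viewer's Z value instead of mouseY. A minimal sketch of that substitution, assuming the zmap value (0 to 100) produced by the skeleton-tracking code further down the page, would be to replace mouseMoved() with something like:

// drive the band width from the viewer's mapped Z value rather than the mouse
// zmap is assumed to be the 0-100 value produced by the tracking sketch
void updateFilter(float zmap) {
  bpf.setFreq(50);                               // pass band frequency stays fixed at 50 Hz
  float bandWidth = map(zmap, 100, 0, 50, 500);  // the band widens as the viewer nears the screen
  bpf.setBandWidth(bandWidth);
}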

Panorama

After experimentation and playing about with video transitions and blends, it has (as previously mentioned) become clear that I am unable to proceed as planned. I have thought about how to include the same idea as before, having separate characters appear as the user moves around. After thought and sketching I have decided to proceed with creating a panorama: a panorama (formed from Greek πᾶν “all” + ὅραμα “sight”) is any wide-angle view or representation of a physical space, whether in painting, drawing, photography, film/video, or a three-dimensional model.

 

The reason for this is that I will be able to create a vista, a 180-degree space that we cannot achieve without the use of technology. This will also invite the viewer to move so that they can view the whole image, this movement in turn affecting the video that they are witnessing. Thus, they will not be able to take in the full scene unless they stop and take notice; then, if they move again, the scene will have changed and they must reflect again on the new scene.

 

 

 

Kinect Music video


 

Is that 2 Kinects I see?

Bash need

Having some issues with OSCeleton quitting in the terminal; it’s throwing segmentation errors. It seems I may need to look into a bash script so I can restart the command automatically.

!!        Run the last command again

tint()

So I’ve been having some issues with my video piece: either memory errors occur, or, when using GSMovie objects and converting them to PImages, they decide to just draw black pixels after working for a short time. So I’ve come across the tint() function. I did a quick test and it looks like it’s worth investing in a quick sketch using video footage.
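A quick sketch along those lines, blending two clips with tint(), driven here by the mouse as a stand-in for the tracked viewer position; it assumes processing.video's Movie class and placeholder file names.

import processing.video.*;

Movie movieA;
Movie movieB;

void setup() {
  size(1024, 576);
  movieA = new Movie(this, "sceneA.mov");   // placeholder file names
  movieB = new Movie(this, "sceneB.mov");
  movieA.loop();
  movieB.loop();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  float alpha = map(mouseX, 0, width, 0, 255);  // stand-in for the viewer's tracked position
  tint(255, 255);                               // base layer drawn fully opaque
  image(movieA, 0, 0);
  tint(255, alpha);                             // second layer faded over the top
  image(movieB, 0, 0);
}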

If this does not work, however, I have been playing about with the idea of using several versions of the sketch running on different machines, therefore allowing the desired results (hopefully). That, or having separate characters reacting to user presence on their own screens. I will be thinking this through over the next week. Logistics are equipment, space etc.

Sound recordings have been made now, and it’s just a case of getting the sketch working correctly, as well as processing and grading the film footage.

Current code Examples

Code Examples

OSCeleton Part of Sketch Code

//import libraries so communication can happen with Kinect as well as be able to handle

//movie files

import processing.opengl.*;

import oscP5.*;

import netP5.*;

PFont f;

OscP5 oscP5;

int userCount=6; //kinect max 7

float X;

float Y;

float Z;

float xmap;

float ymap;

float zmap;

float maxZ = 100.0;

//make an array of users

User [] myUsers = new User[userCount];//make an array of User objects, indices 0 to 5

int closestUser=1000;

void setup() {

size(screen.width, screen.height);

f= loadFont("Serif-48.vlw");

//load font for de-bugging

oscP5 = new OscP5(this, "127.0.0.1", 7110);

//check each user in turn, first user 0 then user 1 etc

for (int i=0;i<myUsers.length;i++) {

myUsers[i]= new User(0, 0, 0);

}

frameRate(30);

}

void oscEvent(OscMessage msg) {

//println(“msg results “+msg.checkAddrPattern(“/user/”+str(1))  );

//closestZ = maxZ;//resetting Z

for ( int i=1; i <= userCount; i++ ) {

if (msg.checkAddrPattern("/user/"+str(i))) {

//OSCeleton numbers users from 1, so the loop runs from 1 to userCount

//need to create variable of maximum users

X = (msg.get(0).floatValue());

Y= (msg.get(1).floatValue());

Z = (msg.get(2).floatValue());

xmap = map(X, 0, 1, 0, width);

ymap = map(Y, 0, 1, 0, height);

zmap = map(Z, 0, 3.5, 0, maxZ);

myUsers[i-1].update(xmap, ymap, zmap);

//beneath not needed but kept in case

/* if ( zmap > 0.0  && zmap < closestZ ) {//using greater than zero as when kinect loses user it prints 0.0

// if( i != closestUser ) { println(“Found new user ” + i ); }

closestUser = i;//will update with loop

closestZ = zmap;//closestUser z mapped to 100 for ease

myUsers[i-1].update(xmap, ymap, zmap);

*/

}

}

}

void draw() {

background(0);

textFont(f);

returnClosestUser();

int yPos = 50;

for (int i=0;i<myUsers.length;i++) {

if (i == closestUser) {//highlight the closest user (closestUser is a zero-based array index)

fill (125);

}

else {

fill(255);

}

//print zvalues to screen

//text(i + ” = ” + myUsers[i].z, 10, yPos);

text(i+" "+myUsers[i].isAlive,10,yPos);

yPos += 50;

myUsers[i].drawUserEllipse();

}

yPos += 50;

text("closest User is "+closestUser, 10, yPos);

}

//a function which checks which user is closest and returns the index of that user eg.

//which number in the array of users they are

void returnClosestUser() {

//int indexOfClosestUser=100;

//go through the list of users

//int localClosestUser=100;

float closestZ=1000;

for (int i=0;i<myUsers.length;i++) {

//check their zpos against the current closestZ

//if it’s less than this but isn’t 0 then…

if (myUsers[i].z<closestZ&&myUsers[i].z!=0) {

println("got closest Users which is now " + i);

//make this the new closest z

closestZ=myUsers[i].z;

//and this is the index of the new closest user

closestUser=i;

}

}

}

User Class Code

class User {

float x, y, z;

//previous values of x,y,z – we will use these to see if this person has moved since last

//frame or not

float px, py, pz;

int timer;

boolean isClosestUser;

boolean isAlive;

User (float lx, float ly, float lz) {

px=x=lx;

py=y=ly;

pz=z=lz;

isAlive=false;

timer=0;

}

void update(float lx, float ly, float lz) {

x=lx;

y=ly;

z=lz;

//if nothing has changed since the last frame

if (px==x&&py==y&&pz==z) {

timer++;

}

//if we are moving we are also alive

else{

timer=0;

isAlive=true;

}

//how many frames someone needs to stay still for before we consider them dead

int timeOutThreshold=60;

if (timer>=timeOutThreshold) {

isAlive=false;

timer=0;

}

px=x;

py=y;

pz=z;

}

void drawUserEllipse() {

ellipse(x, y, z, z);

//println(“values of user data are : “+x+” “+y);

}

}

Pimage Code for Video Transitions

//currently getting an "invalid memory error / set a breakpoint in malloc_error_break to debug" message

import processing.video.*;

Movie movieBottom;

Movie movieMiddle;

Movie movieTop;

PImage TopAlpha ;

PImage MiddleAlpha;

int transparencyTop;

int transparencyMiddle;

float playheadBottom;

float playheadTop;

float playheadMiddle;

void setup() {

size(1024, 576, P3D);

TopAlpha = createImage(width, height, ARGB);

MiddleAlpha = createImage(width, height, ARGB);

movieTop = new Movie(this, "Close1-MAgicTest PhotoGood.mov");

movieMiddle = new Movie(this, "trimmedTest-MAgicTest PhotoGood.mov");

movieBottom = new Movie(this, "DMTimeLapse2-MAgicTest PhotoGood-MAgicTest PhotoGood.mov");

movieBottom.play();

movieTop.play();

movieMiddle.play();

}

//read new frames as they become available (without this the movies never update)

void movieEvent(Movie m) {

m.read();

}

void draw() {

println (mouseY);

background(0);

if(mouseY<101){

image (movieTop, 0, 0);

playheadTop = map(mouseX, 0, width, 0, movieTop.duration());

movieTop.jump(playheadTop);

println ("playingTop");

}

else if ((mouseY<170)&&(mouseY>100)) {//next zone <(height/3) && <(height/3)*2

println("blending");

transparencyTop = (int) map(mouseY, 170, 100, 0.0, 255.0);

movieTop.loadPixels();

TopAlpha.loadPixels();

int x=0;

int y=0;

for (int i = 0; i < TopAlpha.pixels.length; i++) {

color myColor = movieTop.get(x, y);

TopAlpha.pixels[i]= color(myColor, transparencyTop);

x++;

if (x>=width) {

x=0;

y++;

}

}

TopAlpha.updatePixels();//update TopAlpha's own pixel array, not the sketch window's

image (movieMiddle, 0, 0);

image(TopAlpha, 0, 0);

playheadMiddle = map(mouseX, 0, width, 0, movieMiddle.duration());

playheadTop = map(mouseX, 0, width, 0, movieTop.duration());

movieMiddle.jump(playheadMiddle);

movieTop.jump(playheadTop);

}

else if ((mouseY>150)&& (mouseY<251)){

image (movieMiddle, 0, 0);

playheadMiddle = map(mouseX, 0, width, 0, movieMiddle.duration());

movieMiddle.jump(playheadMiddle);

println("playingMiddle");

}

else if ((mouseY > 250 ) && (mouseY<385)) {

println("blending2");

transparencyMiddle = (int) map(mouseY, 385, 250, 0.0, 255.0);

movieMiddle.loadPixels();

MiddleAlpha.loadPixels();

int x=0;

int y=0;

for (int i = 0; i < MiddleAlpha.pixels.length; i++) {

color myColor = movieMiddle.get(x, y);

MiddleAlpha.pixels[i]= color(myColor, transparencyMiddle);

x++;

if (x>=width) {

x=0;

y++;

}

}

MiddleAlpha.updatePixels();//update MiddleAlpha's own pixel array, not the sketch window's

image (movieBottom, 0, 0);

image(MiddleAlpha, 0, 0);

playheadBottom = map(mouseX, 0, width, 0, movieBottom.duration());

playheadMiddle = map(mouseX, 0, width, 0, movieMiddle.duration());

movieBottom.jump(playheadBottom);

movieMiddle.jump(playheadMiddle);

}

else if (mouseY >384) {

println ("playing bottom");

image (movieBottom, 0, 0);

playheadBottom = map(mouseX, 0, width, 0, movieBottom.duration());

movieBottom.jump(playheadBottom);

}

}

/*Still working on this to develop states according to z, so that the system knows what to do and to save it getting confused or leaking memory. Also still developing the if/else and true/false statements to allow the system to access the video library in the correct order.*/
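A rough sketch of what those z-based states might look like, assuming the zmap value (0 to 100) produced by the tracking code above; the thresholds and state names are placeholders.

//possible viewer states derived from the mapped Z value (thresholds are placeholders)
final int FAR = 0, MID = 1, NEAR = 2;
int viewerState = FAR;

int stateForZ(float zmap) {
  if (zmap == 0) return FAR;    // 0 means the Kinect has lost the user
  if (zmap > 66) return FAR;
  if (zmap > 33) return MID;
  return NEAR;
}

//called once per frame with the closest user's zmap;
//only act when the state actually changes, so the video library is accessed in order
void updateState(float zmap) {
  int newState = stateForZ(zmap);
  if (newState != viewerState) {
    viewerState = newState;
    //e.g. jump to or start the clip for this state here
  }
}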