
Mock-up code

Memo's beginnings of a Processing/Java port of a webcam-to-OSC/MIDI app.

This version doesn't transmit MIDI, but it does transmit OSC, and you can use OSCulator to forward the OSC messages as MIDI. The advantage of doing it this way is that you can have another computer on wifi receive the OSC messages and map them to MIDI (and send to Logic), keeping the CPU load on both machines lighter… (or just keep oscTargetIP as 127.0.0.1 to send the OSC to the same machine and have everything running on one machine).

I can use this code as a basis for detecting hands and sending MIDI/OSC to Max and Ableton to control a filter. From there I can develop the code so that hand movement also controls tempo and head detection controls left and right panning.

For some reason the applet doesn't work when published on an HTML page, probably something to do with the video input.
/***********************************************************************
-----------------------------------

Copyright (c) 2008, Memo Akten, www.memo.tv

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

***********************************************************************/

import oscP5.*;
import netP5.*;
import processing.video.*;
import controlP5.*;

/**************************** CONSTANTS & PARAMS **********************************/
int oscTargetPort = 8000;
String oscTargetIP = "127.0.0.1";

int numGridX = 15;
int numGridY = 4;

float gridSpacing = 0.1; // spacing of squares

float vidMult = 0.5; // ratio of camera resolution to output res

int fps = 30;

float triggerThreshold = 0.05;
float velocityMult = 5;

/**************************** VARS ***********************************/
OscP5 oscP5;
NetAddress address;
ControlP5 controlP5;

int numPixels;
int[] prevGrey;
Capture video;

float totalMovement;

int gridSizeX;
int gridSizeY;

float gridMult = 1 / vidMult;

float[][] gridInfo;

float maxTimeDiff = 5; // trigger once every 5 seconds

PImage img;

/**************************** SETUP ***********************************/

void setup() {
  size(640, 480);
  video = new Capture(this, (int) (width * vidMult), (int) (height * vidMult), fps);
  numPixels = video.width * video.height;
  gridSizeX = video.width / numGridX;
  gridSizeY = video.height / numGridY;

  prevGrey = new int[numPixels];
  gridInfo = new float[numGridY][numGridX];

  img = createImage(video.width, video.height, RGB);

  oscP5 = new OscP5(this, oscTargetPort);
  address = new NetAddress(oscTargetIP, oscTargetPort);

  initApp();

  frameRate(fps);
}

void initApp() {
  int sliderWidth = (int) (width * 0.4);
  controlP5 = new ControlP5(this);
  controlP5.addSlider("triggerThreshold", 0, 1, triggerThreshold, 20, 20, sliderWidth, 15);
  controlP5.addSlider("velocityMult", 1, 20, velocityMult, 20, 40, sliderWidth, 15);
}

/**************************** UPDATE ***********************************/
void draw() {
  if (video.available()) {
    initGridInfo();

    video.read(); // Read the new frame from the camera
    video.loadPixels();
    img.loadPixels();

    totalMovement = 0;

    image(video, 0, 0, width, height);
    for (int i = 0; i < numPixels; i++) {
      // map the pixel index to a grid cell (this mapping is reconstructed; the original lines were lost when the code was posted)
      int gridX = (i % video.width) / gridSizeX;
      int gridY = (i / video.width) / gridSizeY;
      if (gridX >= numGridX) gridX = numGridX - 1;
      if (gridY >= numGridY) gridY = numGridY - 1;
      int gridNo = gridY * numGridX + gridX;

      color curColor = video.pixels[i];
      int curR = (curColor >> 16) & 0xFF;
      int curG = (curColor >> 8) & 0xFF;
      int curB = curColor & 0xFF;
      // average RGB components (there are better ways of calculating intensity from RGB, but this will suffice for these purposes)
      int curGrey = (curR + curG + curB) / 3;
      int diff = abs(curGrey - prevGrey[i]);
      //img.pixels[i] = 0xff000000 | (diff << 16) | (diff << 8) | diff;
      gridInfo[gridY][gridX] += diff;
      totalMovement += diff;
      prevGrey[i] = curGrey;
    }

    drawGrid();

    totalMovement /= numPixels * 256;
    OscMessage oscMessage = new OscMessage("/cam/movement");
    oscMessage.add(totalMovement);
    //if(totalMovement > triggerThreshold * 100)
    oscP5.send(oscMessage, address);
  }
}

void drawGrid() {
  noStroke();
  for (int y = 0; y < numGridY; y++) {
    for (int x = 0; x < numGridX; x++) {
      // normalise the accumulated per-cell difference to roughly 0..1
      // (the loop bounds and this divisor are reconstructed; the original lines were lost when the code was posted)
      float gridMovement = gridInfo[y][x] / (gridSizeX * gridSizeY * 256.0f);
      if (gridMovement > triggerThreshold) {
        fill(255, gridMovement * 250 + 50);
        OscMessage oscMessage = new OscMessage("/cam/note");
        oscMessage.add(true);
        oscMessage.add((y * numGridX + x) / (float)(numGridX * numGridY));
        oscMessage.add(gridMovement * velocityMult);
        oscP5.send(oscMessage, address);
      }
      else {
        fill(255, 20);
        OscMessage oscMessage = new OscMessage("/cam/note");
        oscMessage.add(false);
        oscP5.send(oscMessage, address);
      }

      rect((x * gridSizeX + gridSizeX * gridSpacing/2) * gridMult, (y * gridSizeY + gridSizeY * gridSpacing/2) * gridMult,
           gridSizeX * (1 - gridSpacing) * gridMult, gridSizeY * (1 - gridSpacing) * gridMult);
    }
  }
}

void initGridInfo() {
  // reset the per-cell movement accumulators (loop body reconstructed; the original lines were lost when the code was posted)
  for (int y = 0; y < numGridY; y++)
    for (int x = 0; x < numGridX; x++)
      gridInfo[y][x] = 0;
}
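
For reference, a minimal sketch of the receiving end described above might look something like this (run it on another machine on the wifi, or on the same machine if oscTargetIP stays at 127.0.0.1). This is only a sketch, assuming oscP5 is installed on the receiving side; in practice the incoming values would be forwarded to MIDI by OSCulator or mapped in Max rather than just printed:

import oscP5.*;
import netP5.*;

OscP5 oscP5;

void setup() {
  size(200, 200);
  // listen on the same port the sketch above sends to (oscTargetPort)
  oscP5 = new OscP5(this, 8000);
}

void draw() {
  background(0);
}

// oscP5 calls this for every incoming OSC message
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/cam/movement")) {
    println("movement: " + msg.get(0).floatValue());
  }
  else if (msg.checkAddrPattern("/cam/note")) {
    // arguments are the on/off flag, the note position (0..1) and the velocity
    println("note message, typetag: " + msg.typetag());
  }
}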

Feasible architecture and data flow

[image: IMG00692-20110124-1711]

Speaking with Davy Smith (Queen Mary, London), a former fellow student at Culture Lab, we discussed my project and the possible options. The music engine side of things was pretty much decided: Ableton Live at the end of the cycle, receiving signals from Max/MSP. What I still had to decide was whether to use a Kinect, or whether a webcam would suffice for this relatively simple model.

If I were to use the Kinect I would have to familiarise myself with the RGBA and 3D (wireframe-like) data it sends, and may also have to use OpenKinect and/or OpenNI, as well as openFrameworks, to communicate with Max. However, it may be simpler and more feasible to use a webcam and read straight into OF to communicate with Max and Live using MIDI data or OSC.

To control tempo using hand movement I will look into optical flow combined with OpenCV (frame differencing, blob detection) to detect the hands and create a metronome, which controls the tempo in Ableton.
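
As a very rough starting point, the frame-differencing half of this could look something like the sketch below (before adding any optical flow or blob detection). This is only a sketch of the idea, assuming the same Processing video and oscP5 libraries used in the mock-up code above; the /tempo address, the port numbers and the thresholds are placeholders that would need tuning, and a real version would track the hands specifically rather than whole-frame movement:

import processing.video.*;
import oscP5.*;
import netP5.*;

Capture video;
OscP5 oscP5;
NetAddress address;

int[] prevGrey;
float movement, prevMovement;
int lastBeatMillis;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height, 30);
  prevGrey = new int[width * height];
  oscP5 = new OscP5(this, 9000);
  address = new NetAddress("127.0.0.1", 8000); // placeholder target
}

void draw() {
  if (!video.available()) return;
  video.read();
  video.loadPixels();
  image(video, 0, 0);

  // total frame difference, normalised to roughly 0..1
  float sum = 0;
  for (int i = 0; i < video.pixels.length; i++) {
    color c = video.pixels[i];
    int grey = ((c >> 16 & 0xFF) + (c >> 8 & 0xFF) + (c & 0xFF)) / 3;
    sum += abs(grey - prevGrey[i]);
    prevGrey[i] = grey;
  }
  prevMovement = movement;
  movement = sum / (video.pixels.length * 255.0f);

  // treat a rising edge through the threshold as one "tap" of the metronome
  float threshold = 0.05; // placeholder value, needs tuning
  if (movement > threshold && prevMovement <= threshold) {
    int now = millis();
    int interval = now - lastBeatMillis;
    lastBeatMillis = now;
    if (interval > 250 && interval < 2000) { // ignore implausible gaps (roughly 240-30 bpm)
      float bpm = 60000.0f / interval;
      OscMessage msg = new OscMessage("/tempo"); // placeholder address
      msg.add(bpm);
      oscP5.send(msg, address);
    }
  }
}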

I have included a couple of extra ideas which I will build into the sketch: panning, controlled by head movement through head tracking; and high-pass and low-pass filtering, controlled by the x and y position (height) of the hands, by dividing the pixel array from the webcam into rectangles and tracking the movement within each one.
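
To give an idea of that second part, here is one possible extension of the webcam grid sketch in the mock-up code post above (it is not part of Memo's original code): once gridInfo has been filled in draw(), the centre of movement across the grid could be sent out as pan and filter values. The /cam/pan and /cam/filter addresses are placeholders for whatever Max or OSCulator ends up expecting, and a proper version would separate head movement from hand movement rather than using a single centroid:

// possible addition to the mock-up sketch; call this from draw() after the pixel loop
void sendPanAndFilter() {
  float sum = 0, sumX = 0, sumY = 0;
  for (int y = 0; y < numGridY; y++) {
    for (int x = 0; x < numGridX; x++) {
      float m = gridInfo[y][x];
      sum  += m;
      sumX += m * x;
      sumY += m * y;
    }
  }
  if (sum <= 0) return; // no movement this frame

  float pan    = (sumX / sum) / (numGridX - 1);     // 0 = hard left, 1 = hard right
  float cutoff = 1 - (sumY / sum) / (numGridY - 1); // higher movement in frame = brighter filter

  OscMessage panMsg = new OscMessage("/cam/pan");   // placeholder address
  panMsg.add(pan);
  oscP5.send(panMsg, address);

  OscMessage filterMsg = new OscMessage("/cam/filter"); // placeholder address
  filterMsg.add(cutoff);
  oscP5.send(filterMsg, address);
}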

Below is a basic system diagram sketch of the architecture and data flow:

Current practice

[image: IMG00662-20110120-1859]

I previously met with Iain Spears of the University of Teesside, where I discovered the controller route was a redundant path for me (as I posted earlier here).

So, I met with Steve Gibson from Northumbria University, who is also working on motion-controlled music: Virtual DJ. He was using FlashTrack and an array of webcams set up around a perimeter to create an 'active' area/zone. At different locations in the zone, different layers of the track are introduced: FlashTrack recognises position from the LED sensors you move around the zone and sends MIDI to Ableton, triggering instrumental layers. This was relevant to my practice; however, the system doesn't offer me much in respect of specific gestures, especially using the body's movement differencing to control tempo, which it has no way to achieve. The system and data flow did, though, open up possibilities when considering the architecture of my music engine. I may only have to consider replacing FlashTrack with something else, i.e. Kinect, LILA, or a webcam, but I could still utilise the MIDI > Live operation.

I then met with Atau Tanaka to discuss my current standing. Atau is a relevant practitioner, notably for BioMuse, Sensorband, DSCP, and S_S_S. Atau explained that my end goal was over-ambitious at this level of study and that people spend full PhDs on similar kinds of research. He advised that I would need to scale the project down in order to realise something solid and avoid the disappointment of underachieving what I had hoped for within the timescale. We discussed the architecture and data flow of the system, and considered the music engine in parallel with trying to determine the input. Taking on board what was said made me think more realistically about the feasibility of the project, the programming and the timescale. A starting focal point was necessary as a basis for commencing development. I decided that tempo had taken precedence throughout the development of my concept, mainly because tempo will be the dominant and primary attribute the system looks at to determine the style of the musical output. So I thought it would be most practical to begin here: looking at optical flow combined with OpenCV (frame differencing, blob detection) to try to use hand movement as a metronome for the beat.

Below is a rough sketch of my initial and basic architectural plan.

Wizard of Oz experiment demo

I conducted and produced a Wizard of Oz experiment video that demonstrates the concept of my vision.

In order to achieve this I had to produce a track to demonstrate how the interface would operate. Below are the production sheets that I used to plan out the piece.

Production sheet 1
Production sheet 2

I selected tracks that featured prominent effects, specific instruments or particular characteristics that stood out, so that I could best demonstrate certain features of the proposed system. Below is the video I produced using a video camera and Final Cut to piece together the audio and add subtitling to explain what is happening.

Gestural Live Music Production from Aaron Smiles on Vimeo.

Motion Capture: Gestural dance study

[image: IMG00685-20110124-1548]

I organised a motion capture session at Culture Lab to capture, record and analyse dance movement and gesture with musical genre. I enlisted the help of four dancers from varying backgrounds (pro and non-pro) to cover four different genres.

Dave Green and I set up the motion capture system with 8 LED sensors and decided we would need 15 markers to get decent results. These covered all the limb joints, chest, back, head, shoulders, hands and feet.

I planned out the day's events and arranged for the dancers to arrive at intervals throughout the day. The schedule was as follows:
0900: Set up and calibration
1100: Dancer 1
1230: Dancer 2
1400: Dancer 3
1530: Dancer 4

I asked each dancer to bring along music they liked dancing to, and had them dance to three tracks to get the best results: they chose two tracks in their genre so that they were comfortable, and I selected a third from the genre to get a capture of them out of their comfort zone. I also asked them to respond to the music naturally and without routine, so that I got a more natural and organic response. This was also the reasoning behind using both professional and non-professional dancers in the study, because the two can be very different and my target user may not be a professional dancer.

Here is the list of genres and tracks selected for the dancers to perform to:
Dancer 1 (Saskia): Techno/house/trance
* Track 1: Pryda – Animal
* Track 2: Claude Von Stroke – Beat That Bird
* Track 3: Rachel Barton – Goose Step (Justin Martin remix)

Dancer 2 (Katherine): Dubstep/DnB
* Track 1: Magnetic Man – I Need Air
* Track 2: London Electricity – The Strangest Secret in the World
* Track 3: Noisia – Square Feet

Dancer 3 (Kelly): Commercial/Pop
* Track 1: Cheryl Cole – Fight For This Love
* Track 2: Willow – Whip My Hair
* Track 3: Rhianna – Only Girl
* Track 4: [Ballet style] David Guetta – When Love Takes Over

Dancer 4 (Rebecca): HipHop/Rap/Street
* Track 1: Nelly Furtado – Promiscuous Girl
* Track 2: Eminem – Crack a Bottle
* Track 3: T.I. – Top Back
* Track 4: Plan B – Love Goes Down (Doctor P remix) [hardcore/dubstep]

I video-recorded the session, which I synchronised with the motion capture in Final Cut after labelling and post-processing the data using Vicon. Below is an example of the capture and video synced and split-screened.

Kinect bricolage

Some of the Kinect hacking going on at present..

This one could prove very useful, as I could perhaps take the approach of teaching the Kinect to recognise gestures this way.

This one, though, would be much more responsive and effective for what I aim to realise. I could analyse the data from the motion capture studies of performance artists and dancers and use it so that OpenNI recognises similar movements and gestures, then triggers sounds and music similar to what the dancers from the study were originally dancing to.

Can you make a wall sigh by stroking it?

House as body, ship, vessel, organ, self.

sketching and testing

I was thinking about marrying the theoretical and the practical in my final project.

So

Take three pieces of work and approach with a heavy amount of objectivity

A performance

An installation

A sound interaction

Treat each work in a different way with different levels of participation.

So the levels could be:

1. stand alone work

2. work which is affected/activated by the presence of an audience

3. work which is changed by the interaction of an audience

4. work which is co-authored

Mres – Jane Dudman 2010-09-12 20:48:35


Sunday 12th September 2010

Attempting to post regularly from now on to remind myself of stuff.  OK so what do I put, this does not come naturally.

notes and ramblings

THE PROJECT
Can a working practice be developed which engages the public?

Where are the works which engage, that you have sympathy with, that don't ask the audience/viewers to jump through hoops, and that use digital media and performance?

If successful it becomes a producer of, and an arena for, social and aesthetic experiences, temporarily interrupting singularities through the presentation of participatory art that actively generates a discursive public space.

‘perception requires involvement’
Antoni Muntadas, 1999

Think of a project to encapsulate your research
G.S. Evans (editor of The Cafe Irreal, an online literature magazine) –
Art Alienated, an essay on the decline of participatory art:

…because of the self-perpetuating nature of our artistic alienation, a self-perpetuation that goes beyond the various structural aspects of capitalist society into the very heart of its ideology. An ideology, held in common by millions of people, that honestly believes that most, or even all, of our needs can be met by the purchase of commodities. Whether it be a better personality, happier life or artistic fulfillment our commodity culture believes that they can all be achieved through the purchase of a commodity, either directly as a manufactured product or indirectly by way of professional services (as provided by psychotherapists, teachers, etc.).

Of course, such a belief is not without some legitimacy, certain important needs can indeed be satisfied through the purchase of commodities. The quality and quantity of one's food, shelter and basic transportation can certainly make a critical difference in one's well-being. Further, various forms of mass-production and automation can increase the quantity and quality of many of these basic goods (both agricultural and industrial), while greatly reducing the labor time necessary to make them. But when mass-production of commodities starts to replace personal forms of expression, it becomes quite a different matter. We cannot, for example, 'manufacture' interpersonal relationships; we cannot substitute the watching of professional sports for exercise; and we cannot substitute mass-produced art-commodities for personal artistic expression.

John Dewey, in Art as Experience, talks of how:
The sources of art in human experience will be learned by him who sees how the tense grace of the ball-player infects the onlooking crowd; who notes the delight of the housewife in tending her plants, and the intent interest of her goodman in tending the patch of green in front of the house; the zest of the spectator in poking the wood burning on the hearth and in watching the darting flames and crumbling coals. These people, if questioned as to the reason for their actions, would doubtless return reasonable answers. The man who poked the sticks of burning wood would say he did it to make the fire burn better; but he is none the less fascinated by the colorful drama of change enacted before his eyes and imaginatively partakes in it. He does not remain a cold spectator. What Coleridge said of the reader of poetry is true in its way of all who are happily absorbed in their activities of mind and body: “The reader should be carried forward, not merely or chiefly by the mechanical impulse of curiosity, not by a restless desire to arrive at the final solution, but by the pleasurable activity of the journey itself.”

As in Kaprow – attention to life – living attentively

Even a crude experience, if authentically an experience, is more fit to give a clue to the intrinsic nature of esthetic experience than is an object already set apart from any other mode of experience. Following this clue we can discover how the work of art develops and accentuates what is characteristically valuable in things of everyday enjoyment. The art product will then be seen to issue from the latter, when the full meaning of ordinary experience is expressed, as dyes come out of coal tar products when they receive special treatment.