The “More of the Same” PD Patch

Here is my More of the Same PD patch for the exhibit tomorrow. It’s a bit tongue-in-cheek.

You can download the code by clicking here.

Just open the MoreOfTheSame.pd file to try it out. The SoulCatcher.pd file is simply the code that is loaded by the parent file.

To view the theoretical ideas that sparked the creation of this patch, please click here.

Mostly, I hope this sparks some dialogue about art, progress, and art’s purpose.

When the button is clicked, the patch tracks the pitch heard on the microphone for two seconds and outputs the pitches as MIDI. These pitches are then converted into frequencies, stored in a table, and played back to the spectator as a computer-generated version of what they just spoke or sang.
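
As a point of reference, the pitch-to-frequency step is just the standard equal-temperament formula (A4 = MIDI note 69 = 440 Hz). Here is a minimal sketch of that conversion written in Processing rather than Pd; the helper function is mine, not something taken from the patch:

// Minimal sketch of the MIDI-to-frequency conversion described above.
// Hypothetical helper, not part of the Pd patch.
float midiToFreq(float midiNote) {
  // each semitone is a factor of 2^(1/12) away from A4 (note 69, 440 Hz)
  return 440.0 * pow(2, (midiNote - 69) / 12.0);
}

void setup() {
  println(midiToFreq(60));  // middle C -> roughly 261.63 Hz
}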

If I were going to develop this more, I would ideally convert it into an app, although many apps like this one already exist. However, the idea for this app stems from mimicry, so the more the merrier, and the more banal the better.

In fact, I think the best life for this piece would be in a gallery space with no technology involved whatsoever. It would simply be a canvas displaying this exact picture:

More of the Same

Here is the code of More of the Same which can also be downloaded by clicking the link provided above:

More of the Same Code

Mimicry Theory

In looking more into anthropomorphism and personification throughout history, I became infatuated with the idea of mimicry.

In particular, the patch I am creating basically mimics a human voice. But there are also countless factories building machines that aim to mimic human mechanisms… ultimately replacing jobs that humans used to do.

And all newer digital technologies are more advanced versions of older, mechanistic technologies… which raises the question of whether we are actually progressing as a society or just ruminating in our own thoughts, mimicking the ideas of our grandfathers and slapping a corporate symbol on them.

In some ways, I wonder if our obsession with technological advancements in HCI and robotics stems from an egotistic inclination to make the world around us serve as a reflection of ourselves.

Anyhow, enough of that jibber jabber. I’ll get to the idea of soul catching… which is where my sub-patch gets its name.

So, this idea of mimicry and repetition led me to the idea of technology being able to capture a person’s soul by personifying them.

If you look through texts about new media (mostly written in the 1960s), this idea of technology being able to capture a person’s soul runs rampant.

A book that I find most interesting is The Invention of Morel by Adolfo Bioy Casares. If you have a moment, it’s a good, short read.

This book is a great text in many ways, but I’ll just highlight the main theme of the story. Basically, a man becomes obsessed with a woman. He is on an island that seems to be deserted, and he comes to observe this woman every day. He then sees the woman with her husband. He becomes so obsessed that he films the couple, crops out the husband, overlays a video of himself, and watches this video repeatedly so that he can live with her eternally.

However, the act of mediating his body onto film results in its slow decay, leading to his death.

The idea that technology can capture one’s soul and kill you is strongly associated with photography, partly because it was used early on to take pictures of the dead in order to immortalize them and keep them in the family home.

This idea was associated with voice recordings as well. Antonin Artaud had very interesting ideas about sound recordings and their ability to immortalize the body, or even act as a means of invading another person’s body. He explored this by recording himself inflicting self-harm, making guttural noises, and repeating incantations. He is a very important theatrical figure who created the Theatre of Cruelty; however, I find his recordings and ideas on technology most interesting. It’s the idea that technologies create multiple versions of our body… and that these versions are more important than our actual selves. A terrifying thought, but an interesting one.

I would post one of his recordings here, but the UbuWeb servers (ubu.com) are currently down! GASP!!! So check it out if you have time. I suppose what this is getting at is the ability of technology to embody us… and vice versa.

However, I think mimicry has a very interesting place in artistic practice today.

A common form that mimicry takes is comedians doing impressions of people, which audiences tend to find very amusing. But there is constant debate in the art world about mimicry versus appropriation in artistic practice. For example, some of the artworks criticized are those of street artists who copy a painting, like the Mona Lisa, and add a wig, or a moustache, etc.

This serves as fuel for conversations about intellectual property and the existence of genius, I think.

As a good artist once said, “Good artists steal. Bad artists borrow.”

This has been the fuel for my patch, which I’ve titled “More of the Same,” and which literally mimics the pitches it hears from the spectator in real time.

I hope that it sparks some dialogue about what art is and what its role in society is.

Here is a quote from Aristotle’s Poetics that may summarize my meaning:

“inherent in man from his earliest days; he differs from other animals in that he is the most imitative of all creatures, and he learns his earliest lessons by imitation.  Also inborn in all of us is the instinct to enjoy works of imitation”

some great links:

Great history of mimicry:

http://csmt.uchicago.edu/glossary2004/mimesis.htm

Interesting thoughts on mimicry and art:

http://www.artandeducation.net/paper/the-mimicry-of-artistic-practices-in-not-a-novelty-why-art-institutions-still-lack-a-method-to-support-this-phenomenon/

http://artquill.blogspot.co.uk/2011/04/is-it-appropriation-or-mimicry-art.html

http://www.npr.org/2011/06/14/137111305/the-art-of-mimicry-a-trip-down-memory-lane

Ideas for Pure Data Exhibit

:D

So many ideas, so little time…..

I was thinking about integrating Pure Data with Blender and using the microphone input to control the movement of 3D objects live…

but then I started working more with Blender and thought… goodness gracious… this is going to take some getting used to…

Pure Data also has 3D graphic capabilities through the GEM object library (using OpenGL), but I’m beginning to hate 3D modelling in general… sooooo many vectors. It’s really making me more OCD than I already am.

So I thought, I’ll focus on sound for this one.

I started making a patch during class that can turn microphone input into MIDI data and play it back.

At first I was just using the volume level of the input and mapped those values onto MIDI values, which had a surprisingly good effect, but could certainly be better.
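
That first volume-to-MIDI idea is easy to sketch outside of Pd too. Here is a rough Processing equivalent using its Sound library; the note range of 36–96 is my own assumption, not a value from the patch:

import processing.sound.*;

AudioIn mic;
Amplitude amp;

void setup() {
  size(200, 200);
  mic = new AudioIn(this, 0);   // default microphone input
  mic.start();
  amp = new Amplitude(this);
  amp.input(mic);
}

void draw() {
  float level = amp.analyze();                   // roughly 0.0 .. 1.0
  int midiNote = int(map(level, 0, 1, 36, 96));  // louder input -> higher note
  midiNote = constrain(midiNote, 0, 127);        // keep within the MIDI range
  println(midiNote);
}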

So I figured I would beef up the patch I already started for the project and use the fiddle~ object to determine the pitches of incoming microphone data and play them back as musical frequencies.
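
fiddle~ does proper pitch tracking; purely to illustrate the idea, here is a crude spectral-peak estimate in Processing using its Sound library’s FFT. The band count and the assumed 44.1 kHz sample rate are mine, not parameters from the patch:

import processing.sound.*;

AudioIn mic;
FFT fft;
int bands = 512;
float[] spectrum = new float[bands];

void setup() {
  size(200, 200);
  mic = new AudioIn(this, 0);
  mic.start();
  fft = new FFT(this, bands);
  fft.input(mic);
}

void draw() {
  fft.analyze(spectrum);
  int peak = 0;
  for (int i = 1; i < bands; i++) {
    if (spectrum[i] > spectrum[peak]) peak = i;  // loudest frequency band
  }
  float freq = peak * (44100.0 / 2.0) / bands;   // band index -> approximate Hz
  println(freq);
}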

Songbirds come to mind… but I want to find a more theoretically justifiable idea behind this notion of mockery, especially when technology is mimicking humans. Anthropomorphism and personification come to mind. Humans generally seem to love things that remind them of themselves: cats wearing hats, parrots, dogs that flush the toilet.

But I’ll dig deeper into this idea with regard to its implications for technology and see what I turn up.

Cheers!

Sonic Reflections_Mock-Up

In working with the theme of consciousness and perception, I have chosen to make the spectator more conscious of how sound affects their perception of the space around them and of themselves.
Physical spaces and bodies are usually thought of and represented visually, but sound and sight together are the primary senses we use to continually construct our perception of the space around us.
When the human race had to hunt to survive, our ears were equally as important as our sight.
In this project, I have incorporated the sound in the room as part of the visual representation of what is before the camera.

For this installation, I want the viewer to sit  in front of the camera so that they are perceiving themselves sonically and visually as a means of re-evaluating their perception of themselves within their sonic surroundings.

Here is a schematic for how the installation would be seen and used in a gallery setting by the spectator:

Spectator interacting with installation

Spectator perceives themselves on the screen

Spectator finds that their reflection can be altered by the sound in the room (e.g. their voice)

Proposed future development of project

If I were to develop this piece further for a gallery, I would project live camera footage of the room on a wall of the room, and place contact microphones around the perimeter and in the centre of the room. I would use the microphone data to alter the projected camera image dynamically. This would open the scope of sonic perception to everything and everyone in the space.

Creating a Sonic Camera

For the exhibition on March 14th, 2012, I have finally decided on creating a Sonic Camera.

Or perhaps I’ll call it a Sonic Reflection.

I have done some programming tests and found that I can definitely create it within the time period given; it may not be as sleek and sexy as I’d like by the end, but it will work.

Basically, my idea is that I want the spectator to perceive themselves not only visually but also sonically.

In other words, I want the sound in the room, and the sounds they make, to alter their visual reflection/perception of themselves and the room around them.

So, when it’s set up, it will just be an open laptop on a table with a chair in front of it inviting the spectator to sit down.

When they sit down, they will see themselves on the laptop via the live camera feed; however, they will notice that the sound around them, and the sound they make, alters their reflection.

So, when set up, it will look something like this when the spectator enters the room:

Ex. Spectator interacting with their Sonic Reflection

And it won’t be until they get closer and take a seat that they’ll begin to perceive what is going on.

The best part is that this happens dynamically, and the spectator is able to perceive themselves differently: through sound.

On the programming end of it:

I will be using the microphone and camera data real-time.

The microphone data will be gathered and will dynamically affect the pixels of the camera image in real time.

Getting and displaying the microphone data and capturing the camera on screen should be fine, I think.

However, getting the right effect and altering the pixels may be a bit more complex… but I’ll just have to try the pixel commands out and see how to do it.
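
Since the exhibition patch itself is being built in Pure Data, what follows is only a rough sketch of the same idea in Processing, using its Video and Sound libraries: microphone amplitude washes out the live camera image. The specific effect (brightening towards white) is an assumption for illustration, not the final look:

import processing.video.*;
import processing.sound.*;

Capture cam;
AudioIn mic;
Amplitude amp;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);  // default camera at the sketch size
  cam.start();
  mic = new AudioIn(this, 0);
  mic.start();
  amp = new Amplitude(this);
  amp.input(mic);
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  float level = amp.analyze();  // current loudness of the room, roughly 0..1
  cam.loadPixels();
  loadPixels();
  for (int i = 0; i < cam.pixels.length; i++) {
    // louder room -> brighter, more washed-out reflection
    pixels[i] = lerpColor(cam.pixels[i], color(255), level);
  }
  updatePixels();
}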

I am sure I can have something finished enough at the end.

Either way, it’s an exciting challenge for me.

The Wonderful World of Arrays and For Loops

Hello All,

Here are the homework assignments for tomorrow, with the exception of the oscillating Lissajous pattern one, which I will try again later, but my brain is having trouble mapping sound-oriented things into visuals…

All of these examples needed appropriate use of arrays and for loops. Why? Because it not only makes things more efficient and easier for you to code, but it also allows you to customize the same code for other purposes by changing variables instead of random parameters that map onto a value you’ve already forgotten. Why write a thousand lines when you can write two?

Anyhow, here is a picture of my look-alike for A1. Simple but effective. I started doing the others, but they all pretty much implement the same concept, so why beat a dead horse?

My A1 Version

And here is my code for it. I am sure it could be more efficient if I were better at math:


// Assignment to create a set of drawings like those in this folder – Try and recreate the drawings A1, A2, A3

// Suggested steps in developing the code:
//  a) recreate using the most obvious means – drawing primitives and coordinates – laborious but effective
//  b) replace repetitive steps by for loops
//  c) replace parameters by variables, and place these at the top of the program

//This Code will create circles in diagonals from the top left to bottom right, vice versa

float[ ] myCircleX;
float[ ] myCircleY;
float[ ] myCircleY2; //the inverse of myCircleY to make 2nd diagonal
float distX; //the distance between the x pos of circle radius
float distY; //the distance bet. the y pos of circle radius

float circleW; //the width of the ellipse
float circleH; //the height of the ellipse
float currentX;
float currentY;
float currentY2;

void setup(){
size(500,500);
myCircleX = new float [10];
myCircleY = new float [10];
myCircleY2 = new float [10];
distX = width/10;
distY = height/10;

circleW = (distX/2)-5; //width of ellipse = radius minus offset of 5
circleH = (distY/2)-5;

currentX = circleW+5; //= radius of circle plus 5 offset
currentY = circleH+5;
currentY2 = height-(circleH+5);
//loops fills in x values for circles
for(int i=0; i<myCircleX.length; i++){
myCircleX[i] = currentX;
currentX = currentX + distX;
}

//fill in y circle values into array
for(int i=0; i<myCircleY.length; i++){
myCircleY[i] = currentY;
currentY = currentY + distY;
}

for(int i=0; i<myCircleY2.length; i++){
myCircleY2[i] = currentY2;
currentY2 = currentY2 - distY;
}

}

void draw(){
background(0);

fill(255, 0, 0); //set the fill colour once, before drawing, so every ellipse is red
for(int i=0; i<myCircleX.length; i++){
ellipse(myCircleX[i], myCircleY[i], circleW*2, circleH*2);
}

for(int i=0; i<myCircleX.length; i++){
ellipse(myCircleX[i], myCircleY2[i], circleW*2, circleH*2);
}

}

Here is how I made the Dark Star more efficient and connected the beginning and ending points.

It’s all just replacing repetitive things with for loops

My Vertices Fix


size(1000,1000);  //size of window
smooth(); //tells the renderer to apply anti-aliasing
background(255,0,0,255);  //set the window background color – the 255 is not strictly necessary!
int [ ] vertexX;
int [ ] vertexY;
vertexX = new int [6];
vertexY = new int [6];

vertexX[0]=250;
vertexX[1]=750;
vertexX[2]=300;
vertexX[3]=500;
vertexX[4]=700;
vertexX[5]=250;

vertexY[0]=200;
vertexY[1]=200;
vertexY[2]=500;
vertexY[3]=100;
vertexY[4]=500;
vertexY[5]=200;

//vertices are used to define the points within a shape
//and then Processing tries to make sense of them in terms of any shape you specify
//if you don’t specify anything – it will just draw lines between each point

//you must always start with ‘beginShape’ and ‘endShape’
beginShape(); //must always have this

noFill();
strokeWeight(3.0);

//replace the busy work with a for loop to draw the shape
for(int i=0; i<vertexX.length;i++){
vertex(vertexX[i], vertexY[i]);
}

/*
vertex(250,200);  //first vertex
vertex(750,200);  //next vertex
vertex(300,500);
vertex(500,100);
vertex(700,500);
vertex(250,200);
*/

endShape();  //you always have to do this

//very coyly introducing how you print to the screen (as opposed to the console)
PFont font = loadFont("CourierNewPSMT-12.vlw");  //first you grab a font file
textFont(font); //then you tell the program to use that font – until you specify another, for example
fill(0); //font color
//println("250, 250");

//replace the bulky text repetition with a classy for loop
for(int i=0; i<vertexX.length;i++){
text(("" + vertexX[i] + "," + vertexY[i]), vertexX[i], vertexY[i]);
}

/*
text("250, 250", 250, 250);  //then you print the text
text("750, 200", 750, 200);  //then you print the text
text("300, 500", 300, 500);  //then you print the text
text("500, 100", 500, 100);  //then you print the text
text("700, 500", 700, 500);  //then you print the text

*/

Processing – 2 – More Programming (Tom Schofield), 16/2/11

This week we continue with some programming fundamentals, namely: arrays and array lists, “for” loops, functions and scope, classes, and libraries. Aims and Objectives: This session is once again designed to be as “ground up” as possible within the restraints of time. We will focus more on the tricky bits that are likely to trip people [...]

Development Procedure

Potential Encounter Scenario (‘User’ Scenario)

On your way home from work, you notice a strange orange glow emanating from a back alley. You’ve walked past this same part of town almost every day for the past three years and it’s always been the same litter-strewn back street, but tonight there is a strange orange glow at the far end. You move in for a closer look and see a small house-like shape with a pupil of subdued orange light in the centre of the roof. It’s an eye. Whatever this papercraft model is, it appears to be sleeping. Not that this thing has a circadian rhythm – it’s just a piece of cardboard – probably some kind of arts project. Suspiciously, you step forward, and feel a sudden jump in your heart as the eye opens, throwing a beam of light onto your body, accompanied by a strange mechanical sound. Who the hell put this thing here? You take out your iPhone, capture a quick bit of footage, and head back home out of the cold.


TECHNICAL DESCRIPTION and STAGES OF DEVELOPMENT


Proximity Activated LED


Click for video.

Ping and LED Code:

const int pingPin = 7;
const int led = 9;

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  pinMode(led, OUTPUT);
}

void loop()
{
  // establish variables for duration of the ping,
  // and the distance result in inches and centimeters:
  long duration, inches, cm;

  // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
  // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);

  // The same pin is used to read the signal from the PING))): a HIGH
  // pulse whose duration is the time (in microseconds) from the sending
  // of the ping to the reception of its echo off of an object.
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);

  // convert the time into a distance
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);

  Serial.print(inches);
  Serial.print("in, ");
  Serial.print(cm);
  Serial.print("cm");
  Serial.println();

  delay(100);

  if (cm < 20) {              // if the object is closer than 20 cm to the sensor...
    digitalWrite(led, HIGH);  // ...turn the LED on
    delay(1000);              // wait 1 second
  }
  else {
    digitalWrite(led, LOW);   // if the object is further away than 20 cm, turn the LED off
  }

}

long microsecondsToInches(long microseconds)
{
  // According to Parallax's datasheet for the PING))), there are
  // 73.746 microseconds per inch (i.e. sound travels at 1130 feet per
  // second).  This gives the distance travelled by the ping, outbound
  // and return, so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds)
{
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}


Proximity Activated Servo


Click for Video

Ping and Servo Code:

const int pingPin = 7;
const int servo = 9;

#include <Servo.h>
 
Servo myservo;  // create servo object to control a servo 
                // a maximum of eight servo objects can be created 
 

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  myservo.attach(servo);  // attaches the servo on pin 9 to the servo object
}

void loop()
{
  // establish variables for duration of the ping, 
  // and the distance result in inches and centimeters:
  long duration, inches, cm;

  // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
  // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);

  // The same pin is used to read the signal from the PING))): a HIGH
  // pulse whose duration is the time (in microseconds) from the sending
  // of the ping to the reception of its echo off of an object.
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);

  // convert the time into a distance
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);
  
  Serial.print(inches);
  Serial.print("in, ");
  Serial.print(cm);
  Serial.print("cm");
  Serial.println();
  
  delay(100);

  if (cm < 80) {         // if the object is closer than 80 cm...
    myservo.write(0);    // ...move the servo to 0 degrees
  }
  else {
    myservo.write(100);  // otherwise move it to 100 degrees
  }

}

long microsecondsToInches(long microseconds)
{
  // According to Parallax's datasheet for the PING))), there are
  // 73.746 microseconds per inch (i.e. sound travels at 1130 feet per
  // second).  This gives the distance travelled by the ping, outbound
  // and return, so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds)
{
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}


Servo with Eye and Light Test


Footage


Rationale for Technology Choices

For detecting the proximity of the ‘user’ or the human animal who might happen across this on their way home from a late night at the office, I chose to use an Ultrasonic Rangefinder (over other proximity sensors such as an IR sensor) because the piece is designed to be situated outside where light interference would cause problems with detection.

The PING))) ultrasonic rangefinder is fairly accurate to within about 3 cm, which suited this work, as the servo needs to be activated when the user is about one meter away. A potential problem with this choice is that the user may be wearing sound-absorbing materials such as wool, but in my testing this did not appear to be a problem.

My choice of servo was limited due to financial constraints at the time. The flaw of this basic servo is the distinct sound it makes upon activation – something which immediately reveals the electro-mechanical interior of what might otherwise be a fairly enigmatic structure. A better choice would have been a small rotary motor, which could have been set up to receive a smaller current (and thus make a quieter mechanical sound) and wind up a piece of fabric painted to resemble a closed eye. The effect would have been closer to feeling like your approach suddenly woke this creature up, and it’s now looking you in the eye.

Dreaming Homes – Concept

Basic Description
Sensor-activated, made-from-waste forms that attempt to evoke a sense of mystery and character in what might otherwise be seen as an inanimate husk of matter.
Film


Concept and Background Thought

What is the relationship between the ecological crisis and our day-to-day sensory experience of the world we inhabit? This is the question my work on the Doing module has grown out of, and it will be explored in more depth in the Thinking module.

The central concept is that today the vast majority of those who inhabit modern technological landscapes are victims of a perceptual crisis which, in the West, has its roots in Descartes’ mind-body split and in the emergence of a language whose written words (including these letters) are entirely divorced from the world they re-present.

The perceptual crisis has many other roots, but the belief in nature as an inert machine devoid of any sensitivity or sentience has become deeply entrenched in today’s western culture.

Do we have any chance of healing our conflicted relationship with the planet that grew us and continues to sustain us while we continue to speak of the more-than-human community as a set of resources, devoid of any sensitivity or sentience?

Will these cardboard forms solve the crisis of perception? No. They are an early experiment in evoking a sense of intrigue and personality in discarded material salvaged from the city.

DOING: Max MSP – Fragmentation

The theme of the Max MSP doing module was “fragmentation”. To me, “fragments” suggest fragments of memory, or remembering fragments of things, so I wanted to make something that would record snippets of the user’s life.

The first thing I had to do was establish precisely what aspects of someone’s life I was trying to capture. What parts of your day are actually worth savouring? It’s not going to be things like buying milk, watching TV, or cooking dinner… you’re going to value more important things than that. So I decided to capture moments when the user experiences an emotional surge, i.e. when they are excited, distressed, or nervous.

To do this I created a circuit which uses an LED and an LDR to pick up your heartbeat. It works by detecting the change in how much light passes through your finger: when your heart beats, blood pulses into your finger and it momentarily becomes more opaque; between pulses, the blood flows out and your finger becomes more translucent again.

The LDR detects these minute changes in light level. I used Arduino2Max (as, after battling with both for a while, it seemed decidedly simpler than Firmata etc.!) to read in the values from analogue port zero.

So basically, my Max MSP patch reads in the analogue pin and, when the pulse hits 80 beats per minute or above (the approximate “elevated heart rate” for an average person), records a 5-second snippet of video and sound. These are all saved, the idea being that you could look back through them and have a record of fragments of your high and low moments. What I really like about it is that you might not even be aware of some of them; you could look back through your day and find little fragments you weren’t aware were making you feel particularly happy or sad.
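
The patch itself is a Max MSP graph fed by Arduino2Max, but the beat-to-BPM logic it describes can be sketched in plain code. Here is a rough Processing version that assumes the Arduino simply prints one raw sensor reading per line over serial; the threshold value, the serial port index, and the println standing in for the record trigger are all assumptions for illustration:

import processing.serial.*;

Serial port;
int threshold = 600;       // assumed raw LDR value that counts as a "beat"
boolean aboveLast = false;
int lastBeat = 0;
float bpm = 0;

void setup() {
  port = new Serial(this, Serial.list()[0], 9600);  // assumed port and baud rate
}

void draw() {
  while (port.available() > 0) {
    String line = port.readStringUntil('\n');
    if (line == null) continue;
    int value = int(trim(line));
    boolean above = value > threshold;
    if (above && !aboveLast) {                 // rising edge = one heartbeat
      int now = millis();
      if (lastBeat > 0) bpm = 60000.0 / (now - lastBeat);
      lastBeat = now;
      if (bpm >= 80) {
        // stand-in for the patch's action: record a 5-second clip
        println("elevated heart rate (" + nf(bpm, 0, 1) + " bpm)");
      }
    }
    aboveLast = above;
  }
}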

Here is my patch in full, please excuse my tragic face, I am full of the cold!

And in presentation mode, with the playback/viewing window becoming more of a feature:

I think the idea could have some nice applications within wearable technology. A rough concept I thought of would be for a “memory hoody”… sew a small camera into the front of the hood, run the sensor down out of the sleeve, and you could capture memories all day. The circuitry could be stored in a small pocket somewhere on the hoody.

All in all, my Max MSP experience hasn’t been as bad as I thought it would be. I found it a bit frustrating at first, as everything I do is very physical, and everything I thought I wanted to do with Max would have been much better realised with another program, so I had to think hard about what I wanted to achieve. I never really do anything screen-based, so this was a first for me. I even managed to put aside the electronics to concentrate on the patch!

Things I would have liked to have done include adding a timestamp to the videos, and maybe a more comprehensive playback/viewing screen and controls. I think I’ve done okay though – I wanted to interface with Arduino, which I managed, and I did achieve the objectives I set out to.