
The “More of the Same” PD Patch

Here is my More of the Same PD patch for the exhibit tomorrow. It’s a bit tongue-in-cheek.

You can download the code by clicking here.

Just open the MoreOfTheSame.pd file to try it out. SoulCatcher.pd is simply the subpatch whose code is mapped onto the parent file.

To view the theoretical ideas that sparked the creation of this patch, please click here.

Mostly, I hope this sparks some dialogue about art, progress, and art’s purpose.

When the button is clicked, the patch tracks the pitch heard on the microphone for two seconds and outputs the pitches as MIDI notes. These notes are then converted into frequencies and stored in a table, then played back to the spectator as a computer-generated version of what they just spoke or sang.
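As a rough sketch of the conversion step, here is the standard MIDI-to-frequency math the patch relies on, written in Python rather than Pd purely for illustration (the function names are my own, not part of the patch):

```python
import math

def midi_to_freq(m):
    # Standard equal-temperament conversion: MIDI note 69 = A440
    return 440.0 * 2 ** ((m - 69) / 12.0)

def freq_to_midi(f):
    # The inverse: a tracked frequency back to a (fractional) MIDI note
    return 69 + 12 * math.log2(f / 440.0)

# A sung A4 tracked as MIDI note 69 maps back to 440 Hz
print(midi_to_freq(69))             # → 440.0
print(round(freq_to_midi(261.63)))  # middle C → 60
```

In the patch itself this is what [mtof] does internally before the frequencies are written into the table.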

If I were going to develop this further, I would ideally convert it into an app, although many apps like this one already exist. However, the idea for this piece stems from mimicry, so the more the merrier, and the more banal the better.

In fact, I think the best life for this piece would be a gallery space with no technology involved whatsoever. It would simply be a canvas displaying this exact picture:

More of the Same

Here is the code of More of the Same which can also be downloaded by clicking the link provided above:

More of the Same Code

Mimicry Theory

In looking more into anthropomorphism and personification throughout history, I became more infatuated with the idea of mimicry.

In particular, the patch I am creating basically mimics a human voice. However, there are tons of factories building machines that aim to mimic human mechanisms… ultimately replacing jobs that humans used to do.

And all newer digital technologies are more advanced versions of older, mechanistic technologies… which raises the question of whether we are actually progressing as a society or just ruminating in our own thoughts, mimicking the ideas of our grandfathers and slapping a corporate symbol on them.

In some ways, I wonder if our obsession with technological advancement in HCI and robotics stems from an egotistical inclination to make the world around us serve as a reflection of ourselves.

Anyhow, enough of that jibber jabber. I’ll get to the idea of soul catching…. which is the name of my patch.

So, this idea of mimicry and repetition led me to the idea of technology being able to capture a person’s soul by personifying them.

If you look through texts about new media (mostly written in the 1960s), this idea of technology being able to capture a person’s soul runs rampant.

A book that I find most interesting is The Invention of Morel by Adolfo Bioy Casares. If you have a moment, it’s a good, short read.

This book is a great text in many ways, but I’ll just highlight the main theme of the story. Basically, a man becomes obsessed with a woman. He is on an island that seems to be deserted, but he comes to observe this woman every day. He then sees the woman with her husband. He becomes so obsessed that he films the couple, crops out the husband, overlays a video of himself, and watches this video repeatedly so that he can live with her eternally.

However, the act of mediating his body onto film results in the slow decay of that body, leading to his death.

The idea of technology being able to capture one’s soul and kill them is very strongly associated with photography, partially because photography was first used to take pictures of the dead in order to immortalize them and keep them in the family home.

This idea was associated with voice recordings as well. Antonin Artaud had very interesting ideas about sound recordings and their ability to immortalize the body, or even act as a means of invading another person’s body. He explored this by recording himself inflicting self-harm, making guttural noises, and repeating incantations. He is a very important theatrical figure who created the Theatre of Cruelty; however, I find his recordings and ideas on technology most interesting. It’s the idea that technologies create multiple versions of our body… and that these versions are more important than our actual selves. A terrifying thought, but an interesting one.

I would post one of his recordings here, but the UbuWeb servers are currently down! GASP!!! So check them out if you have time. I suppose what this is getting at is the ability for technology to embody us… and vice versa.

However, I think mimicry has a very interesting place in artistic practice today.

A common form that mimicry takes is comedians doing impressions of people, which audiences tend to find very amusing. But there is constant debate in the art world about mimicry vs. appropriation in artistic practice. For example, some of the artworks criticized are those of street artists who copy a painting, like the Mona Lisa, and add a wig, a moustache, etc.

This serves as fuel for conversations about intellectual property and the existence of genius, I think.

As the saying often attributed to Picasso goes, “Good artists copy. Great artists steal.”

This has been the fuel for my patch, which I’ve titled “More of the Same,” and which literally mimics, in real time, the pitches it hears from the spectator.

I hope that it sparks some dialogue about what art is and what its role in society is.

Here is a quote from Aristotle’s Poetics that may summarize my meaning:

“[Imitation is] inherent in man from his earliest days; he differs from other animals in that he is the most imitative of all creatures, and he learns his earliest lessons by imitation. Also inborn in all of us is the instinct to enjoy works of imitation.”

Some great links:

Great history of mimicry:

Interesting thoughts on mimicry and art:

Ideas for Pure Data Exhibit


So many ideas, so little time…

I was thinking about integrating Pure Data with Blender and using the microphone input to control the movement of 3D objects live…

but then I started working more with Blender and thought… goodness gracious… this is going to take some getting used to…

Pure Data also has 3D graphics capabilities through the GEM object library (using OpenGL), but I’m beginning to hate 3D modelling in general… sooooo many vectors. It’s really making me more OCD than I already am.

So I thought, I’ll focus on sound for this one.

I started making a patch during class that can turn microphone input into MIDI data and play it back.

At first I was just using the volume level of the input and mapping those values onto MIDI values, which had a surprisingly good effect, but it could certainly be better.
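The volume-to-MIDI step is just a linear rescaling. Here is a minimal sketch in Python rather than Pd (the function name and the note range chosen are my own assumptions, for illustration only):

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    # Linear rescaling, like Processing's map() function,
    # with the input clamped to its expected range first
    x = max(in_lo, min(in_hi, x))
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Map an RMS amplitude in [0.0, 1.0] onto a MIDI note range,
# here arbitrarily C2-C7 (notes 36-96)
amp = 0.5
note = int(round(map_range(amp, 0.0, 1.0, 36, 96)))
print(note)  # → 66
```

In Pd the same arithmetic can be built from [-], [*], and [/] objects between the envelope follower and the note output.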

So I figured I would beef up the patch I had already started for the project and use [fiddle~] to determine the pitches of incoming microphone data and play them back as musical frequencies.

Songbirds come to mind… but I want to find a more theoretically justifiable idea behind this notion of mockery, especially when technology is mimicking humans. Anthropomorphism and personification come to mind. Humans generally seem to love things that remind them of themselves: cats wearing hats, parrots, dogs that flush the toilet.

But I’ll dig deeper into this idea in regard to its implications for technology and see what I dig up.


Sonic Reflections

Without further ado, here is the product of my labour!

The Sonic Reflections installation piece, displayed on 14 March 2012.

The annotated Processing code can be downloaded by clicking this: SonicReflections_Final

And further information about this project can be found HERE or by downloading this handout: SonicR_Handout.

A VIMEO video of the project will be uploaded ASAP.

If you want to run the code on your own computer, please remember to change the name of the camera in the code to the camera on your computer.

You can find the name of your camera by running the following code in Processing:


import processing.video.*;

void setup() {
  size(200, 200);
  // print the names of the available capture devices to the console
  println(Capture.list());
}
The camera names will be printed, in quotation marks, in the console. Then replace the line:
 myCapture = new Capture(this, width, height, "USB Video Device-WDM", 30);

substituting the name of your camera, in quotation marks, for "USB Video Device-WDM".

Here is the technical schematic:

Technical Schematic


Sonic Reflections_Mock-Up

In working with the theme of consciousness and perception, I have chosen to make spectators more conscious of how sound affects their perception of the space around them and of themselves.
Physical spaces and bodies are usually thought of and represented visually, but sound and sight are both primary senses that we use to construct our perception of the surrounding space on a continual basis.
When the human race had to hunt to survive, our ears were just as important as our eyes.
In this project, I have incorporated the sound in the room as part of the visual representation of what is before the camera.

For this installation, I want viewers to sit in front of the camera so that they perceive themselves sonically and visually, as a means of re-evaluating their perception of themselves within their sonic surroundings.

Here is a schematic for how the installation would be seen and used in a gallery setting by the spectator:

Spectator interacting with installation

Spectator perceives themselves on the screen

Spectator finds that their reflection can be altered by the sound in the room (e.g. their voice)

Proposed future development of project

If I were to develop this piece further for a gallery, I would project live camera footage of the room on a wall of the room, and place contact microphones around the perimeter and in the centre of the room. I would use the microphone data to alter the projected camera image dynamically. This would open the scope of sonic perception to everything and everyone in the space.

Current Problems with Sonic Reflection

I was able to get the microphone data and display it visually in real time quite simply, which is good; however, mapping it onto the pixels and getting the effect I want will be another story.

For the moment, though, I have the camera displaying, with visual effects created by the microphone data overlaid upon the camera capture.

The camera capture is lagging behind, though! I don’t know if this is a limit of Processing itself or of the operating system.

I can’t run this program in Linux (which I’m livid about), so I am using my Windows virtual machine instead. I have QuickTime and WinVDIG installed, etc., and I am running at 30 frames per second, but the capture is lagging behind by quite a few minutes…

I’ll read some more troubleshooting threads to see what, if anything, I can do about it.

Next, I just have to read up on pixels and how to alter them dynamically with the microphone input.

However, what is tripping me up is WHAT to do with the pixels exactly. It’s one of those philosophical issues where I think: well, there is no correct way to visualize sound, because you can’t see it, and therefore there is no physical model or goal; it’s just interpretation. I think I will go with waves, because waves are scientifically sound (frequencies, oscillation, and the like), but creating something that looks like waves will be challenging for me because my mind doesn’t work visually that well…
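One possible wave effect, sketched here in plain Python on a bare pixel array rather than in Processing (the function, the row-shift idea, and all the numbers are my own assumptions, not the installation’s actual code): shift each row of the frame sideways by a sine offset whose strength follows the microphone amplitude, so silence leaves the image untouched and loud input ripples it.

```python
import math

def wave_displace(rows, amplitude, wavelength=8.0):
    # Rotate each row of a 2-D pixel grid sideways by a sine offset
    # scaled by the current mic amplitude (0.0 = silence, 1.0 = loud).
    out = []
    for y, row in enumerate(rows):
        shift = int(round(amplitude * 10 * math.sin(y / wavelength)))
        k = -shift % len(row)          # rotation index, wraps at the edge
        out.append(row[k:] + row[:k])  # shifted copy of the row
    return out

# A toy 8x4 "frame" whose pixels are just their x coordinates
frame = [[x for x in range(8)] for _ in range(4)]
quiet = wave_displace(frame, 0.0)  # silence: image unchanged
loud  = wave_displace(frame, 1.0)  # louder input: stronger ripple
print(quiet == frame)  # → True
```

In a Processing sketch the same idea would operate on `pixels[]` inside `draw()`, with the amplitude coming from the audio input each frame.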

Hopefully Tinker Tuesdays will help!

Tactile Sounds – a virtual exhibition

This exhibition shows how some artists have interpreted and transformed a user’s experience of sound, using our tactile and visual senses to enhance our experience of the music.

These artists don’t allow music to be listened to passively; instead, they engage users to interact with sound through tactile interfaces and specific body gestures that create sounds.

Jeff Hoefslinked and Stijn Schiffeleer – Beat Blocks, 2006.

This piece is a tangible sound interface which uses blocks with light blue tape to denote rhythmic phrases by the placement and number of stripes on each block. Where the user places a block on the board determines the timbre, or instrument, used to play that specific rhythmic phrase.

In the first example shown in the video, the block with two stripes plays two eighth notes within the first beat of a four-beat (4/4) measure.

This piece eliminates the digital abstraction of beat machines and music programs, and makes the rhythmic notation and instrumental mixing board tactile.

Click here to view the embedded video.

David Bouchard – Sound Mites, 2007

This piece transforms a sound by employing a community of musical boxes which express qualities that give them a sense of personality and artificial intelligence.

These boxes can only create sound when they are used in a group because their pitch changes dynamically based upon their placement within the larger community of boxes.

This logic gives them a social quality which visualizes our own human interactions through sound, LED brightness, and spatial relationships.

Click here to view the embedded video.

Steve Mann and Chris Aimone – FUNtain Hydraulophones, 2007

This piece takes tactile sound experiments out of the gallery and even out of digital technology by placing a musical sculpture into a city setting where anyone can interact with it on a daily basis.

This sculpture allows you to “play water” in the same way one would play a flute. The best part is that the sculpture creates sound through innovative mechanical engineering, which reawakens the sound capabilities of the objects and architectures around us, outside the computer screen.

Click here to view the embedded video.

Martijn van Wagtendonk – Trickle – Museum of Contemporary Art of Georgia, 2009

Martijn creates an immersive environment for the spectator to explore; based upon the audience’s location in the installation, various kinetic, lighting, audio, and visual elements are triggered which affect the spectator’s experience of the sound in the space.

Here, the spatial properties of sound are illuminated by the spectator’s placement within the installation and by the architecture and physical installation itself. Although a bit extraordinary, I think this installation reminds the spectator that the sounds around us are constantly changing, but sometimes we have to change our physical location in order to discover them.

Click here to view the embedded video.

Acura and Jonathan Jarvis – Interactive Oracle – 2006/2007

This is an example of how touching a table and using body gestures produces not only music but also visual representations of the users’ gestures and the sounds produced.

The users also have control over the sounds and visuals produced by selecting various categories on the rim of the table.

It also allows more than one person to play music at the same time, which makes it a viable instrument in performance.

Click here to view the embedded video.

The Wonderful World of Arrays and For Loops

Hello All,

Here are the homework assignments for tomorrow, with the exception of the oscillating Lissajous pattern one, which I will try again later; my brain is having trouble mapping sound-oriented things into visuals…

All of these examples needed appropriate use of arrays and for loops. Why?

Because they not only make things more efficient and easier for you to code, but also allow you to customize the same code for other purposes by changing variables instead of random parameters that map onto a value you’ve already forgotten.

Why write a thousand lines when you can write two???

Anyhow, here is a picture of my look-alike for A1. Simple but effective. I started doing the others, but they all pretty much implement the same concept, so why beat a dead horse?

My A1 Version

And here is my code for it. I am sure it could be more efficient if I were better at math:

// Assignment to create a set of drawings like those in this folder – Try and recreate the drawings A1, A2, A3

// Suggested steps in developing the code:
//  a) recreate using the most obvious means – drawing primitives and coordinates – laborious but effective
//  b) replace repetitive steps by for loops
//  c) replace parameters by variables, and place these at the top of the program

//This Code will create circles in diagonals from the top left to bottom right, vice versa

float[] myCircleX;
float[] myCircleY;
float[] myCircleY2; //the inverse of myCircleY, to make the 2nd diagonal
float distX; //the distance between the x positions of the circle centres
float distY; //the distance between the y positions of the circle centres

float circleW; //the width of the ellipse
float circleH; //the height of the ellipse
float currentX;
float currentY;
float currentY2;

void setup() {
  size(500, 500); //size must be set before computing the spacing (500 is an arbitrary choice)
  myCircleX = new float[10];
  myCircleY = new float[10];
  myCircleY2 = new float[10];
  distX = width/10;
  distY = height/10;

  circleW = (distX/2)-5; //width of ellipse = radius minus offset of 5
  circleH = (distY/2)-5;

  currentX = circleW+5; //= radius of circle plus 5 offset
  currentY = circleH+5;
  currentY2 = height-(circleH+5);

  //loop fills in the x values for the circles
  for (int i=0; i<myCircleX.length; i++) {
    myCircleX[i] = currentX;
    currentX = currentX + distX;
  }

  //fill the y circle values into the array
  for (int i=0; i<myCircleY.length; i++) {
    myCircleY[i] = currentY;
    currentY = currentY + distY;
  }

  //and the inverse y values for the second diagonal
  for (int i=0; i<myCircleY2.length; i++) {
    myCircleY2[i] = currentY2;
    currentY2 = currentY2 - distY;
  }
}

void draw() {
  fill(255, 0, 0); //set the fill before drawing so every circle is red

  for (int i=0; i<myCircleX.length; i++) {
    ellipse(myCircleX[i], myCircleY[i], circleW*2, circleH*2);
  }

  for (int i=0; i<myCircleX.length; i++) {
    ellipse(myCircleX[i], myCircleY2[i], circleW*2, circleH*2);
  }
}


Here is how I made the Dark Star more efficient and connected the beginning and ending points.

It’s all just replacing repetitive things with for loops.

My Vertices Fix

size(1000, 1000);  //size of window
smooth(); //tells the renderer to apply anti-aliasing
background(255, 0, 0);  //set the window background color

//the coordinates are taken from the original vertex calls;
//the 6th point repeats the 1st so the beginning and ending points connect
int[] vertexX = {250, 750, 300, 500, 700, 250};
int[] vertexY = {200, 200, 500, 100, 500, 200};

//vertices are used to define the points within a shape
//and then Processing tries to make sense of them in terms of any shape you specify
//if you don’t specify anything, it will just draw lines between each point

//you must always start with ‘beginShape’ and end with ‘endShape’
beginShape();

//replace the busy work with a for loop to draw the shape
for (int i = 0; i < vertexX.length; i++) {
  vertex(vertexX[i], vertexY[i]);
}

endShape();  //you always have to do this

//very coyly introducing how you print to the screen (as opposed to the console)
PFont font = loadFont("CourierNewPSMT-12.vlw");  //first you grab a font file
textFont(font); //then you tell the program to use that font, until you specify another
fill(0); //font color

//replace the bulky text repetition with a classy for loop
//(skip the last point, which duplicates the first)
for (int i = 0; i < vertexX.length - 1; i++) {
  text(vertexX[i] + ", " + vertexY[i], vertexX[i], vertexY[i]);
}


If Dinner Tables were DJ Tables

Visualization of a sound file in Pure Data

These are some random ideas that have been roaming around my head concerning possible future mini projects leading up to my final project.

First is visualizing food-related sounds in order to better understand the relationship between the visual aesthetics of food and the sound it produces.

Here is an example of what could be done using Pure Data to visualize sound, which I took from a PD tutorial:

I find this interesting because it raises a few questions about:

1) The materiality of sound

2) Truth in data and algorithms, because who is to say which algorithm is better than another?

Perhaps the best algorithm would try to recreate the material source of the sound, so that a sound clip of an apple would look like an apple. Of course, there isn’t much point in this.

I think visualizing sound gives the audience another way to view sound and how it correlates to their physical environment.

Mostly another layer of abstraction, from which to view things more clearly.


Another idea: what if the dinner table were a DJ table????

From what I have seen, DJ tables today are actually sensors built to receive DJs’ physical gestures and interpret them as sound.

Therefore, focusing on the physical gestures used in DJing, what if I dissociated the typical expected sounds, those attributed to an actual vinyl record being scratched or played, etc., and replaced them with food-related sounds, so that one could cook and consume a dinner by using the same physical gestures as a DJ?


Along this same trajectory of thought, the physical DJ gestures could also produce visuals, food-related or otherwise. Or the gestures could trigger video clips, so that one has to play the DJ tables in order to complete the story.


And, in focusing on DJ gestures, you could also have a dance performance that uses only the gestures one would use on a turntable… perhaps set to classical music.


With that said, turntables have an abundance of possibilities, especially since they can be used to explore so many theories surrounding new media, remix theory, and embodiment/disembodiment (liveness).


Some amazing artists who come to mind when thinking about theoretical concepts that can be explored on turntables are John Cage, Steve Reich, and DJ Spooky.

Here are some videos of contemporary works which use DJing in a creative way.

If I were composing for turntables, however, I would think John Cage jazz meets 1980s house music. :D

Click here to view the embedded video.

That’s an example of how the gestures and physical apparatus of DJing are used creatively to produce unexpected outcomes.

Click here to view the embedded video.

There’s an example of a visualization of a music performance which uses circles that resemble vinyl records.

Click here to view the embedded video.

Here is another visualization of a music performance, except here the correlation with vinyl records is clearer, as the entire visual light display occurs on a single disk rotating at a constant speed (i.e. like a vinyl record).

There are plenty more examples, but these are what I have found of interest so far.



