Nina Limardo – My Work. My Art. My Life.

The “More of the Same” PD Patch

12.06.2012 (2:07 pm) – Filed under: DOING,VIEW ALL POSTS

Here is my More of the Same PD patch for the exhibit tomorrow. It’s a bit tongue-in-cheek.

You can download the code by clicking here.

Just open the MoreOfTheSame.pd file to try it out. SoulCatcher.pd simply contains the code that is loaded into the parent file.

To view the theoretical ideas that sparked the creation of this patch, please click here.

Mostly, I hope this sparks some dialogue about art, progress, and art’s purpose.

When the button is clicked, the patch tracks the pitch picked up by the microphone for two seconds and outputs the pitches as MIDI notes. These pitches are then converted into frequencies and stored in a table, then played back to the spectator as a computer-generated version of what they just spoke or sang.
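For anyone curious about the MIDI-to-frequency step: in Pd this conversion is typically done with the mtof object, but here is the same formula written out as a tiny Processing sketch (the function name midiToFreq is just for illustration, not part of the patch):

float midiToFreq(float midiNote) {
  // MIDI note 69 is A4 at 440 Hz; each semitone is a factor of 2^(1/12)
  return 440.0 * pow(2.0, (midiNote - 69.0) / 12.0);
}

void setup() {
  println(midiToFreq(60)); // middle C, roughly 261.63 Hz
  println(midiToFreq(69)); // A4, exactly 440 Hz
}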

If I were going to develop this further, I would ideally convert it into an app, although many apps like this one already exist. However, the idea for this piece stems from mimicry, so the more the merrier, and the more banal the better.

In fact, I think the best life for this piece would be a gallery space with no technology involved whatsoever. It would simply be a canvas displaying this exact picture:

More of the Same

Here is the code for More of the Same, which can also be downloaded via the link provided above:

More of the Same Code

Mimicry Theory

10.06.2012 (1:19 pm) – Filed under: DOING,VIEW ALL POSTS

In looking more into anthropomorphism and personification throughout history, I became infatuated with the idea of mimicry.

In particular, the patch I am creating basically mimics a human voice. However, there are tons of factories building machines that aim to mimic human mechanisms… ultimately replacing jobs that humans used to do.

And all newer digital technologies are more advanced versions of older, mechanistic technologies… which raises the question of whether we are actually progressing as a society or just ruminating in our own thoughts, mimicking the ideas of our grandfathers and slapping a corporate symbol on them.

In some ways, I wonder if our obsession with technological advancements in HCI and robotics stems from an egotistic inclination to make the world around us serve as a reflection of ourselves.

Anyhow, enough of that jibber jabber. I’ll get to the idea of soul catching…. which is the name of my patch.

So, this idea of mimicry and repetition led me to the idea of technology being able to capture a person’s soul by personifying them.

If you look through texts about new media (mostly written in the 1960s), this idea of technology being able to capture a person’s soul runs rampant.

A book that I find most interesting is The Invention of Morel by Adolfo Bioy Casares. If you have a moment, it’s a good, short read.

This book is a great text in many ways, but I’ll just highlight the main theme of the story. Basically, a man becomes obsessed with a woman. He is on an island that seems to be deserted, but he comes to observe this woman every day. He then sees the woman with her husband. He becomes so obsessed that he films the couple, crops out the husband, overlays a video of himself, and watches this video repeatedly so that he can live with her eternally.

However, the act of him mediating his body onto film results in the slow decay of his body, leading to his death.

The idea of technology being able to capture one’s soul and kill you is very strongly associated with photography, partly because it was first used to take pictures of the dead in order to immortalize them and keep them in the family home.

This idea was associated with voice recordings as well. Antonin Artaud had very interesting ideas about sound recordings and their ability to immortalize the body, or even act as a means of invading another person’s body. He explored this by recording himself inflicting self-harm, making guttural noises, and repeating incantations. He is a very important theatrical figure who created the Theater of Cruelty; however, I find his recordings and ideas on technology most interesting. It’s the idea that technologies create multiple versions of our bodies… and that these versions become more important than our actual selves. A terrifying thought, but an interesting one.

I would post a recording of his here, but the UbuWeb servers (ubu.com) are currently down! GASP!!! So check it out when you have time. I suppose what this is getting at is the ability of technology to embody us… and vice versa.

However, I think mimicry has a very interesting place in artistic practice today.

A common form that mimicry takes is comedians doing impressions of people, and audiences tend to find this very amusing. But there is constant debate in the art world about mimicry vs. appropriation in artistic practice. For example, some of the artworks criticized are those of street artists who copy a painting, like the Mona Lisa, and add a wig, a moustache, etc.

This serves as fuel for conversations about intellectual property and the existence of genius, I think.

As a good artist once said, “Good artists steal. Bad artists borrow.”

This has been the fuel for my patch, which I’ve titled “More of the Same” and which literally mimics, in real time, the pitches it hears from the spectator.

I hope that it sparks some dialogue about what art is and what its role in society is.

Here is a quote from Aristotle’s Poetics that may summarize my meaning:

“[Imitation is] inherent in man from his earliest days; he differs from other animals in that he is the most imitative of all creatures, and he learns his earliest lessons by imitation. Also inborn in all of us is the instinct to enjoy works of imitation.”

Some great links:

Great history of mimicry:

http://csmt.uchicago.edu/glossary2004/mimesis.htm

Interesting thoughts on mimicry and art:

http://www.artandeducation.net/paper/the-mimicry-of-artistic-practices-in-not-a-novelty-why-art-institutions-still-lack-a-method-to-support-this-phenomenon/

http://artquill.blogspot.co.uk/2011/04/is-it-appropriation-or-mimicry-art.html

http://www.npr.org/2011/06/14/137111305/the-art-of-mimicry-a-trip-down-memory-lane

Ideas for Pure Data Exhibit

07.06.2012 (12:24 pm) – Filed under: DOING,VIEW ALL POSTS

So many ideas, so little time…..

I was thinking about integrating Pure Data with Blender and using the microphone input to control the movement of 3D objects live…

but then I started working more with Blender and thought… goodness gracious… this is going to take some getting used to…

Pure Data also has 3D graphic capabilities through the GEM object library (using OpenGL), but I’m beginning to hate 3D modelling in general… sooooo many vectors. It’s really making me more OCD than I already am.

So I thought, I’ll focus on sound for this one.

I started making a patch during class that can turn microphone input into MIDI data and play it back.

At first I just used the volume level of the input and mapped those values onto MIDI values, which had a surprisingly good effect, but it could certainly be better.
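Purely as an illustration of that mapping idea (not the Pd patch itself), a Processing sketch using the Minim library would look roughly like this; the 0 to 0.5 input range and the 36 to 84 note range are my own rough assumptions:

import ddf.minim.*;

Minim minim;
AudioInput in;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  // map the current input level (roughly 0 to 0.5 for ordinary speech) onto MIDI notes 36 to 84
  float level = constrain(in.mix.level(), 0, 0.5);
  int midiNote = int(map(level, 0, 0.5, 36, 84));
  println(midiNote);
}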

So I figured I would beef up the patch I had already started for the project and use the fiddle~ object to determine the pitches of the incoming microphone data and play them back as musical frequencies.

Songbirds come to mind… but I want to find a more theoretically justifiable idea behind this notion of mockery, especially when technology is mimicking humans. Anthropomorphism and personification come to mind. Humans generally seem to love things that remind them of themselves: cats wearing hats, parrots, dogs that flush the toilet.

But I’ll dig deeper into this idea with regard to its implications for technology and see what I dig up.

Cheers! 😀

Sonic Reflections

14.03.2012 (1:44 am) – Filed under: VIEW ALL POSTS

Without further ado, here is the product of my labour!

The Sonic Reflections installation piece, displayed on 14 March 2012.

The annotated processing code can be downloaded by clicking this: SonicReflections_Final

And further information about this project can be found HERE or by downloading this handout: SonicR_Handout.

A VIMEO video of the project will be uploaded ASAP.

If you want to run the code on your own computer, please remember to change the name of the camera in the code to the camera on your computer.

You can find the name of your camera by running the following code in Processing:

import processing.video.*; 

void setup() {
  size(200, 200);
  println(Capture.list());
}

You will see the name in quotation marks in the console. Then replace the line:
 myCapture = new Capture(this, width, height, "USB Video Device-WDM", 30);

so that it uses the name of your camera, in quotation marks.

Here is the technical schematic:

Technical Schematic
Sonic Reflections_Mock-Up

14.03.2012 (1:00 am) – Filed under: DOING,VIEW ALL POSTS

In working with the theme of consciousness and perception, I have chosen to make spectators more conscious of how sound affects their perception of the space around them and of themselves.
Physical spaces and bodies are usually thought of and represented visually, but sound and sight are both primary senses that we use to build our perception of the space around us on a continual basis.
When the human race had to hunt to survive, our ears were just as important as our sight.
In this project, I have incorporated the sound in the room into the visual representation of what is in front of the camera.

For this installation, I want the viewer to sit in front of the camera so that they perceive themselves sonically and visually, as a means of re-evaluating their perception of themselves within their sonic surroundings.

Here is a schematic for how the installation would be seen and used in a gallery setting by the spectator:

Spectator interacting with installation

Spectator perceives themselves on the screen

Spectator finds that their reflection can be altered by the sound in the room (e.g. their voice)

Proposed future development of project

If I were to develop this piece further for a gallery, I would project live camera footage of the room on a wall of the room, and place contact microphones around the perimeter and in the centre of the room. I would use the microphone data to alter the projected camera image dynamically. This would open the scope of sonic perception to everything and everyone in the space.

Current Problems with Sonic Reflection

05.03.2012 (10:52 am) – Filed under: VIEW ALL POSTS

I was able to get and display the microphone data visually and in real time quite simply, which is good; however, mapping it onto the pixels and getting the effect I want will be another story.

For the moment, I have the camera displaying, with visual effects created by the microphone data overlaid on the camera capture.

The camera capture is lagging behind though!!!! I don’t know if this is a limit of Processing itself or of the operating system.

I can’t run this program in my Linux install (which I’m livid about), so I am using my Windows virtual machine instead. I have QuickTime and WinVDIG installed, etc., and I am running at 30 frames per second, but the capture is lagging behind by quite a few minutes…

I’ll read some more troubleshooting guides to see what, if anything, I can do about it.

Next, I just have to read up on pixels and how to alter them dynamically with the microphone input.

However, what is tripping me up is WHAT to do with the pixels exactly. It’s one of those philosophical issues where I think: well, there is no correct way to visualize sound because you can’t see it, and therefore there is no physical model or goal… it’s just interpretation. I think I will go with waves, because waves are scientifically sound (frequencies, oscillation, and the like), but creating something that looks like waves will be challenging for me because my mind doesn’t work that well visually…
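To make the wave idea a little more concrete for myself, here is a rough Processing sketch of one way it could work; this is not my final code, and it assumes the Minim library for the microphone input. Each row of the camera image is shifted sideways by a sine wave whose amplitude follows the current input level:

import processing.video.*;
import ddf.minim.*;

Capture cam;
Minim minim;
AudioInput in;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height); // older video-library versions use the named-device constructor instead
  cam.start();                            // newer versions need start(); older ones begin capturing automatically
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  if (cam.available()) cam.read();
  if (cam.width != width || cam.height != height) return; // wait for frames at the requested size
  cam.loadPixels();
  loadPixels();
  float amp = in.mix.level() * 60; // louder input -> larger sideways displacement
  for (int y = 0; y < height; y++) {
    int shift = int(sin(y * 0.05 + frameCount * 0.1) * amp);
    for (int x = 0; x < width; x++) {
      int sx = constrain(x + shift, 0, width - 1);
      pixels[y * width + x] = cam.pixels[y * width + sx];
    }
  }
  updatePixels();
}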

Hopefully Tinker Tuesdays will help!

Creating a Sonic Camera

01.03.2012 (10:42 am) – Filed under: DOING

For the exhibition on March 14th, 2012, I have finally decided on creating a Sonic Camera.

Or perhaps I’ll call it a Sonic Reflection.

I have done some programming tests and found that I can definitely create it within the time given, although it may not be as sleek and sexy as I’d like in the end… but it will work.

Basically, my idea is that I want the spectator to perceive themselves not only visually but also sonically.

In other words, I want the sound in the room, and the sounds they make, to alter their visual reflection/perception of themselves and the room around them.

So, when it’s set up, it will just be an open laptop on a table with a chair in front of it, inviting the spectator to sit down.

When they sit down, they will see themselves on the laptop via the live camera feed; however, they will notice that the sound around them, and the sounds they make, alter their reflection.

So, when set up, it will look something like this when the spectator enters the room:

Ex. Spectator interacting with their Sonic Reflection

And it won’t be until they get closer and take a seat that they’ll begin to perceive what is going on.

The best part is that it happens dynamically, and the spectator is able to perceive themselves differently: through sound.

On the programming end of it:

I will be using the microphone and camera data in real time.

The microphone array data will be gathered and will dynamically affect the pixels of the camera image in real time.

As of yet, I think getting and displaying the microphone data and capturing the camera on screen will be fine.

However, getting the right effect and altering the pixels may be a bit more complex… but I’ll just have to try the pixel commands out and see how to do it.
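As a rough sketch of that first step (and only a sketch; it assumes the Minim library for the audio input and the standard video library for the camera), getting both streams on screen looks something like this, with the microphone level shown as a simple bar rather than altering the pixels yet:

import processing.video.*;
import ddf.minim.*;

Capture cam;
Minim minim;
AudioInput in;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height); // some setups need the named-device constructor with a frame rate
  cam.start();                            // newer versions of the video library need start(); older ones don't
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  // draw the current microphone level as a red bar along the bottom of the frame
  noStroke();
  fill(255, 0, 0);
  rect(0, height - 20, in.mix.level() * width * 4, 20);
}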

I am sure I can have something finished enough at the end.

Either way, it’s an exciting challenge for me.

Tactile Sounds – a virtual exhibition

28.02.2012 (3:06 pm) – Filed under: THINKING,VIEW ALL POSTS

This exhibition shows how some artists have interpreted and transformed the user’s experience of sound by using our tactile and visual senses to enhance our experience of music.

These artists don’t allow music to be listened to placidly; instead, they engage the user to interact with sound through tactile interfaces and specific body gestures that create sounds.


Jeff Hoefslinked and Stijn Schiffeleer – Beat Blocks, 2006.

This piece is a tangible sound interface which uses blocks with light blue tape to denote rhythmic phrases by the placement and number of stripes on each block. Where the user places a block on the board determines the timbre, or instrument, used to play that specific rhythmic phrase.

In the first example shown in the video, the block with two stripes is playing two eighth notes within the first beat of a four-beat (4/4) measure.

This piece eliminates the digital abstraction of beat machines and music programs, and makes the rhythmic notation and instrumental mixing board tactile.

[Embedded YouTube video]

David Bouchard – Sound Mites, 2007

This piece transforms a sound by employing a community of musical boxes which express qualities that give them a sense of personality and artificial intelligence.

These boxes can only create sound when they are used in a group because their pitch changes dynamically based upon their placement within the larger community of boxes.

This logic gives them a social quality which visualizes our own human interactions through sound, LED brightness, and spatial relationships.

[Embedded YouTube video]

Steve Mann and Chris Aimone – FUNtain Hydraulophones, 2007

This piece takes tactile sound experiments out of the gallery and even out of digital technology by placing a musical sculpture into a city setting where anyone can interact with it on a daily basis.

This sculpture allows you to “play water” in the same way one would play a flute. The best part is that the sculpture creates sound through innovative mechanical engineering, which reawakens the sound capabilities of the objects and architectures around us, outside the computer screen.

[Embedded YouTube video]

Martijn van Wagtendonk – Trickle – Museum of Contemporary Art of Georgia, 2009

Martijn creates an immersive environment for the spectator to explore; based upon the audience’s location in the installation, various kinetic, lighting, audio, and visual elements are triggered, which affect the spectator’s experience of the sound in the space.

Here, the spatial properties of sound are illuminated by the spectator’s placement within the installation and by the architecture and physical installation itself. Although a bit extraordinary, I think this installation reminds the spectator that the sounds around us are constantly changing, but sometimes we have to change our physical location in order to discover them.

[Embedded YouTube video]

Acura and Jonathan Jarvis – Interactive Oracle – 2006/2007

This is an example of how touching a table and using body gestures produces not only music but also visual representations of the users’ gestures and the sounds produced.

The users also have control over the sounds produced, and the visuals, by selecting various categories on the rim of the table.

It also allows more than one person to play music at the same time, which makes it a viable instrument in performance.

[Embedded YouTube video]

The Wonderful World of Arrays and For Loops

21.02.2012 (5:19 pm) – Filed under: DOING,VIEW ALL POSTS

Hello All,

Here are the homework assignments for tomorrow, with the exception of the oscillating Lissajous pattern one, which I will try again later; my brain is having trouble mapping sound-oriented things into visuals…

All of these examples needed appropriate use of arrays and for loops. Why?

Because it not only makes things more efficient and easier for you to code, but it also allows you to customize the same code for other purposes by changing variables instead of random parameters that map onto a value you’ve already forgotten.

Why write a thousand lines when you can write two???

Anyhow, here is a picture of my look-alike for A1. Simple but effective. I started doing the others, but they all pretty much implement the same concept, so why kick a dead horse?

My A1 Version

And here is my code for it. I am sure it could be more efficient if I were better at math:


// Assignment to create a set of drawings like those in this folder – Try and recreate the drawings A1, A2, A3

// Suggested steps in developing the code:
//  a) recreate using the most obvious means – drawing primitives and coordinates – laborious but effective
//  b) replace repetitive steps by for loops
//  c) replace parameters by variables, and place these at the top of the program

// This code will create circles in diagonals from the top left to the bottom right, and vice versa

float[] myCircleX;
float[] myCircleY;
float[] myCircleY2; // the inverse of myCircleY, to make the 2nd diagonal
float distX; // the horizontal distance between circle centres
float distY; // the vertical distance between circle centres

float circleW; // the width of the ellipse
float circleH; // the height of the ellipse
float currentX;
float currentY;
float currentY2;

void setup() {
  size(500, 500);
  myCircleX = new float[10];
  myCircleY = new float[10];
  myCircleY2 = new float[10];
  distX = width / 10;
  distY = height / 10;

  circleW = (distX / 2) - 5; // width of ellipse = radius minus an offset of 5
  circleH = (distY / 2) - 5;

  currentX = circleW + 5; // = radius of circle plus the 5 offset
  currentY = circleH + 5;
  currentY2 = height - (circleH + 5);

  // this loop fills in the x values for the circles
  for (int i = 0; i < myCircleX.length; i++) {
    myCircleX[i] = currentX;
    currentX = currentX + distX;
  }

  // fill in the y values for the first diagonal
  for (int i = 0; i < myCircleY.length; i++) {
    myCircleY[i] = currentY;
    currentY = currentY + distY;
  }

  // fill in the y values for the second diagonal (bottom to top)
  for (int i = 0; i < myCircleY2.length; i++) {
    myCircleY2[i] = currentY2;
    currentY2 = currentY2 - distY;
  }
}

void draw() {
  background(0);
  fill(255, 0, 0); // set the fill before drawing so every circle is red

  for (int i = 0; i < myCircleX.length; i++) {
    ellipse(myCircleX[i], myCircleY[i], circleW * 2, circleH * 2);
  }

  for (int i = 0; i < myCircleX.length; i++) {
    ellipse(myCircleX[i], myCircleY2[i], circleW * 2, circleH * 2);
  }
}

Here is how I made the Dark Star more efficient and connected the beginning and ending points. It’s all just replacing repetitive things with for loops.

My Vertices Fix


size(1000, 1000); // size of window
smooth(); // tells the renderer to apply anti-aliasing
background(255, 0, 0, 255); // set the window background color – the 255 is not strictly necessary!

int[] vertexX;
int[] vertexY;
vertexX = new int[6];
vertexY = new int[6];

vertexX[0] = 250;
vertexX[1] = 750;
vertexX[2] = 300;
vertexX[3] = 500;
vertexX[4] = 700;
vertexX[5] = 250;

vertexY[0] = 200;
vertexY[1] = 200;
vertexY[2] = 500;
vertexY[3] = 100;
vertexY[4] = 500;
vertexY[5] = 200;

// vertices are used to define the points within a shape,
// and then Processing tries to make sense of them in terms of any shape you specify;
// if you don't specify anything, it will just draw lines between each point

// you must always start with 'beginShape' and finish with 'endShape'
beginShape(); // must always have this

noFill();
strokeWeight(3.0);

// replace the busy work with a for loop to draw the shape
for (int i = 0; i < vertexX.length; i++) {
  vertex(vertexX[i], vertexY[i]);
}

/*
vertex(250, 200); // first vertex
vertex(750, 200); // next vertex
vertex(300, 500);
vertex(500, 100);
vertex(700, 500);
vertex(250, 200);
*/

endShape(); // you always have to do this

// very coyly introducing how you print to the screen (as opposed to the console)
PFont font = loadFont("CourierNewPSMT-12.vlw"); // first you grab a font file
textFont(font); // then you tell the program to use that font – until you specify another, for example
fill(0); // font color
//println("250, 250");

// replace the bulky text repetition with a classy for loop
for (int i = 0; i < vertexX.length; i++) {
  text(("" + vertexX[i] + "," + vertexY[i]), vertexX[i], vertexY[i]);
}

/*
text("250, 250", 250, 250); // then you print the text
text("750, 200", 750, 200);
text("300, 500", 300, 500);
text("500, 100", 500, 100);
text("700, 500", 700, 500);
*/

If Dinner Tables were DJ Tables

15.02.2012 (10:46 am) – Filed under: FINAL PROJECT,VIEW ALL POSTS

These are some random ideas that have been roaming around my head concerning possible future mini projects leading up to my final project.

First is visualizing food-related sounds in order to better understand the relationship between the visual aesthetics of food and the sound it produces.

Here is an example of what could be done using Pure Data to visualize sound, which I took from a PD tutorial:

Visualization of a sound file in Pure Data

PD algorithm for visualization

I find this interesting because it raises a few questions about:

1) The materiality of sound

2) Truth in data and algorithms, because who is to say which algorithm is better than another?

Perhaps the best algorithm would try to recreate the material source of the sound… so that a sound clip of an apple would look like an apple. Of course, there isn’t much point in this.

I think visualizing sound allows the audience another way to view sound and how it correlates to their physical environment.

Mostly another layer of abstraction, from which to view things more clearly.

 

Another idea is: what if the dinner table was a DJ table????

From what I have seen, DJ tables today are actually sensors built to receive DJs’ physical gestures and interpret them as sound.

Therefore, in focusing on the physical gestures used in DJing, what if I dissociated them from the typically expected sounds (the sound of actual vinyl being scratched or played, etc.) and replaced those with food-related sounds, so that one can cook and consume a dinner by using the same physical gestures as a DJ?

 

Along this same trajectory of thought, the physical DJ gestures can also produce visuals, food-related or otherwise. Or the gestures can trigger video clips, so one has to play the DJ tables in order to complete the story.

 

And, in focusing on DJ gestures, you can also have a dance performance that uses only the gestures one would use on a turntable… perhaps set to classical music.

 

With that said, turntables have an abundance of possibilities, especially since they can be used to explore so many theories surrounding new media, remix theory, and embodiment/disembodiment (liveness).

 

Some amazing artists who come to mind when thinking about theoretical concepts that can be explored on turntables are John Cage, Steve Reich, and DJ Spooky.

Here are some videos of contemporary works which use DJing in a creative way.

If I were composing for turntables, however, I would think John Cage: jazz meets 1980s house music. :D

[Embedded YouTube video]

That’s an example of how the gestures and physical apparatus of DJing can be used creatively to produce unexpected outcomes.

[Embedded YouTube video]

There’s an example of a visualization of a music performance which uses circles that resemble vinyl records.

[Embedded YouTube video]

Here is another visualization of a music performance, except here the correlation with vinyl records is clearer, as the entire visual light display occurs on a single disk rotating at a constant speed (i.e. like a vinyl record).

There are plenty more examples, but these are what I have found of interest so far.

Enjoy!