This talk seems very relevant to the concept proposed in my Mobile Media & Media Ecologies presentation around parametric speakers and motion tracking of individuals in a specific location. One of the issues I touched on in the presentation was the problem of the system being overwhelmed by many visitors, so that not everybody could experience it. The idea of serendipity actually turns this into a positive: not everybody will experience it, but those who do will appreciate the nature of this chance encounter. Word-of-mouth promotion of the experience will encourage repeat or prolonged visits to the Black Gate site.

The information on the Serendipity talk:

HCI Talk

Venue: Culture Lab Space2, Tuesday 4th May 3pm.

Speaker: Tuck LEONG
Information and Media Studies
Aarhus University

Title: The experience of serendipity in encounters with digital media

People have reported encountering serendipity when they use particular
technologies to interact with personal digital content. Yet no work
has been done to understand this personally engaging and potentially
rich experience as a type of user experience (UX). My talk describes the empirical
investigations undertaken to establish an understanding of this
serendipity as a UX. By focusing upon serendipitous encounters that
arise from people’s digital music listening, I identified the
elements at play, elucidated the properties of the individual
elements and their inter-relationships, and forged an understanding of how
they influence the birth of serendipity. In fact, under particular
conditions, these elements were found to avail people with
opportunities to encounter this serendipity. This empirically grounded
understanding of serendipity as a UX broadens the concept of UX beyond
its current (limited) focus on experiences of fun and enjoyment.
Besides enriching the concept of UX, it seeds inspiration and furthers
efforts of experience-centered designs.


Tuck was designed and made in Malaysia but grew up in Melbourne,
Australia. Having broad interests, his education (and work) background
includes Music, Immunology and Multimedia prior to his PhD in
Interaction Design at the University of Melbourne. Although he is
interested in technology, he is far more enamored by how people make
sense of their everyday lives when using technology. He is currently a
postdoctoral researcher at the Digital Urban Living Center of Aarhus University.

Comments (0)

Live Electronic Performance Project

Spoke with Will about the ideas for LEP, mainly field recordings and parametric speakers, and the phone controlling the sustained strings.

We both agreed that introducing hardware at this stage would be too time consuming. To complete this module the following will be investigated / created:

  1. Having the recordings, or placeholders to stand in for them
  2. Researching synthesis in SuperCollider
  3. Multichannel outputs – use the MOTU's 8 channels and send things out from this
  4. Researching OSC coming in to control parameters (this would be from CYA in openFrameworks)
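Item 4 above means receiving OSC messages from CYA/openFrameworks. As a rough illustration of the OSC wire format involved (a minimal Python sketch only; the real project would use SuperCollider's built-in OSC handling, and the `/blob` address and its arguments here are invented examples):

```python
import struct

def _read_padded_string(data, offset):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    end = data.index(b"\x00", offset)
    s = data[offset:end].decode("ascii")
    offset = end + 1
    offset += (-offset) % 4          # skip padding to the next 4-byte boundary
    return s, offset

def parse_osc_message(data):
    """Parse one OSC message: address, type-tag string, then int32/float32 args."""
    address, offset = _read_padded_string(data, 0)
    tags, offset = _read_padded_string(data, offset)
    args = []
    for tag in tags.lstrip(","):
        if tag == "i":               # big-endian 32-bit integer
            (value,) = struct.unpack_from(">i", data, offset)
        elif tag == "f":             # big-endian 32-bit float
            (value,) = struct.unpack_from(">f", data, offset)
        else:
            raise ValueError("unsupported type tag: " + tag)
        offset += 4
        args.append(value)
    return address, args
```

For example, a hypothetical message `/blob` carrying a tracked-person id and position would decode back into `("/blob", [3, 0.5])`.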

Look at creating synth definitions.

Review the synthesis sessions for creating synths that are capable of creating drones etc., then link these synths to Pbind so that they can be sequenced.
Look at changing amplitudes and other parameters etc.

Could potentially analyse the field recordings for the “Top 20 Frequencies” – a Fourier transform of the whole file – do this in Audacity with the Analyze > Plot Spectrum feature.

or WaveSurfer (an open-source project) – this has a great Fourier algorithm
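The “Top 20 Frequencies” idea can be sketched numerically. Below is a naive, stdlib-only Python DFT purely as an illustration of the analysis step (Audacity and WaveSurfer use optimised FFTs; this is far too slow for a real field recording):

```python
import cmath
import math

def top_frequencies(samples, sample_rate, n_top=20):
    """Naive DFT; return the n_top strongest bin frequencies in Hz, loudest first."""
    n = len(samples)
    magnitudes = []
    for k in range(n // 2):                            # bins up to Nyquist only
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        magnitudes.append((abs(coeff), k * sample_rate / n))
    magnitudes.sort(reverse=True)                      # strongest bins first
    return [freq for _, freq in magnitudes[:n_top]]
```

For instance, a test signal containing a 40 Hz tone plus a quieter 10 Hz tone comes back as `[40.0, 10.0]`.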

time scale

data type events

load sound file at certain time

audio rate level – ramps being generated

Routine or task

These are like sequencing systems

Routine can be used to set something up to happen in the future.

Sensor –

Arduino SMS

Parses ASCII data from the serial port etc.

Converts the ASCII to integers.

Ends up with a changing value.
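The ASCII-to-integer step above can be sketched like this (Python for illustration only; in the project this would happen in SuperCollider via the ArduinoSMS parsing, and the `"A0:512"` line format is a made-up example, not the actual protocol):

```python
def parse_sensor_line(line):
    """Turn a serial line such as 'A0:512 A1:300' into {name: int} readings.
    The name:value format is a hypothetical example for illustration."""
    readings = {}
    for token in line.strip().split():
        name, _, value = token.partition(":")
        readings[name] = int(value)        # ASCII digits -> integer
    return readings
```

Each new line read from the serial port would yield the latest, changing values.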

Look at additive synth stuff from previous sessions

Take a look at these objects:




These will handle playback.

Outputs 1 through 8 will go to the MOTU.

Ambisonics, azimuth – look at the UGens list for these objects, spatialisation etc.
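As a toy version of azimuth-based spatialisation over the eight MOTU outputs, here is an equal-power pair-wise pan across a speaker ring (a simplification sketched in Python; SuperCollider's PanAz UGen does this properly, and the 8-speaker ring layout is an assumption):

```python
import math

def ring_pan(azimuth_deg, n_speakers=8):
    """Equal-power pan a source at azimuth_deg across a ring of speakers.
    Returns one gain per channel; only the two nearest speakers are non-zero."""
    spacing = 360.0 / n_speakers
    pos = (azimuth_deg % 360.0) / spacing      # fractional speaker index
    lo = int(pos) % n_speakers
    hi = (lo + 1) % n_speakers
    frac = pos - int(pos)
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2)   # equal-power crossfade
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains
```

Halfway between two speakers both gains equal cos(45°), keeping total power constant as the source sweeps around the ring.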

Next session on output routing.


What is already there, surrounding us…

Voices in the air, waiting to be demodulated and decoded, provide a constantly updating source of sounds, words and content to be manipulated. This could be tapped as source material for an installation, fed into Max or SuperCollider and manipulated.

Artists (e.g. Bill Fontana in Objective Sound) have used field recordings and microphones linked to audible environmental sound as source material. This instead would use demodulated inaudible sound, or sound modulated or encoded into frequencies beyond the range of human hearing.

Frank’s Box – radio wave output fed into an echo box and re-recorded, then amplified, to communicate with the dead:

Coils attached to telephone wires are able to pick up the sound e.g. the t-coils in hearing aids, spy equipment etc.

Inductors? Can these perform the same task?

These seem quite old fashioned.

RF sniffers (from Martin Howse’s workshops) can detect the electromagnetic waves produced by mobile telephones calling or contacting base stations. These could be detected and demodulated as source material.

These sound improbable but:

“1. Use a Bluetooth headset to eavesdrop on someone else’s call. Bluetooth devices send signals over a long range. If you attach an antenna to a computer and have a headset inserted into your computer, you can eavesdrop on someone else’s phone call.

2. Go to your microwave oven. Punch in a time for about a minute or so and push Start. If you place your ear next to the microwave, you can hear cell phone conversations in the area.

3. Place one cell phone on speakerphone and dial the number to your other phone. Place your second phone on mute. Place the cell phone that’s on speakerphone in the room with the person who you want to eavesdrop on. This person will never know you’re listening to his or her conversations unless he or she discovers the cell phone.

4. Get a CB radio and scan the signals until you pick up a cell phone conversation. You may only find a couple, or you might pick up quite a few. The closer you are to one of the people talking, the more likely you are to pick up their cell phone signal.”


These all sound totally illegal…

Maybe just the coded signals could be used….

Electromagnetic frequencies output by mobile phones when they call or contact the base station could be used to switch on resonators on a guitar or strings and play it.

– need to use Max to create a throb in the output of the sustainer

This would reveal the invisible through sound. It also shows that human perception is limited – we cannot detect electricity like sharks can…


Fingal’s Cave

Fingal’s Cave_Staffa island_NW Scotland_1900

Potential site for field recording to create material for use in LEP project and installation for up and coming Render exhibition.

Take microphones, make multichannel field recordings. Need a windjammer – one of those furry caterpillar things.

“Felix Mendelssohn visited in 1829 and wrote Die Hebriden (in English, Hebrides Overture Opus 26, commonly known as Fingal’s Cave overture), inspired by the weird echoes in the cave.” – from Wikipedia’s Fingal’s Cave article


CYA – Computer Vision

openFrameworks CYA Computer Vision Beta
CYA – a project for sensing people in space.

CYA was written in openFrameworks. It tracks the movement and outlines of people, is capable of recognising multiple people in a video frame, and then outputs data as OSC which can be used by oF, Processing, Flash, PD, Max etc.

I plan to look at this for use in Part 4: Motion Tracking of the Mobile Media & Media Ecologies: Voice / Sounds Install Project.



The concept of serendipity (finding beautiful things through chance) fits the concept of the Voice / Sounds Install not being for everyone who comes to visit the Black Gate – just small, beautiful, stochastic encounters.


Gilgarangui First Draft

Picture 6


Voices / Sounds Proposed Installation at the Black Gate



Following on from the initial presentation from the City Council, Ben’s presentation contained four key points:

  • Engage lazy socials using technology
  • 17–25 year olds who rarely read (even though they are literate, it doesn’t interest them)
  • Focus on the Black Gate
  • An already existing project that used mobile phone technology caused people to walk around constantly looking at their phones rather than the place itself

I have been looking at the use of technology for mystical and esoteric purposes, e.g. contacting and speaking with the dead through Frank’s Box, EVPs, and William Burroughs’ tape recorder experiments etc.

I have also been looking at ultrasonic effects, e.g. the ability of ultrasound to create a state change in molecules, such as water (liquid) to steam (gas).

I am interested in creating and researching sound and visual installations, especially those that are immersive, interactive and multichannel.

I have also been reading contemporary German philosopher Peter Sloterdijk’s book “Terror From the Air”. Sloterdijk argues that, following on from modernist warfare practices since 1914, all modern art is a form of atmos-terrorism endangering the environment of living beings – this project could be interpreted to cohere with this viewpoint.

I wish to present an immersive, interactive sound installation that departs from my usual practice and that I would class as “commercial work”, i.e. produced to a client’s specifications – in this instance Newcastle City Council’s. The installation proposes to use as an example the story and narrative of the Scottish raids on Newcastle that regularly took place during the 13th century, dramatised to increase the impact but using historical facts. This story is to be recorded using multiple tracks of both speech and foley (sound effects). Once recorded, the tracks will be loaded onto SD cards, and solar-powered Arduino boards will play them back from MP3/sound-file players. The sound will be broadcast in the Black Gate through parametric speaker technology, which creates narrow, directional beams of sound heard not at the speaker but at the location of the listener. The presentation will be multichannel and multidirectional.

Motion tracking software (webcam to computer to software) will “recognise” and track visitors’ movements through the Black Gate space. Their movements will cause (through the software) servo motors to guide the beams of sound to them as they traverse the space. Where possible the technology will be powered by a responsible and sustainable renewable source of energy – solar power.

The sound will appear to follow the recipient. The narrator will seem to direct their address solely at the individual, creating an intimate and memorable experience. This level of personal address will hold and sustain interest in the presentation.

The projected voice and sounds will appear to come out of nowhere.

As a result, the recipient will retain information and stay longer to find out more about the history of the Black Gate.

The installation will utilise multiple speakers to create a layered immersive effect.

I have developed the following key words to guide the development of the project.



To describe the project components in detail I will talk through the following sections:

Historical Story

Research and Inspiration

 Technical Explanation

Examples of Speakers




Potential installation points in the Black Gate

Solar Power

Potential Issues

Taking this forward




Historical Story / Research

“Not a trace of the tower or mound of the motte and bailey castle remains now. Henry II replaced it with a rectangular stone keep, which was built between 1172 and 1177 at a cost of £1,444. A stone bailey, in the form of a triangle, replaced the previous wooden one. The master mason or architect, Maurice, also built Dover Castle. The great outer gateway to the castle, called ‘the Black Gate’, was built later, between 1247 and 1250, in the reign of Henry III. There were at that time no town walls and when attacked by the Scots, the townspeople had to crowd into the bailey for safety. It is probable that the new castle acted as a magnet for local merchants because of the safety it provided in dangerous times.”


The material for this piece would be based on a dramatisation of this description, backed up by historical facts. The source of this information is the Wikipedia description of the Castle Keep and Black Gate.


The fear of the merchants, the attacking Scots, the closing of the drawbridge etc.

To find a suitable image to represent this I searched the net and found the following:

While vaguely amusing, they were quite unsuitable.

I wish to state that I do not believe in reinforcing national stereotypes; the characterisation is for entertainment purposes only and does not represent my personal views.



I looked at several projects that used parametric speakers and tracking:


Spiraling Echoes – Bill Fontana


Bird sounds move around a building via parametric speakers mounted on motors.


A=P=P=A=R=I=T=I=O=N – Cerith Wyn Evans and Throbbing Gristle


Music by Throbbing Gristle is projected from sculptural sound mirrors using parametric speakers


Underscan – Rafael Lozano-Hemmer

A bright light casts intense shadows; movement-tracking cameras follow individuals walking through this area, and tracking projectors beam images into the individuals’ shadows.



Past projects have used parametric speakers but haven’t used movement-tracking software to direct the focused beams of sound, making this a unique project that develops the field of digital media.


Technical Explanation

How can audible sound be created from ultrasound?

Information and examples of “audio spotlight speakers” from the BBC TV programme Tomorrow’s World:

BBC audio sonic spotlight speakers info


Parametric speakers were originally developed at MIT


Examples of Speakers

Holosonics – cost approximately £2,500 each


Sennheiser – cost approximately £3,500 each


Homebrew – £50 kit from Japan




The concept is to use Processing with the Myron library to receive information from webcams, process it, and then make decisions based on that information. The outcomes will be used to control the position of the parametric speakers. This will be achieved by communicating serially with an Arduino; the Arduino software will then control servo motors, moving them so that the parametric speakers are continually aligned to the visitor. Emotional tactics / various methods of persuasion could be employed – if the recipient (audience member) is moving too quickly, a control signal could be sent to the Arduino to play back a track from the MP3 player imploring or commanding them to return / walk more slowly / stay etc.
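The mapping at the heart of this pipeline, from a tracked blob's x-position to a servo angle, can be sketched as follows (Python standing in for the Processing/Arduino logic; the 640-pixel frame width and 0–180° servo range are assumptions, not measurements from this project):

```python
def x_to_servo_angle(x, frame_width=640, servo_min=0.0, servo_max=180.0):
    """Linearly map a tracked x pixel position to a servo angle in degrees,
    clamping to the frame so the servo is never asked to over-rotate."""
    x = max(0, min(frame_width - 1, x))
    span = servo_max - servo_min
    return servo_min + span * x / (frame_width - 1)
```

In Processing the resulting angle would then be written over the serial port for the Arduino's servo code to act on.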



Information on Myron:

Computer Vision for Artists

Myron is the cross-platform, cross-language, open source, video capture and computer vision plugin. One core C++ object gets cross-compiled as a handful of high level language “wrapper” libraries. The wrapper for Java and Processing is called JMyron. The wrapper for Macromedia Director is called WebCamXtra. The aim of the project is to keep computer vision free and easy for the new media education and arts community.



Other options to be considered include:



Eyesweb open platform – gesture and movement analysis (windows only)




Motor Control with Arduino

Servo motors need to be used because other rotating motors, e.g. DC motors, would cause the connecting wires to become entangled. Servo motors also give positional information, which would be advantageous to this project.




Here are two suitable examples of hardware for playing back sound that are small enough to be portable and easily installed:


Arduino shields with sound players onboard?

Buy as a kit for $22:



Better sound but a higher price at $99.99:


Other options that could be considered include:

Using Max/MSP to create multiple outputs for sampled sound files. This would create a multichannel piece with sound effects and voice-over narration, but it would need a MOTU sound card to provide all the outputs.


CD players controlled by Arduino.


Tape loops


ZigBee radios could be used to allow multiple, widely spaced Arduinos to communicate and work coherently.



Potential installation points in the Black Gate

Under the wooden floor – sound coming from beneath

Up on the walls, beamed down

Up in the arch, so when you walk through it the sound accosts you


Solar Power

Creating a self-sustaining Arduino using solar panels and a rechargeable battery.
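A back-of-envelope check for the self-sustaining requirement (the current and capacity figures in the example are invented placeholders, not measurements of any real panel or board):

```python
def is_self_sustaining(panel_ma_avg, load_ma_avg, efficiency=0.7):
    """The rig sustains itself if the panel's average charge current,
    derated for charging losses, at least matches the average load."""
    return panel_ma_avg * efficiency >= load_ma_avg

def runtime_hours(battery_mah, load_ma):
    """Rough dark-hours runtime from battery capacity alone,
    ignoring conversion losses and battery ageing."""
    return battery_mah / load_ma
```

For example, a hypothetical 2000 mAh battery feeding a 50 mA load would last roughly 40 hours of darkness.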


Potential Issues


Weather proofing


Updating the presentation – with SD cards the content could easily be updated.

Lifespan – for maximum impact it could be used as a short-stay piece and then toured with different presentations to other sites etc. – ideas for a company here!

Potential vandalism – things thrown at it? Would it be robust enough? – this would need to be tested.

Would the speakers deteriorate outdoors? Needs to be tested. If the ultrasonic actuators are made from nonferrous materials they should not corrode.

What if there were too many visitors for each to get a personal experience? – only the person receiving the sound is aware of it, so the others wouldn’t know the difference.

How to make the motion track lock onto one person and not get confused and “flit” between people… this needs to be tested.


Taking this forward


Build prototype parametric speaker

Motion detection with software

Zigbee communication



Caves installation – fine art installation for the final project… Fingal’s – Staffa, NW Scotland, and Yordas – Kingsdale, Yorkshire.


Live Electronic Performance/Mobile Media Ecologies

Inspired by Yordas Cave, Kingsdale, Yorkshire.
Would like to create an audio visual piece based upon it.
To do:
Make field recordings and create a time-lapse film in the cave, then present these using parametric speakers rigged to stepper motors. Processing will be used to track the audience so the sound follows / doesn’t follow their movements.
Finish the Arduino-controlled motion control rig.
Visit and make field recordings. (Sunday 25th – Monday 26th April 2010.)
Research parametric speakers – can these be DIY built?

Information on Yordas Cave
Yordas Cave (from the Norse jord ass, meaning earth stream) is located in upper Kingsdale, near Ingleton, on the west side of the Thornton-in-Lonsdale to Dent road, 1.5 km north of Braida Garth Farm. The entrance is situated in a small valley within a small plantation about 200 metres from the road. The cave was run as a show cave in Victorian times. The entrance leads directly into the main chamber, about 50m x 15m x 15m high. An active stream runs through the chamber, originating from a stream passage where a nice waterfall can be seen, and disappearing into a sumped small passage at the other end of the chamber.

Turner created sketches of this cave in 1816
J M W Turner (1816)


Inspiration from Brian Eno and Peter Schmidt – Oblique Strategies:
“Once the search is in progress something will be found”
“Not building a wall, building a brick”

Parametric Speakers
I would like to develop and build parametric speakers:
The ultrasonic carrier is projected into the listening area, where it’s demodulated by a non-linear environment, which can be the listener’s ear, the air, or a special material such as a fibrous fabric.
“Some audio engineers conclude that the non-linear demodulation doesn’t happen in the air but, in fact, occurs due to high sound pressure level non-linearities in the listener’s ear.”

Summary of how directional speakers work

The piezoelectric transducers (grey circles) in the directional speaker produce two ultrasonic waves (red and blue), both of which are at frequencies far too high to hear. The transducers pump out the waves in a focused column (like the light in a flashlight beam). The waves are actually modulated (like radio waves) and travel as one wave, but it’s simplest to imagine them as two quite separate waves.
When the two waves hit something (or someone), they slow down and demodulate, producing a new wave (green) whose frequency is much lower—equal to the difference in frequencies between the two original waves. This is a wave you can hear.
When there’s no-one standing in the beam, the waves keep on travelling without producing an audible sound wave—so if there’s no-one standing in front of the speaker, there’s nothing you can hear.
People standing outside the beam can’t hear anything because (unlike with a conventional loudspeaker) the sound waves are not diverging from the source of sound to reach their ears.
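The difference-frequency idea in the summary above reduces to simple arithmetic (sketched in Python; the 40 kHz carrier in the example is a typical figure for such ultrasonic transducers, not a value from this project):

```python
def audible_difference_tone(carrier_hz, modulated_hz):
    """Frequency heard where the two ultrasonic waves demodulate:
    the difference between the two original wave frequencies."""
    return abs(modulated_hz - carrier_hz)
```

So a 40 kHz carrier against a 40.44 kHz wave would demodulate to a 440 Hz tone, even though neither original wave is audible on its own.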

Parametric Speaker links

Processing or MAX – object tracking / video tracking
Need to compare both programs to see which is better – EyesWeb could also be useful.

Homebrew on eBay:
