Live Electronic Performance project

Initial ideas…

Perfect 6 sounds?

Field recordings
– hummingbird's wing [very close]
– honeybee
– frogs
– water droplet
– crickets
– whales

Contrast with inversion (harsher points of spectrum)
– trains
– hustle bustle
– steel works
– chainsaw
– demolition
– catherine wheel

Algorithmic composition?
Iterative production

Concept that formed..

Divide pixel array up into rectangles (16?)

Top half (8) nature related
Bottom half (8) industrial

Each rectangle triggers a field recording loop

Proposal…

I wish to use field recordings from nature and from opposing urban/industrial environments, contrasting these with an equal number of synthetic sounds.
I plan to create a field of these by splitting the frame of vision/pixel array from a camera into 16 equal sections, so that different points trigger and mix different recordings.  Moving hands as if swimming through a sea of exploration reveals different findings and sources.  This also applies different effects.
The closer the hands are to the camera, the more sections they will cover and the more layers will mix, so if the camera is completely covered then all recordings will be played at once, without effects.
The pixel array will be divided into 16 sections, which represent the samples/recordings.  The screen is further divided in half both vertically and horizontally: vertically into Left (L) and Right (R), and horizontally into Top (T) and Bottom (B).  The Left half is associated with the urban/industrial recordings/samples, and the Right with the nature sounds.  The Top half further corresponds to the synthetic sounds, and the Bottom to the live field recordings.  This is shown in the diagram below:

Further relations to the sections are the effects:
L + R  = Spectral inversion
T + B = Pitch shift (0/ > + > -)
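The grid logic described in the proposal could be sketched as follows. This is a minimal, hypothetical illustration assuming normalised (x, y) coordinates from the camera frame; the function names and the cell labels are mine, not taken from any actual patch:

```python
# Hypothetical sketch: map a normalised (x, y) camera coordinate to one of
# the 16 grid cells, then look up which sample bank that cell triggers.
# Labels follow the proposal: left = urban/industrial, right = nature,
# top = synthetic, bottom = live field recordings.

def cell_for_point(x, y, cols=4, rows=4):
    """Return (col, row) for a point with 0 <= x, y < 1."""
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return col, row

def mapping_for_cell(col, row, cols=4, rows=4):
    """Return (source, layer) for a grid cell per the L/R and T/B split."""
    source = "industrial" if col < cols // 2 else "nature"
    layer = "synthetic" if row < rows // 2 else "field"
    return source, layer
```

Covering several cells at once (hands close to the camera) would simply mean collecting the mappings for every covered cell and mixing their loops.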

Live Electronic Performance samples

Field recordings and synthetic sound samples.

Recorded, produced and sourced [from FreeSoundProject and SiBegg/Noodle Recordings] for the LEP module:

WHALE SEX

HUMMINGBIRD WING (EDIT)

FROG CACOPHONY (Recorded on my walk as assigned in LEP; then, using the lengthy recording, I mixed and overlaid the tracks to create a more active-sounding environment, and tweaked and edited it by changing the pitch of certain frogs.)

BUBBLE POP

BUBBLE POP ECHO SLOW

FIZZ DRY (Also taken from another walk as assigned in LEP, then edited)

FIZZ SLOW

FIZZ WAH

TIME STAMP EDIT

202_Q_Dying

EUarp

EUarp Rezdelay

EUdoor

EUloop

Fbak

Fuel Bass

Logic crash

bloops1

break2

ElectroFight

hats1

cuts1 FX

cuts3

bass1

weird Pad2

Strings rev

Synths rev

stories from the wishing well

If you could choose a story for someone to tell or read to you, what would it be?  If you have any ideas, answer with a comment.


little stories

Eve and drone

looking at stories – being read to – the comfort of it – the sound of a voice when it is telling a story or reading – thinking about groups of people telling stories – thinking about a collection of stories being told in a cacophony – individual sounds from headphones

meditation on a future horizon


Brandon LaBelle

Revolutionary statements make a claim onto history to charge a given time and place with radical energy: to galvanize the masses, to overturn social behavior, to disrupt and ultimately transform reality. Such statements act as momentary bursts of outrage and political conscientiousness, giving definition to the here and now as a time in need of rupture.

Exploring revolutionary desire as a temporal moment, the project examines various historical texts and statements calling for social transformation. From Situationist graffiti to Black Caribbean rights, the statements are translated into random melodies using a music box mechanism: writing the statements out across musical staff paper, each melody turns into a lyrical homage to revolutions gone by, as well as suggests links between art production, as a project of reworking time, and the revolutionary moment as a recurring intensity throughout history.

The project is presented as a series of video works capturing the gesture of producing the melodies. These are shown on a set of laptops within an installation setting acting as a working space containing related documents, materials, books, CDs and artefacts.

Exhibited at Mario Mazzoli galerie, Berlin
January 20 – March 22, 2011

cornell and archiving/collecting and object poems

been looking at Joseph Cornell’s boxes and how the objects make connections with each other. It has made me think about archiving work, representing it as an archive or collection of disparate objects which are connected by seeing or hearing.  thinking of digital ways to re-present, make connections between experiences.


stereoscopic .gif attempt

(click image to view .gif)

this image is created by taking two still photographs of the same subject, from two slightly differing viewpoints… technically it works best if the distance is around the same as the distance between your eyes… but we’ll see about that.

i have been wanting to make a stereoscopic animated .gif for a while now, i like the lo-fi 3-dness, no glasses, no x/y/z, rendering etc… just 2 photos slammed together on a loop. the source photos were taken on an iphone 4, with no precision regarding angles and composition.

i am pretty happy with the result, and i think the KIRAKIRA subject matter is appropriate for the jerky/glitchy imagery.

MIP: System Development

For this project, I wanted to build on the idea I’d had for an audio filter for mobile use. However, given the project’s brief about engaging with spaces throughout the city, I thought it would be better to develop it as a technology installation rather than as a mobile app. Since I am familiar with MaxMSP rather than Pd, I decided to design it as a Max-based app which could later be developed and ported to Pd, enabling me to embed it in a mobile device.

The current system spec runs the audio signal from a microphone through a number of variable filters. Using the Max biquad~ object and the associated help article, I was able to set up a pair of filters which could be set to high-pass, low-pass or band-pass. There are also additional gain controls.
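To illustrate what the filter stage computes, here is a low-pass biquad sketched in plain Python, using the widely used Audio EQ Cookbook coefficient formulas. This is only an illustration of the maths, not the actual Max patch:

```python
import math

def lowpass_biquad(fc, fs, q=0.707):
    """Audio EQ Cookbook low-pass coefficients (b0, b1, b2, a1, a2),
    normalised so that a0 == 1."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    a0 = 1 + alpha
    b0 = (1 - cos_w0) / 2 / a0
    b1 = (1 - cos_w0) / a0
    b2 = (1 - cos_w0) / 2 / a0
    a1 = -2 * cos_w0 / a0
    a2 = (1 - alpha) / a0
    return b0, b1, b2, a1, a2

def biquad(samples, coeffs):
    """Filter a list of samples in Direct Form I."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

Swapping in the high-pass or band-pass coefficient formulas from the same cookbook gives the other two filter modes.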

So far this was quite good, but a bit simple. I wanted to develop the system further, making it interactive and appealing to users willing to engage with the technology. I remembered the user feedback I’d had from a previous project, The Echo Chamber. The piece used realtime delay feedback looping of a signal from a microphone in a multi-speaker surround sound setup. Many users found the system really engaging, making full use of the distributed props (crisp packets, pens, cans) to make their own rhythms. I thought that this interface could be integrated into the design, allowing the user to manipulate the audio further and use it as a sort of drum machine.
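The delay-feedback looping behind The Echo Chamber can be sketched as a delay line whose output is attenuated and fed back into its input. The code below is an illustrative mono version in Python, not the actual patch; the parameter values are arbitrary:

```python
# Illustrative sketch of realtime delay feedback looping: each input
# sample is written into a circular delay buffer together with an
# attenuated copy of the delayed signal, so sounds made at the
# microphone repeat and slowly decay.

def feedback_delay(samples, delay, feedback=0.5, mix=0.5):
    buf = [0.0] * delay          # circular delay buffer, 'delay' samples long
    idx = 0
    out = []
    for x in samples:
        delayed = buf[idx]
        buf[idx] = x + feedback * delayed   # input plus feedback re-enters the line
        idx = (idx + 1) % delay
        out.append((1 - mix) * x + mix * delayed)
    return out
```

Feeding it an impulse shows the characteristic train of echoes, each half the amplitude of the last at feedback=0.5.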

In terms of the interface, I intend to use some form of midi controller utilising pads and dials rather than a traditional keyboard interface. This is due to some concerns that were raised regarding accessibility. When I was talking to potential users, many raised the point that they were not musicians, or at least not classically trained musicians. As a result, they felt that they probably wouldn’t interact with a system that made use of keys. Instead, I looked at the Akai LPD8: a laptop controller with 8 pads and rotary dials. There were a number of features that drew me to this particular interface:

1. It is small and simple, taking up little space whilst limiting the complexity of the interaction. As the piece is designed for a public installation, I felt that a degree of discretion would be good, so as not to attract undesired attention from vandals, whilst the simplicity of the interface would promote interaction by not being intimidating.

2. The interface has a cosmetically appealing feature: when pressed, the pads light up. I thought this would also promote interaction, as there is both an audio and visual response to user interaction.

3. The low cost of the unit. LPD8s are widely available off the shelf at about £50, making them cheap and easy to replace if damaged.

So far I have integrated the controller into the sketch, using the pads to control the delay feedback effects and the dials to control the filters and the gain. There are still three rotary dials that are unassigned, which could be used to control other effects, such as reverb, distortion and a flanger. Such effects are common on modern electronic music equipment, such as the Numark Axis 9 CD player, one of my first DJing equipment purchases many years ago.
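The pad/dial assignment described above could be sketched like this. The note and CC numbers here are purely illustrative (not the LPD8 factory map, and not the actual patch), as are the parameter names and ranges:

```python
# Hypothetical MIDI mapping sketch: pad note-ons toggle delay lines,
# control-change dials scale the filter cutoff, resonance and gain.

PAD_NOTES = {36: "delay_1", 37: "delay_2", 38: "delay_3", 39: "delay_4"}
DIAL_CCS = {1: "cutoff", 2: "resonance", 3: "gain"}

state = {"delay_1": False, "delay_2": False, "delay_3": False,
         "delay_4": False, "cutoff": 1000.0, "resonance": 0.707, "gain": 1.0}

def handle_midi(status, data1, data2):
    """Update parameter state from a raw 3-byte MIDI message."""
    if status == 0x90 and data1 in PAD_NOTES:      # note-on: toggle a delay line
        state[PAD_NOTES[data1]] = not state[PAD_NOTES[data1]]
    elif status == 0xB0 and data1 in DIAL_CCS:     # control change: scale 0-127
        name = DIAL_CCS[data1]
        if name == "cutoff":
            # map 0-127 onto the audible range, 20 Hz to 20 kHz
            state[name] = 20.0 + (data2 / 127.0) * (20000.0 - 20.0)
        else:
            state[name] = data2 / 127.0
```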

In terms of installation locations, I wanted to pick a number of areas in close proximity to each other, with varied soundscapes and a degree of local cultural significance. Having lived near the Ouseburn valley for many years, I was quite aware of its varied architecture and topography. My interest in doing a project there was further increased when I explored the more wooded part of the valley, which is currently undergoing development. These wilder areas seem to stand in stark contrast with the geometric shapes of the bridges, so the local sounds are a varied mixture of wildlife activity and the traffic (road, rail and pedestrian) overhead. A number of footpaths and byways cut through, under and over the various bridges. I think this would be good for user interaction, as the units can be ‘discovered’ by potential users as they walk through the area. This in turn encourages them to engage with the locations, as they stop and look around.

I’m quite eager for users to use their own headphones for this particular interface. This is for a number of reasons. Firstly, it reduces the costs and maintenance of the unit, as headphones installed in outdoor public locations would likely be damaged by the elements, potentially mouldy and infested with insects. Secondly, it serves as an additional means of engagement. Lone commuters often carry headphones with them, and a pair of plastic buds is a common sight in people’s ears. Many people, it would appear, would rather listen to their music than to the area around them. The system encourages them to disengage from their personal media, and actively engage with the sound of the area around them. By offering a degree of control of ambient audio, the user is encouraged to use the system to reinterpret the sounds of the area around them. Given the widespread use of headphones, we can assume that there are plenty of potential users. Whether someone would be willing to plug their headphones into an open, public audio port is another matter.

MIP: Reflections on Project Development

Following the submission of the project document and the installation of the demo at OnSite, I have had a number of thoughts about the current technology setup. There are certain elements, particularly during the installation, that have proved bothersome, and I would likely redesign the unit to account for this.

One potential alternative to the current setup would be to use contact mics rather than the stereo condenser mic. If contact microphones were installed on the structures, then they would pick up the vibrations caused by traffic, be it road, rail or pedestrian. I quite like this idea; however, there are a couple of reasons why I thought it would be better to record all ambient sound rather than using contact mics:

1. By using the current setup, the unit is able to pick up audio from a much wider variety of sources; if I were to use contact mics, the unit would only pick up vibrations within the structures. A broad variety of sounds comes from the Ouseburn area, as a result of the transport network that runs overhead, the local wildlife and farm, and the flow of the Ouseburn river below. By having multiple locations situated around the valley, the technology draws attention to the varied soundscape of the area.

2. Multiple artists have installed contact microphones in structures in order to create audio reinterpretations of those structures. Although I find this idea appealing and I am very interested in the subject, I feel that such an installation would be almost too derivative of artists such as Mark Bain and Jodi Rose.

I am still very interested in using contact mics in a piece, possibly as the audio source in a similar setup. Given the high flow of traffic over the Tyne Bridge, as well as the tower structures at each end, I feel that this location would be a really interesting site for such an installation. Will Schrimshaw made really interesting use of this location at the AV Festival in 2010, installing multiple loudspeakers in the tower structures. I think a similar setup would be really interesting; however, rather than using a looped signal to determine the audio, I would use a series of contact microphones to trigger sub-bass frequencies. These frequencies would then be played via the loudspeakers, resonating within the structure and creating a feedback loop, much like the use of audio looping in the current setup.
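The trigger stage of that idea could be sketched as a simple threshold detector on the contact-mic signal, with each crossing firing a short sub-bass burst. This is a speculative sketch of my own; the threshold and frequency values are illustrative:

```python
import math

# Speculative sketch: detect onsets in a contact-mic signal and generate
# a sub-bass sine burst for each one, to be resonated through the structure.

def detect_triggers(signal, threshold=0.5):
    """Return sample indices where |signal| first rises above the threshold."""
    hits = []
    above = False
    for i, s in enumerate(signal):
        if abs(s) >= threshold and not above:
            hits.append(i)
            above = True
        elif abs(s) < threshold:
            above = False
    return hits

def sine_burst(freq=40.0, dur=0.5, fs=44100):
    """A sub-bass sine tone of the given duration, as a list of samples."""
    n = int(dur * fs)
    return [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]
```

Because the bursts themselves would shake the structure and re-excite the contact mics, the threshold (and perhaps a refractory period) would be what shapes the resulting feedback loop.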

Another issue that I have encountered when working on this project is microphone-to-line level signal conversion. In the project book, I refer to the need for the computer to have a microphone-level input, or a soundcard with such capabilities. For the exhibition installation I used a Mac Mini, which unfortunately does not have such an input, so I had to run the microphone signal through a mixer to bring it up to line level. For a permanent installation of the project, I would likely not use a Mac Mini, given the cost of the unit and the lack of microphone-level input; instead, I would use a cheaper unit fitted with an external USB or internal sound card, or a reconditioned second-hand laptop as the basis for the unit. However, given the client’s requirements as well as the timescale of the current project, such a design and (re)construction would be unfeasible.

Another issue is that of signal loss and distortion during the installation. For the original setup, I planned on using a 10 metre audio extension cable so that I could install the microphone high in the side of the railway viaduct. However, due to the buzz that such a long cable created, as well as the proximity of some particularly large-looking cables, I decided it would be best to install the microphone much lower and closer to the unit, to minimise the need for extension cables and reduce signal loss. Were I to install the piece again, I would probably use wireless mics, allowing the unit to be installed at a significant distance from the microphone without long, potentially vulnerable cables that could cause distortion or loss of the signal.

I would also like to make some further modifications to the programming of the unit. Firstly, I would like to make the whole unit more tamper-proof. This is a result of watching people use the interface without understanding it, sometimes resulting in them changing the interface program. One alternative would be to make each program on the MIDI interface identical, so that such problems would not occur. Alternatively, the program buttons could be covered with a piece of plastic or deactivated so that this does not happen.

I am also considering including a number of additional features with the unit, possibly integrating drum loops that could be controlled via the top, unassigned dials. This would emphasise the rhythmic nature of the sound, whilst giving the unit additional potential as a musical instrument. These dials could alternatively be used to control additional effects, such as reverb, distortion and a flanger.

Finally, I think that the unit could include clearer instructions. I have tried to include instructions for use, to some beneficial effect: people seemed intrigued by the unit at the launch and many interacted with it. Part of the difficulty of writing instructions is deciding the level of detail to include, wanting to provide sufficient information without patronising or confusing the audience. I decided to include simple instructions in regular font, with more detailed instructions in italics. The idea behind this was to provide simple, accessible information for general users, whilst providing more detailed information for those who desire it. I also included a hint on how to use the pads to create more interesting effects.

MIP: Designing an App

One of the things that has come up when discussing sound walks and headphones with a few friends and colleagues is the additional sounds people claim they are able to pick up on when they listen to the sounds of the city via a microphone and headphones, as opposed to without such technology. I thought this was quite interesting, given my interest in the brain and its processing of stimuli. It reminded me of some reading I’d done on the ‘cocktail party’ effect, i.e. the brain’s ability to interpret various audio signals, filtering out those deemed extraneous to the situation. This can be observed in people’s ability to focus on one conversation over many others in a crowded, noisy room, hence the name. Somehow, the sense of disassociation that accompanies listening to the world via a microphone and headphones seems to negate this particular ability.

When looking at urban and suburban audio, the effect is similar. Without this cognitive ability, the urban soundscape would likely be unbearable due to the chaotic noise. In A New Sense of City Through Hearing and Sound, Eva Kekou and Matteo Marangoni discuss the chaos of urban sound:

There is a paradox between the fact that cities are highly structured spaces in which almost everything one senses has been processed through a human brain to be orderly, and the fact that interactions therein are far too complex to be controlled.

I wanted to design a technology that somehow worked as an artificial, controllable form of this inherent audio filtering ability. Using the mobile application template, I thought about designing a way of running the phone microphone input through a number of variable audio filters. The audio would then be fed out of the phone’s 3.5mm audio output to headphones. This technology would not be particularly difficult to design and implement, given Pure Data’s compatibility with a number of mobile devices.
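The simplest version of such a variable filter can be sketched in a few lines. The real app would be a Pd patch processing live audio; the Python below is only an illustration of the underlying idea, a one-pole filter whose cutoff could be mapped to an on-screen control, with the high-pass mode derived by subtracting the low-passed signal from the input:

```python
import math

# Minimal sketch of a variable one-pole filter (illustrative only, not
# the Pd patch): 'lowpass' keeps energy below fc, 'highpass' keeps the rest.

def one_pole(samples, fc, fs=44100, mode="lowpass"):
    a = math.exp(-2 * math.pi * fc / fs)   # pole coefficient from cutoff frequency
    y = 0.0
    out = []
    for x in samples:
        y = (1 - a) * x + a * y            # low-passed running state
        out.append(y if mode == "lowpass" else x - y)
    return out
```

Chaining two such stages (a high-pass followed by a low-pass) would give the user a movable band to isolate, a crude but controllable version of the brain’s own selective filtering.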

This has a couple of applications. Firstly, in a more socially beneficial role, this technology could be adapted for use in dealing with various audio/noise issues. In a discussion with a colleague who had suffered a stroke, I was told that he has since found it incredibly difficult to focus on sound: conversation can be difficult when there are several sources of sound in the area, as he finds it impossible to concentrate. This application could be quite helpful, enabling him to filter out those undesired noises. The second use, a little more creative, is as a filter for use in an instrument, a sort of ambient subtractive synthesiser. I feel that this could make for an interesting piece of technology, used as a way of providing a fresh perspective on the city soundscape.