Whilst developing my audio visualizers I became increasingly interested in integrating additional controls. This arose partly through the programming and experimentation, and partly through my experience testing the visualizers in a live environment. Whilst experimenting with various drawn shapes, I found that I could produce a variety of visual representations merely by changing the drawn shape from an ellipse to a rectangle, or by using more complex geometric patterns. By integrating a number of booleans into the code, with key-coded ‘toggles’ to activate and deactivate them, I was able to switch various visuals on and off, allowing me to use multiple variations of the visuals within the same basic structure. This provided an additional degree of interactivity whilst also broadening the capabilities of the software.
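As a rough illustration of the principle (not the code from the finished visualizers), a minimal Processing sketch might toggle two drawn shapes from the keyboard like this; the key mappings and shapes here are hypothetical:

boolean showEllipse = true;   // toggle for the ellipse visual
boolean showRect = false;     // toggle for the rectangle visual

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  if (showEllipse) {
    ellipse(width/2, height/2, 100, 100);
  }
  if (showRect) {
    rect(width/2 - 50, height/2 - 50, 100, 100);
  }
}

void keyPressed() {
  // key-coded toggles: '1' and '2' switch the two visuals on and off
  if (key == '1') showEllipse = !showEllipse;
  if (key == '2') showRect = !showRect;
}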

In terms of the coding, the use of boolean-controlled ‘if’ statements was not particularly problematic. What became increasingly difficult was finding a way of mapping the functions to a control interface. As I was using a laptop for development and performance, and given my experience with ASCII coding in MaxMSP, I decided to use the integrated keyboard and touchpad, largely for convenience. Although this initially worked well, as the number of functions increased, so too did the complexity of the control system. This was not such a problem for me, but as I intended to display this software for public use and interaction, I decided I needed a new, more convenient system for user interaction.

I looked at a variety of available control systems, searching for small, convenient, integrated devices. After a bit of browsing, I found a USB calculator with an integrated touchpad. As I had initially mapped many of the functions to the numerical keys on my keyboard and to the mouse location, a port would not be too difficult. However, closer inspection of the interface revealed a small flaw: it was a case of using either the mouse or the keypad, with a button used to switch between the two modes. Although not a massive problem, I did find in tests that users often needed reminding that they could use both functions.

Given the multimodal emphasis in my research, as well as the suggestions on multimodal interface design, I felt that I needed to provide a greater degree of choice in the user interface. I looked at other products that could supplement or replace the keypad/touchpad. One item that caught my attention was a trackball with a scroll ring. Although I have not used trackballs or scroll rings much in the past, the haptic/enactive nature of the piece suggested that one could be an appropriate interface whilst still being similar to the widely used mouse. Although the trackball did not come with keys, I was able to map the scroll ring to scroll through the various visuals by building on the Processing mouse wheel code by Rick Companje:
http://wiki.processing.org/w/Wheel_mouse

By using the data from the wheel to increase and decrease a variable integer, and assigning a number of conditionals that use that integer to toggle visuals on and off, I was able to create an interface for the sketch that could be used to scroll through the various visual options. Here is the principle in simplified code; I decided to have a degree of overlap in the 12 functions as well, but the principle is the same:

// 4 functions; scrolling up/left returns -1, down/right returns +1
int x = 0;                         // current position in the list of visuals
boolean f1, f2, f3, f4;            // toggles for the four functions

void mouseWheel(MouseEvent event) {
  // newer Processing versions provide mouseWheel(); the wheel-listener code
  // linked above does the same job in older versions
  x += event.getCount();           // -1 or +1 per scroll step

  // wrap back to the beginning/end
  if (x > 4) x = 0;
  if (x < 0) x = 4;

  // toggle the functions according to the current position
  f1 = (x == 1);
  f2 = (x == 2);
  f3 = (x == 3);
  f4 = (x == 4);
}

Using these two interfaces, I have enabled users to choose how they will interact with the system, particularly whether they interact in a symbolic or enactive manner. As the two systems have similar responses (the touchpad offering more options for signal visualisation), the visualisation system can be operated almost fully (save for the initial signal input) from either interface. With some modification to the keypad itself (covering the surface with a printed acetate sheet representing the different functions), my intention is to make the system even more intuitive. With additional control diagrams built into the installation, I believe that the control interface I have designed should be suitably interactive.

After some initial testing with the BioWave headset, it became apparent that I should use a Bluetooth serial connection rather than connecting the Arduino to the computer via the USB port. The BioWave instructions suggest that the sensor should be used exclusively with the manufacturer’s adapter; however, I thought the price of this adapter was too high to be viable for this project. I also thought that, since the software is based in Max, I would likely be able to develop a similar system in Max and Arduino. As the system is Bluetooth based (to avoid distortion of the signal from the mains power supply as well as the potential risk of user electrocution), I decided to build a similar, battery-powered Bluetooth system based on the Arduino BT. I had based much of my initial headset development on a USB serial cable, building on the Arduino2Max code and patch:
http://www.arduino.cc/playground/Interfacing/MaxMSP#arduino2max
To develop the Bluetooth system, it was merely a case of uploading the sketch to a Bluetooth Arduino and pairing the device with my computer. Data from the BioWave could then be manipulated in either MaxMSP or Processing.
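As a minimal sketch of the Processing side of this, assuming the Arduino simply prints one analogue reading per line (a simplification of the actual Arduino2Max format), reading the sensor data over the serial port might look something like this:

import processing.serial.*;

Serial port;
float sensorValue = 0;

void setup() {
  size(400, 200);
  // the port name is machine-specific; the paired Bluetooth serial port shows up in Serial.list()
  port = new Serial(this, Serial.list()[0], 115200);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) sensorValue = float(trim(line));
}

void draw() {
  background(0);
  // draw the raw reading (0-1023 from the Arduino ADC) as a simple bar
  float h = map(sensorValue, 0, 1023, 0, height);
  rect(0, height - h, width, h);
}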

However, working further on the project, I found it difficult to analyse the raw data from the sensor. Much of my work with data visualisation has been based on FFT analysis of an audio signal, so, due to time constraints and for relative ease and convenience, I used the data from the sensor to modulate an audio signal in MaxMSP. As I was interested in working with the raw data, I decided that the signal should be used to modulate the frequency of a bank of oscillators, similar to the wavetable synth used in the Binaural/Monaural Beat Generator. The audio from this biosignal-controlled synth could then be routed internally using Soundflower and visualised in Processing like any other live audio signal. I decided that the user should not be able to hear the signal, largely because the irritating noise it makes could be off-putting; I felt that this would be out of place given the immersive and dissociating nature of the piece.
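The actual oscillator bank lives in the Max patch, but the underlying idea, a sensor value mapped onto oscillator frequency, can be sketched in Processing using the Minim library (the frequency range and base values below are placeholders, not those used in the patch):

import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil osc;
float sensorValue = 512;   // stand-in for the serial reading shown earlier (0-1023)

void setup() {
  minim = new Minim(this);
  out = minim.getLineOut();
  osc = new Oscil(220, 0.5, Waves.SINE);   // base frequency and amplitude are placeholders
  osc.patch(out);
}

void draw() {
  // map the biosignal reading onto an audible frequency range
  float freq = map(sensorValue, 0, 1023, 110, 880);
  osc.setFrequency(freq);
}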

In constructing the headset, the sensor was mounted into the frame of a pair of modified toy night-vision goggles. The electronic components were harvested from the toy for future use, before sections of the frame were removed to enlarge the headset for adult use. The frame was kept in place (save for some trimming), as was the elastic fitting strap. After giving the goggle frames a coat of matt spray paint, the sensor was glued in place with the electrodes fitting against the brow of the wearer. The board, battery pack and excess wires were placed in a zip-up wallet with a reasonably rigid frame (bought from Poundland), which was then glued to the side of the headset with a hot glue gun. This allowed easy access to the board to turn the battery pack on and off and to replace dead batteries. The resultant headpiece makes for quite a convenient biofeedback interface for use within the piece. The Bluetooth connection allows the headset to be used wirelessly, whilst the whole thing is sufficiently small to be used alongside headphones. The whole unit (consisting of the frame, sensor, board and battery pack) is light and easy to use.

One of the first things I really got into in Processing was writing audio visualizers. This arose from an initial experiment in audio waveform rendering, which led to further experimentation with realtime audio visualisation using the techniques discussed by Antony Mattox. At the same time I built on the techniques explored by Dan Shiffman in Learning Processing. Shiffman’s book explores many areas of text-based programming for visual design; however, it does not deal with audio visualisation using FFTs, the topic I wished to focus on given my interest in multimodal feedback systems. During my research, I have attempted to apply the Mattox code (which, as he suggests, can be used as the foundation of many systems for audio visualisation) to the Shiffman examples, working through the book and using selected exercises to build a variety of audio-reactive visual programs.

There is a clear artistic and commercial interest in the use of audio-responsive visual software, evident in my ability to get a number of gigs as a VJ (despite how much I loathe the term). These started out at house parties and exhibition after-parties, followed by club nights and more commercial jobs. Using these gigs, I attempted to find out what audiences and performers found appealing in such software, as well as what they expected from it. I have attempted to apply the findings from my research to my final chosen method of data visualisation. These expectations and appealing elements are discussed below.

The audio data should be the dominant variable in the sketch. Even if other controllable variables are used, audio data should be applied to them in some form. The audio spectrum data, which can be retrieved via fft.spectrum[], can easily be applied to any variable: the size of geometric shapes, the increment by which an object moves, or simply the colour data of the drawn image. The spectrum can also be used to affect image tinting, contrast, brightness and gamma levels through the image.adjust library. This idea has been central to my design and development process: applying the spectrum data to as many variables as possible within the sketch makes the final program as audio responsive as possible. Although much of my visual software includes other control interfaces (mouse, keyboard), these functions are included to offer a degree of user interaction and control. This originates from my own experience using the software in a live capacity, as I wanted to be able to switch between various modes without closing the program. However, these controls are largely limited to that purpose, with the audio data affecting most elements of the sketch.
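As a hedged example of this principle (Mattox’s original code uses a different audio library and fft.spectrum[]; this sketch uses Minim instead, and the band numbers are arbitrary), spectrum data can be mapped straight onto shape size and colour:

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

void setup() {
  size(512, 300);
  minim = new Minim(this);
  in = minim.getLineIn();
  fft = new FFT(in.bufferSize(), in.sampleRate());
}

void draw() {
  background(0);
  fft.forward(in.mix);
  // one low band drives size, one high band drives colour
  float low = fft.getBand(2);
  float high = fft.getBand(60);
  fill(map(high, 0, 10, 0, 255), 100, 200);
  ellipse(width/2, height/2, 20 + low * 10, 20 + low * 10);
}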

The visualisation should be highly audio responsive and operate smoothly. When used in realtime, the system has to run without noticeable lag. Although a little lag is perhaps not so noticeable to the general public, it is very noticeable to musicians and performers. For use alongside live audio, the software must respond quickly for optimum effect. As a result, various labour-intensive elements of the sketch should be reduced, such as the number of audio frequency bands to be visualised. Alternatively, the software can be run on a more powerful computer with plenty of RAM, allowing more frequency bands to be visualised.
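One way of reducing the number of bands, again sketched with Minim rather than the library used in the actual software, is to collapse the spectrum into a handful of averaged bands and draw only those, colour-organ style:

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

void setup() {
  size(512, 300);
  minim = new Minim(this);
  in = minim.getLineIn();
  fft = new FFT(in.bufferSize(), in.sampleRate());
  fft.linAverages(16);   // collapse the spectrum into 16 averaged bands
}

void draw() {
  background(0);
  fft.forward(in.mix);
  float w = width / float(fft.avgSize());
  for (int i = 0; i < fft.avgSize(); i++) {
    float h = fft.getAvg(i) * 10;
    rect(i * w, height - h, w, h);   // simple colour-organ style bars
  }
}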

The system should be visually interesting, but still readable. Much of my experience with this software has shown linear systems such as the colour organ to remain popular for audio visualisation. Systems based on Shiffman’s nested push and pop example have proved popular and highly visually engaging; however, any attempt to analyse the audio data in such visuals is difficult, as the visualisation is based more in movement than in geometric shapes corresponding to different frequency bands. Similarly, visuals based on the adjustment of a pre-rendered image are difficult to read, as the audio is not so much visualised as used to adjust another image. In contrast, the familiarity of the colour organ model has proved easier to read, amongst general audiences and musicians alike.

In designing the final visualisation method, I tried to use these findings, as well as further user and audience feedback, to develop the system. The audio data is applied to a broad range of variables that affect colour, shape size and layout. In order for the visual software to run smoothly, and to filter out those frequencies where there is little data for visualisation, I have significantly reduced the number of frequency bands. In terms of the general visual form, I have taken the linear method of audio visualisation modelled on the colour organ from Mattox’s audio visualisation code, applied it to a variety of different methods detailed in Shiffman’s work, such as the wave, and experimented with these forms. The result is a relatively familiar system for audio visualisation that offers multiple ways of visualising the data within this form. As a result, the user is offered plenty of choice in how they wish to visualise the data.

The final visualisation can be seen as highly derivative of the colour organ model; however, this accessibility proved highly popular with test audiences and users, whilst the varieties of data presentation within the linear form allowed for a wide variety of visual objects. The software went through multiple rewrites and updates before it reached its final form, so there is still some cleaning and labelling to be done. I chose to use a variety of organic- and inorganic-looking functions, based on waves, double helixes and the colour organ. Using the booleans and mapped keys, users can interact with the software, turning various shapes on and off as well as affecting the plotting of the data and the colour.

| September 30th, 2011

In designing this piece, I looked at a variety of different audio pieces that would be effective when played via the chair. Having built the chair quite some time ago, I was able to integrate it with my own audio playback equipment, allowing me to test the system alongside various pieces of audio-visual media, particularly music, films and computer games. Although I found a number of interesting audio pieces that I could use, there were also a number of issues that arose through this process. The main issue was the highly subjective nature of audio and music taste. The deep bass of dub reggae was highly effective, producing a variety of interesting responses from the shaker. However, were the potential user not a fan of dub reggae, then their enjoyment of the piece would likely be decreased. An alternative setup could include a 3.5mm jack, feeding an audio signal to the chair from the user’s own media player. However, there were also issues with such an approach: being unable to control the quality of the audio signal being played on the system, I would not be able to guarantee that the response would be effective.

The idea of using another composer or producer’s work was also somewhat fraught with problems. Alongside a selection of modern electronic and dance pieces (which utilise large amounts of bass), I looked at using a number of classical compositions. I had considered using a number of pieces by Beethoven, whose limited hearing was of interest to me: I wanted to see whether Beethoven’s later compositions (particularly those composed when his deafness was at its height) made particular use of bass tones, as it is likely that he would have retained an ability to perceive these low frequencies. The use of these pieces, particularly from Beethoven’s 5th and 9th symphonies, would also arguably make the piece more accessible due to the familiarity of the works and their motifs. However, I felt that using the system to play back a piece of music written by someone else was somewhat inappropriate. Although the system is more than capable of reproducing such audio pieces, a number of test users suggested that I should compose and produce my own music for use on the chair. I was eager to do so, as such an approach appealed to my interest in audio composition and production. However, due to the time constraints in the production schedule I felt I should postpone such a production and not use it for this particular installation.

As interactivity and human-computer communication are central to the piece, I felt that I should in some way enable the user to decide on the signal that is to be played via the system, or at least be able to manipulate that signal. Whilst discussing the project with Adam Parkinson, he suggested that I construct some form of synthesiser for use in conjunction with the system, allowing users to generate and modulate a tone for multimodal playback. I had initially thought of using a variety of low frequency and infrasonic samples in my composition, obtained from various field recordings including engines and large animals, before detuning these samples by various octaves in order to lower the frequency of the audio signal. However, for the purposes of this installation I have chosen to build a simple digital subtractive synthesiser. I chose to use subtractive synthesis based on waveform generation and modulation following discussion of the project with music producer and DJ Mark Lowry, who first taught me music production many years ago. It was suggested that synthesis would provide a ‘purer’ form of sonic energy, particularly when working with infrasonic audio, maximising the clarity of signal feedback. The synth includes a number of oscillators for producing a variety of different waveforms (cosine, rectangle, triangle and sawtooth), as well as a filter with adjustable cut-off frequency and resonance. The synth also includes a binaural beat function, with which the user is able to create a binaural/monaural beat effect as well as modulate the frequency of the beating.
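The synth itself is built in MaxMSP, but the beating principle is simple enough to sketch in Processing with Minim: two oscillators a few hertz apart, one panned to each ear. The frequencies, waveform and amplitudes below are placeholders, and the filter stage of the real synth is omitted:

import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil leftOsc, rightOsc;

void setup() {
  minim = new Minim(this);
  out = minim.getLineOut(Minim.STEREO);
  // two oscillators a few Hz apart: here 200 Hz and 208 Hz give an 8 Hz beat
  leftOsc  = new Oscil(200, 0.4, Waves.SAW);
  rightOsc = new Oscil(208, 0.4, Waves.SAW);
  // hard-pan one oscillator to each ear for a binaural beat;
  // summing both into each ear instead would give a monaural beat
  Pan panLeft  = new Pan(-1);
  Pan panRight = new Pan(1);
  leftOsc.patch(panLeft).patch(out);
  rightOsc.patch(panRight).patch(out);
}

void draw() {
  // controller dials would adjust the frequencies here, e.g.
  // leftOsc.setFrequency(f); rightOsc.setFrequency(f + beatFrequency);
}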

This particular setup was chosen for a number of reasons other than those already mentioned. Firstly, the user-controlled frequency and waveform amplitude enable the user to effectively ‘sculpt’ the waveform with the dials, modulating the signal that we will eventually hear, see and feel. Secondly, research into AVE (audio-visual entrainment) has suggested that exposure to binaural beats can have an entraining effect on the user’s biosignals. Given the use of realtime biofeedback monitoring equipment in the piece, these apparent effects of the audio/visual/tactile signal should be visible via the biosignal visualizer, allowing for monitoring of potential trimodal entrainment. Thirdly, the pulsing sensation produced by the system was apparently familiar to many test users, resembling the sound of an engine turning over as well as the distinctive LFO-produced bass ‘wobble’ frequent in a lot of modern electronic music (particularly common in dubstep and drum and bass, pieces from which were used in many of the system tests). Finally, the ability of the user to effectively sculpt their own waveform for playback bore some resemblance to the Penfield Mood Organ from Philip K. Dick’s Do Androids Dream of Electric Sheep?, after which the piece is named. Using the Mood Organ, characters from Dick’s story are able to dial a particular frequency wave which elicits a particular emotional response. Given my interest in measuring user biosignal response to a particular signal, such a system for signal production and modulation seemed ideal.

In constructing the user interface for the waveform generator, I decided to use a simple, seemingly intuitive interface in the form of the Akai LPD8 MIDI controller. My experience using this particular interface in the AAMP project suggested that its simplicity, utilising 8 pads and 8 rotary dials, was appealing to users, encouraging interaction with the system. The 8 dials are used to set the carrier frequencies (via the left and right audio output channels), the frequency of the binaural/monaural beating (determined by adjusting the difference between the two waveform frequencies), the cut-off frequency and resonance of the filter, and the amplitude of the 4 different waveforms. The pads were also set to adjust the frequency of the wave, but in a manner more familiar from traditional audio reproduction devices and instruments: the 8 pads, when pressed sequentially, produce an octave scale starting and ending at C. This is to highlight that the system is essentially modelled on a musical instrument, much like Dick’s Mood Organ. The instrument also includes a graphical user interface (GUI), providing visual information on the waveform using a number of oscilloscopes and number boxes, as well as a visualisation of the audio signal in a similar manner to that used to visualise the biosignal. The aim is to provide users with two similar graphic visualisations, allowing them to compare their own biosignal visualisation with that of the audio signal, so that participants can observe any entrainment that occurs with relative ease by referring to both.
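The mapping itself is done in Max, but for anyone sketching a similar mapping in Processing, the MidiBus library exposes the LPD8’s dials and pads through simple callbacks; the device index, CC numbers and ranges below are hypothetical, not the actual assignments:

import themidibus.*;

MidiBus midi;
float leftFreq = 200, rightFreq = 208;

void setup() {
  MidiBus.list();                    // print available MIDI devices; find the LPD8's index
  midi = new MidiBus(this, 0, -1);   // input device index 0, no output (adjust as needed)
}

void controllerChange(int channel, int number, int value) {
  // hypothetical CC assignments: dial 1 sets the carrier, dial 2 the beat frequency
  if (number == 1) leftFreq = map(value, 0, 127, 40, 400);
  if (number == 2) rightFreq = leftFreq + map(value, 0, 127, 0, 30);
}

void noteOn(int channel, int pitch, int velocity) {
  // the 8 pads would step through an octave scale from C to C here
}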

To fit the shaker to the chair, I went about designing a fixing bracket. At first I had thought about modifying the original bracket that came supplied with the chair. However, fitting this particular bracket proved problematic. Firstly, sections of it would have to be removed, which would require significant cutting of the bracket at rather uncomfortable angles. Secondly, to fit the bracket to the chair I would have to screw or stud the bracket into the wooden chair, reducing the structural integrity of the chair. As this fitting would later support the vibrating shaker, this integrity would be further reduced as time went by. Eventually the bracket would likely shake free from the chair, and would not be able to be refitted due to the damage to the wood.

I spent some time discussing the project with (Steve Rowland?) at the Fine Art metal workshop, and with his help designed an alternative system with which to fit the shaker to the chair. This involved two identically sized, square steel plates with corresponding holes. One of these plates was then drilled with further holes corresponding to the screw fittings on the Buttkicker. These plates then fit together using 4 large bolts, sandwiching the centre of the cone. The shaker was then fitted to the outer plate via the previously mentioned holes. The system effectively creates an audio-responsive vibrating pad at the cone’s apex. When the cone is suspended from the ground via the chair’s base, the shaker is free to vibrate, with these vibrations being conducted via the chair’s cushion to the user. The system is quite effective for listening to music, effectively serving as a silent subwoofer. The unit makes very little noise, save for the resonance of the bamboo; however, the shaking can be felt significantly through the floor when not sat in the chair. This can have the effect of conducting the audio signal to whatever should be in close proximity, which can result in some interesting audio effects.

Low frequency tones seem to be best conducted at the apex, with the sensations moving outwards from the apex relative to the pitch of the tone. Should the user extend their hands and hold on to the outer rings, or lean back, bringing their neck or skull into contact with the outer ring, the sensation is even more pronounced, with many test users claiming they could feel the sensations more clearly and in some cases throughout their entire body. This was an interesting effect, given my interest in bone conduction for audio reproduction. The chair could be seen as working similarly to the Baha implant, a hearing implant for the deaf that operates on a related principle: audio signals are conducted directly via the implant into the bone of the skull, effectively bypassing the traditional system for audio perception. The cochlear implant seems to function in a similar manner, with an electronic audio signal being conducted via the tissue in the ear rather than through the traditional audio transduction process in human perception. These ideas of mechanical human augmentation and methods of effectively cross-wiring, hacking and short-circuiting traditional methods of sensory perception were very interesting given my interest in transhumanism.

The chair can be seen as a caricature of the Baha implant, subwoofer or loudspeaker, and with its complementary lighting system based on a TIP31 transistor, provides two sensory modes of feedback: tactile and visual. An electrical audio signal is transduced into kinetic energy via the power amp and bass shaker, whilst the same signal, fed via the transistor, modulates the power sent to the LEDs. As a result, the lights can be seen to pulse relative to the beating of the binaural/monaural wave. I had intended for the signal to respond in such a manner, given my reading into flicker stimulation and early brainwave entrainment experiments. This direct relationship between sight, hearing (via the headphones or the resonance of parts of the chair) and touch creates a highly immersive environment, intended to dissociate the user from other stimuli and encourage them to concentrate on the signal played via the system.

MIP: System Development

| May 21st, 2011

For this project, I wanted to develop a system based on the idea I’d had for an audio filter for mobile use. However, given the project’s brief about engaging with spaces throughout the city, I thought it would be better to develop it as a technology installation rather than as a mobile app. Given that I was going to design the system using MaxMSP rather than Pd, as I am more familiar with it, I thought I’d design it as a Max-based app which could then be developed and ported to Pd, enabling me to embed it in a mobile device.

The current system spec runs the audio signal from a microphone through a number of variable filters. Using the Max biquad~ object and its associated help file, I was able to set up a pair of filters which can be set to high-pass, low-pass and band-pass. There are also additional gain controls.
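By way of illustration (this is not the Max patch itself), the per-sample maths that a biquad filter performs can be written out as a small class; the low-pass coefficients here follow the widely used RBJ audio EQ cookbook formulas:

// Direct-form I biquad low-pass; cutoff and Q are set at construction.
class BiquadLowPass {
  float b0, b1, b2, a1, a2;
  float x1, x2, y1, y2;   // previous inputs and outputs

  BiquadLowPass(float sampleRate, float cutoff, float q) {
    float w0 = TWO_PI * cutoff / sampleRate;
    float alpha = sin(w0) / (2 * q);
    float cosw0 = cos(w0);
    float a0 = 1 + alpha;
    b0 = ((1 - cosw0) / 2) / a0;
    b1 = (1 - cosw0) / a0;
    b2 = ((1 - cosw0) / 2) / a0;
    a1 = (-2 * cosw0) / a0;
    a2 = (1 - alpha) / a0;
  }

  float process(float x) {
    float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
    x2 = x1; x1 = x;
    y2 = y1; y1 = y;
    return y;
  }
}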

So far this was quite good, but a bit simple. I wanted to develop the system further, making it interactive and appealing to users willing to engage with the technology. I remembered the user feedback I’d had from a previous project, The Echo Chamber, which used realtime delay feedback looping of a signal from a microphone in a multi-speaker surround sound setup. Many users found that system really engaging, making full use of the distributed props (crisp packets, pens, cans) to make their own rhythms. I thought that this technique could be integrated into the design, allowing the user to manipulate the audio further and use the installation as a sort of drum machine.
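The looping in the installation is built from Max delay objects, but the principle, a delay line whose output is fed back into its input, can be sketched as a small class (buffer length and feedback amount are placeholders):

// A minimal delay line with feedback.
class FeedbackDelay {
  float[] buffer;
  int writeIndex = 0;
  float feedback;

  FeedbackDelay(int delaySamples, float feedback) {
    buffer = new float[delaySamples];
    this.feedback = feedback;
  }

  float process(float input) {
    float delayed = buffer[writeIndex];               // the sample written delaySamples ago
    buffer[writeIndex] = input + delayed * feedback;  // feed the echo back into the line
    writeIndex = (writeIndex + 1) % buffer.length;
    return input + delayed;                           // dry signal plus the echo
  }
}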

In terms of the interface, I intend to use some form of midi controller utilising pads and dials rather than a traditional keyboard interface. This is due to some concerns that were raised regarding accessibility. When I was talking to potential users, many raised the point that they were not musicians, or at least not classically trained musicians. As a result, they felt that they probably wouldn’t interact with a system that made use of keys. Instead, I looked at the Akai LPD8: a laptop controller with 8 pads and rotary dials. There were a number of features that drew me to this particular interface:

1. It is small and simple, taking up little space whilst limiting the complexity of the interaction. As the piece is designed for a public installation, I felt that a degree of discretion would be good, so as not to attract undesired attention from vandals, whilst the simplicity of the interface would promote interaction by not being intimidating.

2. The interface has a cosmetically appealing feature: when pressed, the pads light up. I thought this would also promote interaction, as there is both an audio and visual response to user interaction.

3. The low cost of the unit. LPD8s are widely available off the shelf at about £50, making them cheap and easy to replace if damaged.

So far I have integrated the controller into the sketch, using the pads to control the delay feedback effects and the dials to control the filters and the gain. There are still three rotary dials that are unassigned, which could be used to control other effects, such as reverb, distortion and a flanger. Such effects are common on modern electronic music equipment, such as the Numark Axis 9 CD player, one of my first DJing equipment purchases many years ago.

In terms of installation locations, I wanted to pick a number of areas in close proximity to each other, with varied soundscapes and a degree of local cultural significance. Having lived near the Ouseburn valley for many years, I was quite aware of the varied architecture and topography of the area. My interest in doing a project there was further increased when I explored the more wooded part of the valley, which is currently undergoing development. These wilder areas seem to stand in stark contrast with the geometric shapes of the bridges, and the sounds of the area are a varied mixture of wildlife activity and the sounds of traffic (road, rail and pedestrian) overhead. A number of footpaths and byways cut through the area, under and over the various bridges. I think this would be good for user interaction, as the units can be ‘discovered’ by potential users as they walk through the area, which in turn encourages them to engage with the locations as they stop and look around.

I’m quite eager for users to use their own headphones with this particular interface, for a number of reasons. Firstly, it reduces the costs and maintenance of the unit, as headphones installed in outdoor public locations would likely be damaged by the elements, potentially becoming mouldy and infested with insects. Secondly, it serves as an additional means of engagement. Lone commuters often carry headphones with them, and a pair of plastic buds is a common sight in people’s ears; many people, it would appear, would rather listen to their own music than to the area around them. The system encourages them to disengage from their personal media and actively engage with the sound of the area around them. By offering a degree of control over the ambient audio, the user is encouraged to use the system to reinterpret the sounds of the area. Given the widespread use of headphones, we can assume that there are plenty of potential users. Whether someone would be willing to plug their headphones into an open, public audio port is another matter.

Following the submission of the project document and the installation of the demo at OnSite, I have had a number of thoughts about the current technology setup. There are certain elements, particularly during the installation, that have proved bothersome, and I would likely redesign the unit to account for this.

One potential alternative to the current setup would be to use contact mics rather than the stereo condenser mic. If contact microphones were installed on the structures, they would pick up the vibrations caused by traffic, be it road, rail or pedestrian. I quite like this idea; however, there are a couple of reasons why I thought it would be better to record all ambient sound rather than using contact mics:

1. By using the current setup, the unit is able to pick up audio from a much wider variety of sources; if I were to use contact mics, the unit would only pick up vibrations within the structures. The Ouseburn area produces a broad variety of sounds, from the transport network that runs overhead to the local wildlife and farm and the flow of the Ouseburn river below. By having multiple locations situated around the valley, the technology draws attention to the varied soundscape of the area.

2. Multiple artists have installed contact microphones in structures in order to create audio reinterpretations of those structures. Although I find this idea appealing and I am very interested in the subject, I feel that such an installation would be almost too derivative of artists such as Mark Bain and Jodi Rose.

I am still very interested in using contact mics in a piece, possibly using them as the audio source in a similar setup. Given the high flow of traffic over the Tyne Bridge, as well as the tower structures at each end, I feel that this location would be a really interesting site for such an installation. Will Schrimshaw made really interesting use of this location at the AV Festival in 2010, installing multiple loudspeakers in the tower structures. I think a similar setup would be really interesting; however, rather than using a looped signal to determine the audio, I would use a series of contact microphones to trigger sub-bass frequencies. These frequencies would then be played via the loudspeakers, resonating within the structure and creating a feedback loop, much like the use of audio looping in the current setup.

Another issue that I have encountered when working on this project is that of microphone-to-line-level signal conversion. In the project book, I refer to the need for the computer to have a microphone level input, or a soundcard with such capabilities. For the exhibition installation I used a Mac Mini, which unfortunately does not have such an input; therefore, I had to run the microphone signal via a mixer to bring it up to line level. For a permanent installation of the project, I would likely not use a Mac Mini, given the cost of the unit and the lack of a microphone level input. Instead, I would use a cheaper unit fitted with either an external or internal USB sound card, or use a reconditioned second-hand laptop as the basis for the unit. However, given the client’s requirements as well as the timescale of the current project, such a design and (re)construction would be unfeasible.

Another issue is that of signal loss and distortion during the installation. For the original setup, I planned on using a 10 metre audio extension cable so that I could install the microphone high in the side of the railway viaduct. However, due to the buzz that such a long cable created, as well as the proximity of some particularly large-looking cables, I decided it would be best to install the microphone much lower and closer to the unit, minimising the need for extension cables and reducing the loss of signal. Were I to install the piece again, I would probably use wireless mics, allowing the unit to be installed at a significant distance from the microphone without the need for long, potentially vulnerable cables that could cause distortion or loss of the signal.

I would also like to make some further modifications to the programming of the unit. Firstly, I would like to make the whole unit more ‘tamper-friendly’. This is a result of watching people use the interface without understanding it, sometimes resulting in them changing the interface program. One alternative would be to make each program on the MIDI interface identical, so that such problems would not occur. Alternatively, the program buttons could be covered with a piece of plastic or deactivated so that this does not happen.

I am also considering including a number of additional features with the unit, possibly integrating drum loops that could be controlled via the top, unassigned dials. This would emphasise the rhythmic nature of the sound, whilst giving the unit additional potential as a musical instrument. These dials could alternatively be used to control additional effects, such as reverb, distortion and a flanger.

Finally, I think that the unit could include clearer instructions. I have tried to include instructions for use, to some beneficial effect: people seemed intrigued by the unit at the launch and many interacted with it. Part of the difficulty of writing instructions is deciding on the level of detail to include, wanting to provide sufficient information without patronising or confusing the audience. I decided to include simple instructions in regular font, with more detailed instructions in italics. The idea behind this was to provide simple, accessible information for general users, whilst providing more detail for those who desire it. I also included a hint on how to use the pads to create more interesting effects.

MIP: Designing an App

| May 15th, 2011

One of the things that has come up when discussing sound walks and headphones with a few friends and colleagues is the additional sounds people claim they are able to pick up on when they listen to the sounds of the city via a microphone and headphones, as opposed to without such technology. I thought this was quite interesting, given my interest in the brain and its processing of stimuli. It reminded me of some reading I’d done on the ‘cocktail party effect’, i.e. the brain’s ability to interpret various audio signals, filtering out those that might be deemed extraneous to the situation. This can be observed in people’s ability to focus on one conversation over many others in a crowded, noisy room, hence the name. Somehow, the sense of dissociation that accompanies listening to the world via a microphone and headphones seems to negate this particular ability.

When looking at urban and suburban audio, the effect is similar. Without this cognitive ability, the urban soundscape would likely be unbearable due to the chaotic noise. In A New Sense of City Through Hearing and Sound, Eva Kekou and Matteo Marangoni discuss the chaos of urban sound:

There is a paradox between the fact that cities are highly structured spaces in which almost everything one senses has been processed through a human brain to be orderly, and the fact that interactions therein are far too complex to be controlled.

I wanted to design a technology that somehow works as an artificial, controllable form of this inherent audio filtering ability. Using the mobile application template, I thought about designing a way of running the phone’s microphone input through a number of variable audio filters. The audio would then be fed out of the phone’s 3.5mm audio output to headphones. This technology would not be particularly difficult to design and implement, given Pure Data’s compatibility with a number of mobile devices.

This has a couple of applications. Firstly, and more socially beneficial, the technology could be adapted to deal with various audio/noise issues. In a discussion with a colleague who had suffered a stroke, I was told that since then he finds it incredibly difficult to focus on sound: conversation can often be difficult when there are several sources of sound in the area, as he finds it impossible to concentrate. This application could be quite helpful, enabling him to filter out undesired noises. The second use, a little more creative, is as a filter for use in an instrument, a sort of ambient subtractive synthesiser. I feel that this could make for an interesting piece of technology, offering a fresh perspective on the city soundscape.

MIP: Work By Another Artist

| May 15th, 2011

I was really interested in the Kittler reading, particularly the section referring to transport networks and infrastructure as systems for the flow of information. Given Newcastle’s metropolitan design and busy traffic, the city can almost be seen as sitting at the centre of a massive communication network. This can be seen in the extreme busyness of the Tyne Bridge during the rush hour commute; at times it has taken me nearly an hour to get across that bridge in a car.

I wanted to bring the work of Jodi Rose and her Singing Bridges to the class. The project outline introduces the concept:

“‘Singing bridges’ is a sonic sculpture, playing the cables of stay-cabled and suspension bridges as musical instruments. To create this work I will amplify and record the sound of bridge cables around the world. Listening in to the secret voice of bridges as the inaudible vibrations in the cables are translated into sound.”

I thought this idea was really interesting, as it is a new way of using the noise of some of the most architecturally impressive structures humanity has ever designed. Bridges serve as the linking points in Kittler’s urban information network, the connectors in the circuit. A byproduct of this is extreme levels of noise, which applies to bridges as well as many other points within the transport network. I have had the fortune in my time in Newcastle to live in a ground floor flat next to a Number 1 route bus stop, and some of the noises when a bus is idling at the stop are incredible. The walls of the flat filter out much of the higher-end engine ‘chug’; however, there is a powerful, strangely warm rumble that can be heard, and to some degree felt, in the walls.

I thought Jodi Rose’s project was really interesting as it is a form of reinterpretation of that noise byproduct of metropolitan living. I also thought the idea could be used in Newcastle, given the city’s 2007 award as the country’s noisiest city.

Looking back over the design of this piece, I have noticed certain recurring themes in my work. The more I work with the visualisation of audio signals, the more interested I become in the notion of signal modulation and aesthetics. My work with TIP31C transistors is one example of aesthetic signal modulation: the audio data fed into the circuit via the transistor effectively modulates the signal in the circuit, affecting the brightness of the LED. My Processing-based audio visualisers work in a similar manner: a shape is drawn, using the audio variable to control size, translation and colour, and this variable is effectively modulated by variations in the signal being fed to the FFT. The FFT (fast Fourier transform) is also of interest as it fits with this notion of signal aesthetics: using the FFT, a signal is broken down into its component oscillations. What I find most interesting about this application of audio signals to visual media is the sense of synchronicity. As both the audio and the visual are run off the same signal, they share the same source data; the data is just plotted in a different manner.

This whole concept of brainwave entrainment and its potential for therapeutic use is somewhat reminiscent of the Penfield Mood Organ from Philip K. Dick’s Do Androids Dream of Electric Sheep? The device, which sadly does not appear in Ridley Scott’s adaptation, is a home appliance for the regulation of emotional states. The appliance is seemingly based around the Penfield Wave Transmitter, a similar tool that can also be used for the projection of emotional states. This concept is really interesting given the nature of my research. Wahbeh, Calabrese and Zwickey’s 2007 study on the psychological and physiological effects of binaural beats suggests that exposure to delta (0-4 Hz) binaural beat frequencies can have positive psychological and physiological effects. If this is a result of brainwave entrainment, then it fits with the concept of the Penfield Wave Transmitter: through the use of specific frequency signals, one can affect neurological activity, and effectively emotional states.

I find this reference to Penfield of further interest due to some of the reading I have been doing over the last year. About a year ago, I read Oliver Sacks’ The Man Who Mistook His Wife for a Hat. One chapter, entitled Reminiscence, refers to a number of cases where patients would hear music when there appeared to be none. In the chapter, Sacks refers to Wilder Penfield, the Canadian neurologist who invented the Montreal Procedure. This procedure, in which the patient remains conscious under local anaesthetic, allowed Penfield to map the function of the various areas of the brain through stimulation and observation. A similar procedure was performed on banjo player Eddie Adcock during an operation to treat a tremor in his hand: Adcock played his banjo whilst surgeons continued with the deep brain stimulation, allowing them to test the effect of the neurological implant on Adcock’s motor skills and the effectiveness of the treatment.

During Penfield’s experiments, he was able to identify the source of the elaborate mental states felt by many epileptics during the onset of a seizure. Looking through Penfield’s notes, there are multiple cases where he was able to elicit this hallucinatory state through the precise stimulation of areas of the temporal lobe. This stimulus, as Sacks puts it, “would instantly call forth intensely vivid hallucinations of tunes, people, scenes, which would be experienced, lived, as compellingly real, in spite of the prosaic atmosphere of the operating room, and could be described to those present in fascinating detail.” These hallucinations, on further inspection, would appear rooted in the experiences and memories of the patients. Many recalled hearing songs they associated with their youth, often not recognising the tune but being able to sing along. Furthermore, when the stimulus was less precise, Penfield observed an apparent blending of memories. In one case, that of a young boy, the patient would during stimulus observe a bank robbery in his hallucination, without ever having witnessed such an event. Additionally, there was no sound in the hallucination: it was purely visual. When the stimulus was less precise, with the signal spreading to other areas of the lobe, the patient appeared to have a hallucination in which further elements of his own memories were merged with the bank robbery. It transpires that the boy was a comic book reader who had read a comic involving a bank robbery. This, it would appear, explained the absence of audio from his initial hallucination, as the boy would have had no contextual understanding of the sounds associated with a bank robbery. It also seems to suggest that memory and perception are a series of sensory snapshots, stored in the brain and accessible via stimulation of certain areas. This is somewhat fitting with the analogy of the human computer, suggesting that memories are stored like files on a hard drive.

Penfield’s work is of further interest when exploring this analogy of the human machine. The Montreal Procedure, from what I am able to interpret with my very limited experience of neurosurgery, involves the probing of various neurological tissues and the observation of the patient’s response. Penfield appears to use this technique to identify areas of scarred and damaged tissue, removing those areas where removal would not have a negative effect on the patient. In the analogy, these areas of damaged tissue correspond to faulty connectors in a circuit: just as a faulty connection might result in a short circuit or electrical discharge, these damaged tissues can cause an electrical discharge within the brain, a seizure.

I’m interested in exploring these ideas further. I have been experimenting with photo editing, coming up with a few prints that relate to this idea. I am currently referring to this one as the electronic homunculus.

I’ve also been experimenting with the Processing Image Adjust library, using audio signals to manipulate the contrast and gamma of an image. For the images, I have been using medical photographs, such as X-rays, MRI scans and retinal photography. These sketches seem to work really well with glitchy, downtempo audio with interesting dynamics, such as Squarepusher’s Conc 2 Symmetric from Do You Know Squarepusher? The ambience of the music also seems to fit the atmosphere of the images. I’ll post a link when I record a demo of the sketch.
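I won’t assume the Image Adjust library’s exact calls here, but the general idea, an audio level driving an image adjustment each frame, can be roughly sketched with Minim and Processing’s built-in tint() (the filename, level range and mapping are placeholders):

import ddf.minim.*;

Minim minim;
AudioInput in;
PImage img;

void setup() {
  size(640, 480);
  minim = new Minim(this);
  in = minim.getLineIn();
  img = loadImage("scan.jpg");   // placeholder filename for a medical image
}

void draw() {
  background(0);
  // map the current RMS level of the input onto brightness;
  // the Image Adjust library would adjust contrast/gamma instead
  float level = in.mix.level();
  tint(map(level, 0, 0.3, 50, 255));
  image(img, 0, 0, width, height);
}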