The preview event itself went really smoothly from my perspective. Aside from one or two health and safety concerns around the wiring, which meant I had to move one of the sensors, everything worked really well. The testing I had performed gave me confidence that I wouldn’t hit any serious technical issues, and it was nice to have that confirmed.

I also decided to add a bench after discussions with Tim and Pete, and after observing how people interacted with the piece once everything was up and running. This turned out to be a good decision: people spent longer with the work and moved around a little more as they navigated past it, which meant the sensors triggered more often and the audio experience became more interesting.

The listening space with the lights on.

I also made a fairly last-minute decision to incorporate a subwoofer, which tied the sonic elements of the piece together really well. Sub-bass frequencies are by their nature far less directional than the rest of the spectrum, so the sub filled the space rather than appearing to come from any one point.

The feedback that I had was universally positive, with a number of people saying that it was their highlight of the show. The most positive indicator for me was that many people spent quite a long time interacting with the piece and were very interested in how I had put it together and how it worked. I was also pleased to hear that the projections and the audio tied in well together, and that as soon as one walked into the space the light from the projections on one’s body made one feel part of the installation. My initial concept was to create an immersive experience, and it was nice to have that confirmed by members of the audience.

The above photos are by Katharine Martin and John Bowers respectively.


Final Project, HSS 8123

In this post I’ll be using actor-network theory (ANT) to analyse the installation, its design process, the audience’s experience and their role as part of the piece.

ANT is a theory and methodology for describing how heterogeneous elements evolve, interact and connect with one another. These interactions form actor-networks and those actor-networks are present in anything which one cares to mention. As Callon (1987) notes: “An actor-network is simultaneously an actor whose activity is networking heterogeneous elements and a network that is able to redefine and transform what it is made of”.

What separates ANT from other approaches to social theory is the fact that it gives agency to non-human actors. It recognises that inanimate objects, abstract ideas and everything else can have a role within a network which is as important as that of a human. Those non-human actors can “add their relationships and identities into the collective” (Callon (1987)) and have an effect as profound as that created by a human being.

It is, however, beyond the scope of this essay to go into ANT in any detail beyond this brief overview. Instead we will focus on using ANT as a tool for analysing my project, its design process and the role of the audience. For more thorough introductions to ANT please see Law (1992) and Latour (2005).

In considering the design process of my piece we first have to consider what initiated the idea in the first place. The brief itself was open, so I was left to my own devices to come up with an idea to pursue. One of the very earliest ideas I had at the start of the course was to create something which generated music based on movement. It stemmed from a piece I saw at Leeds Sculpture Park. Immediately that gives us a potentially huge network to examine: the artist who built that sculpture, their training, the materials used to make it, and so on. However, I cannot remember anything about that piece other than what it did and the fact that I liked it, so what we must do here is punctualise that system and turn it into an actor, simplifying its contribution to the network which underpins my work. As Law (1992) writes:

“All phenomena are the effect or the product of heterogeneous networks. But in practice we do not cope with endless network ramification. Indeed, much of the time we are not even in a position to detect network complexities.”

The reason we punctualise these networks is most often one of simplification. Without those simplifications we would find ourselves analysing smaller and smaller degrees of agency, and no analysis realistically needs to do that. This ties in with the earlier Callon quote about all actors being networks: we accept that there are networks underpinning each actor, but also that those networks do not always need to be analysed. Essentially, we create a black box, a sociological term for a system that simply works as it should. These black boxes are inherently unstable: they can ‘leak’ as the networks underlying them evolve, or open up entirely if those systems fail. One example would be a working television. A working television is just that, but if it fails then, to an electrical engineer, it reveals its underlying networks and becomes a complex system of PCBs, lighting panels, power regulators and so on. In other words, it becomes a network again.

I have digressed slightly, but the point I am trying to make is that the initial idea that stemmed from my visit to Leeds Sculpture Park can be left as just that: an idea, or a black box, which does not need to be expanded upon in order to understand the process I went through in designing and building my work. Many of the other elements of my piece will also be left as black boxes for the purposes of this essay, as it is not necessary to understand their internal networks in order to understand the agency they have exerted on the work as a whole.

My next consideration was how to present such an idea. Back in semester one I put together a piece which used an octophonic speaker setup to give an immersive surround sound experience, using pre-recorded audio as its source with ultrasonic sensors providing a degree of interactivity by acting as effects parameter modifiers. I felt this was an effective platform for presenting audio work, so I decided to use it again for this project.

From there it was a case of putting together an audio system and a visual aesthetic, in the form of projections, that would work in conjunction with that multi-speaker system. I already had the basis for an audio system, having built a generative music system for Live Electronic Performance. A central tenet of ANT is that actor-networks are constantly in flux, and the system I built for Live Electronic Performance had to evolve to fit the needs of this project. It was not enough to simply run that system as originally programmed but with added interactivity; the system I was building needed to be vastly more complex to fit the overall theme of the piece: externalising the internal, showing its own creation, and having a great degree of mysterious and complex interactivity. The agency of that idea was greater than that of the original system, and so the system had to evolve to meet the demands of the idea.

This piece was originally conceived as one where the audience would have the greatest degree of agency. I wanted to create something with audio-visual impact and leave it up to the viewer to apply his or her own agency to it. What I found through making the work, however, was that the grandest agency, and hence the most important factors in design and construction, lay in theoretical ideas and in technology.

The use of multiple Arduinos vastly altered the design and programming process. It required a USB hub and associated cabling, which influenced the equipment used and the layout of the piece, which in turn influenced how the audience moved around and interacted with it. The simple addition of a bench in the middle of the listening space was another important factor. Initially I wanted to encourage movement around the listening area, but I noticed during testing that people were inclined to stay in one place and watch the projections, and so I added the bench. Here the audience were exerting agency over an important design element of the piece, and hence over their own experience of it. The bench meant that people spent more time observing the work than they might have done otherwise, and that they had to move around it and trigger the sensors more than if they had simply stood still, further shaping the listening experience. The impact this had upon the piece highlights an important idea in ANT, that of delegation: a process of translation whereby the evolution of one set of elements of the network (the incorporation of the bench) has a profound and inseparable relationship with the other elements (the overall user experience).

“Translation involves creating convergences and homologies by relating things that were previously different” (Callon (1981)).

Overall, the greatest agency, and thus the greatest influence over the piece, was that of ideas. My research into user interaction with art installations and museum exhibits (see elsewhere on this blog) made me realise that the piece needed a focal point in order to bring everything together. This led me to look into glitch theory (also elsewhere on this blog), which had a profound impact not only on the visual aspect of the work but on the audio too. Having created a strong visual glitch aesthetic, I realised I needed to reflect it in the audio, and so programmed in a number of ways for the audio to do things outside of my direct control. The influence of that set of actors radically altered the entire piece: not only its functionality and the user experience, but also, in a wider sense, where the work fits within art as a whole. I did not set out to create a work which did anything other than provide a striking experience for the user, but found that as actors exerted their agency over the design and build process it became impossible not to place my piece within an ideological framework. The network I created was, to a degree, beyond my control, as the agency of the elements which composed it brought it into being. It could be argued that I as the artist held the greatest degree of agency over the work, and objectively speaking that may be true. It would be reductive, however, to suggest that the process existed in a vacuum, independent of the often profound influence of other factors. It is in the analysis of those factors that ANT is at its most useful.


Final Project, HSS 8123

This post deals with the installation process itself. Overall it was pretty straightforward, more straightforward in fact than setting it up in my house had been. There were one or two small issues, though, which I’ll document here.

Firstly, for logistical reasons I decided to use Genelec 8050s instead of 8040s. This was partly because there were already 4 x 8050s in the studio, so there was less to carry around, and partly because they’re larger and have a better bass response. I also decided to add a sub into the mix, which I hadn’t initially planned on. I thought the response from the 8050s would be sufficient, but I found it didn’t quite have the same visceral quality as a really powerful sub bass, and so I used the very large Genelec sub in the studio. I believe it’s a 7380 SAM.

For the audio output I used a MOTU Ultralite Mk3. There were some issues with this in that the channel numbering was offset by two from what you would expect. After going through it with Clive and John, though, it didn’t turn out to be too big a problem. I assigned each oscillator in PD to a speaker, with most speakers handling more than one oscillator. In this way I was able to get a really nice spread and sense of space.
I also found that I had to tweak a lot of the levels and overall settings of the PD patch in order to optimise the audio. This was to be expected as the difference between the stereo pair that I programmed it with and 8 large Genelecs in a sound-proofed room is obviously a large one. This was pretty much a process I just did by ear and feel.

In terms of the Arduinos and ultrasonic sensors, I found that installation was a lot simpler than I expected. I was anticipating a repeat of the behaviour I saw in my home studio, where they were a bit fiddly to set up, with lots of unplugging and plugging back in needed to get them sending data over serial. Happily, it was simply a case of plugging them in and them working more or less right away. I’m not sure why this was the case, but I’m not about to question it. I attached the ultrasonic sensors to the speaker stands using sticky pads and the connecting wires using electrician’s tape.

The projections were done using a Honey Bear media player and an Optoma GT1080 projector. Again, this was a straightforward process because I’d tested it previously. I had to angle the projector to get it to fill the wall in the way that I wanted but I cleared that with Clive and made small chocks out of wood to help prop up its rear end.

Needless to say it was tremendously satisfying to see the whole thing running. I was very pleased with the overall effect and felt that the audio and visuals tied in together really nicely. I’m very much looking forward to showing it to the public!



Final Project, HSS 8123

This post is intended as a follow-up to my previous post about glitch studies. What I’ll be looking to do here is link some of those (and other) ideas with the A/V installation I’ve put together for my project.

The first question I’d like to answer is why I chose glitch as a theme at all. I didn’t approach the piece from the perspective of having a grand concept behind it which I wished to explore; rather, I wanted to create something that was an interesting experience for the audience on an aesthetic level and explore conceptual ideas as they emerged from that process. Fairly early on I decided that I wanted to find a way to externalise the internal and show something of the function of the piece and its creation. During the design process for the visual element I hit upon glitch as a way to do this. Manon and Temkin (2011) note:

“In mainstream popular culture, glitch is deployed not as a marker of artifice, but as a signifier of raw authenticity. It is a digital version of what Garrett Stewart, in describing the painstakingly hand-defaced faux newsreels of Citizen Kane calls authentication by disrepair.”

But what do we mean by ‘authentication by disrepair’? It is a central tenet of glitch theory that glitches disrupt the transparency through which digital processes create the illusion of an analog, thereby “making the digital material more opaque and visible” (Gross (2013)). Digital processes, therefore, present a form of artifice: a set of computational flows which let us see only one aspect of the machine and make the user experience as smooth and convenient as possible.

“Current digital rhetorics along with many traditional rhetorical practices cherish glitch-like events like those I mention here because they offer a possibility to render apparent that which is transparent by design.” – Boyle (2015)

Glitches invite us to break down those flows and engage with the man behind the curtain; they are a way of simultaneously experiencing the surface of the work and the hidden depths beneath it. We perceive a glitch as having an air of authenticity because it invites us to strip away that artifice and allow for the fact that technology is process, and that those processes are often flawed: as Pope said, to err is human. These ideas of stripping away artifice and acknowledging the flaws within the system are ones which I feel allow the audience a deeper connection to the piece than if it were simply presented as a smoothly functional black box. They let the audience engage with the piece on a surface level, but with an understanding (whether overt or otherwise) that beneath the surface lies a deep well of mysterious process.

When I was initially designing the glitch aesthetics for the visual elements of the work I made the decision to use After Effects and employ glitch style filters within that. This, I felt, allowed me to engage with glitch in a way which was straightforward and effective. The problem that I encountered when I read more into the subject was that by using filters I was, to some degree, failing to engage with the authenticity that is one of the most interesting aspects of glitch.

“This form of ‘conservative glitch art’ focuses more on design and end products then on the procedural breaking of flows and politics. There is an obvious critique: to design a glitch means to domesticate it.” – Menkman (2010)

What is of interest to me here is the idea of designed glitches being domesticated glitches. After all, how can one engage with a creative movement without attempting, in some way, to design a work which engages with it?
Glitches are by their nature errors, wild things existing on the border of function and dysfunction. By taming them and bending them to our will we remove that which gives them their wildness, and thus their authenticity. Glitch filters do exactly this: they show us only the surface effect of the glitch (the After Effects filter, for instance) rather than breaking down the processes underlying the simulacrum of the analog we engage with on our screens. They show us a mirror image of what a glitch could be, without the three-dimensional depth of the thing that creates the reflection.
So the natural question to ask here is: how do we engage with glitch authentically? We need to find ways of breaking flows, but with a degree of control that allows us to use those glitches creatively; to break our file without rendering it unintelligible to the programs we use to view it. It’s worth noting that “from the point of view of the file, whose genetic predispositions are rigid and fixed, there is nothing random about glitching” (Manon and Temkin (2011)); using a hex editor to replace ‘0AFR’ with ‘*1DE’ in a bitmap image will always give us the same result when we view the file in our image viewer. This rigidity gives us a degree of control whereby we can corrupt a file whilst maintaining its essential nature.

This was something I saw during the design process. I used Wordpad to corrupt image files (more HERE) and found that I was able to control how heavily a file was glitched by experimenting with the amount of code I deleted in the text editor. Theoretically, someone with enough knowledge of the underlying code would be able to fully control how an image is altered, though the code is so complex that for most people glitching remains an essentially random process. What I was doing here was bringing into existence ‘genuine’ glitches which broke the surface flows and showed something of what lies beneath; but, given that they were deliberately triggered, were they glitches at all? What I did was relinquish control of the image file, inviting the underlying systems to engage with the chaotic and to exist at the very edge of failure.

“Glitch art is process art: the artist‘s hand intervening in digital data leaves its mark in the visual essence of the image. The artist‘s process is not exacting, but an invitation of chaos: one triggers a glitch; one does not create a glitch. The limited amount of control the artist maintains is evident in the resulting image.” – (Manon and Temkin (2011)).
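The text-editor databending described above can be sketched in code. This is an illustrative Python analogue, not the Wordpad process itself: the header size and deletion parameters are assumptions, but it demonstrates the key property that the same edit always yields the same glitch.

```python
import random

def databend(data: bytes, header_size: int = 512, n_deletions: int = 8,
             max_chunk: int = 32, seed: int = 0) -> bytes:
    """Corrupt a file's body while leaving its header intact.

    Mimics deleting runs of 'code' in a text editor: each deletion
    removes a small chunk of bytes from a random point past the header.
    More deletions means heavier glitching; a fixed seed means the same
    corruption every time, just as a fixed hex edit does.
    """
    rng = random.Random(seed)
    body = bytearray(data[header_size:])
    for _ in range(n_deletions):
        if len(body) <= max_chunk:
            break
        start = rng.randrange(len(body) - max_chunk)
        del body[start:start + rng.randint(1, max_chunk)]
    return data[:header_size] + bytes(body)
```

Keeping the header intact is what lets the image viewer still open the file: the file is broken, but not unintelligible.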

In considering ways to incorporate the potential for glitching into my audio patch, I first had to consider what the patch was designed to do and then find ways to step outside of that. The patch is essentially a generative music patch that uses random number generators, plus data on the positioning of people within the space, to create sound. It has a complex system of what are essentially dice rolls which decide when to play a note and which note to play, and incorporates a number of musical effects such as delay, reverb, phasing and tremolo. Running as it should, the system simply creates musical sound: nothing but the clean notes and percussive elements it was designed to produce. To make it ‘fail’ I had to find ways to relinquish control of the sound. I did that in three ways: firstly, a modifier which ‘freezes’ the reverb at random times to create drone effects; secondly, the ability to randomly play a great many notes very quickly, creating overloads; and thirdly, the opportunity for the delay settings to max out at random intervals, which again creates overloads. These glitches show us that the system which creates the musical sound has a depth beyond the sound itself, an underlying process which can be engaged with to step outside the established musical paradigms. As with visual glitches, they are an invitation to engage with chaos, to go beyond the simulacrum of the analog and highlight the ‘noise’ inherent in the code which underlies digital processes.

“Noise can also stand for a (often undesirable, unwanted, other and unordered) disturbance, break or addition within the signal of useful data. Here noise exists within the void opposite to what (already) has a meaning.” – Menkman (2010)
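The three failure modes above are triggered by dice rolls in the PD patch. As a rough Python analogue (the exact odds per glitch here are illustrative, within the roughly 1-in-100 to 1-in-10 range the patch uses):

```python
import random

# The three 'failure modes' described above; the real patch implements
# each with a random number generator feeding a select box in Pure Data.
GLITCH_ODDS = {
    "reverb_freeze": 100,   # roughly 1-in-100 per note event (illustrative)
    "note_overload": 50,
    "delay_maxout": 10,     # roughly 1-in-10 (illustrative)
}

def roll_glitches(rng: random.Random) -> list:
    """Return whichever glitches fire on this note event.

    Each glitch is an independent dice roll, like [random N] feeding
    [sel 0] in PD: the glitch fires only when the roll lands on the
    single designated value.
    """
    return [name for name, odds in GLITCH_ODDS.items()
            if rng.randrange(odds) == 0]
```

Most rolls return nothing, so the system mostly behaves; occasionally a roll lands and the patch steps outside its own design.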



Final Project, HSS 8123

This post deals with the final video file which I’ll be using for the installation. It consists, essentially, of the same material looped three times, making it about 25 minutes long. It shows the code for the PD patch scrolling up the screen at various speeds, cut with images of roads which I datamoshed using Wordpad. I have I-frame moshed the sections with the scrolling code to give melting effects when transitioning from the images into the code, and the code sections have themselves been moshed using ByeByte. These techniques are all covered in previous posts.

Because it’s important that the video has an intrinsic link to the audio, I had to weigh the number of glitches I wanted to occur against aesthetic considerations. To do that I listened to the audio patch, made a few rough calculations as to how often glitch effects were triggered, and applied that to the video file as a percentage.

This video is unfortunately somewhat low-res because I had problems uploading to YouTube and had to convert it. The actual file I’ll be using is 1400 x 1050, to suit the 1080p short throw projector I’ll be using.



Final Project, HSS 8123

This post shows the initial projector test. This pretty much went to plan, the only sticking point being that I had to tilt the projector in order to get it to project onto the wall rather than onto the ceiling. In order to do that I’ll be making wooden chocks later this week to stick underneath it.

I found that while it looked good and filled the room, I did not like how some parts of the scrolling code only affected the left-hand side of the video, or the fact that the code did not fill the entire frame. I tweaked the composition and re-rendered the video to fix that.

I also found that the colours were somewhat washed out due to the fact that I’m projecting onto grey but there’s nothing I can really do about that. Other than that I was quite happy with the way it filled the room and the way the light and colour looked on the bodies of people in front of it.

Here’s a video of that first test. Again, I’ll post a video of the final thing once it’s fully installed.



Final Project, HSS 8123

This post deals with the final testing of the audio patch. It was logistically difficult to have the full system running at all times, so my approach was to test each system individually and then in combination, running in a static state without Arduino input.

A problem I encountered initially was that I could not find a way to save my slider settings in Pure Data. After a bit of research I found that right-clicking each slider and changing the ‘no init’ option to ‘init’ got around this. I also found it somewhat difficult to get all of the Arduinos functioning at the same time. I’m not sure what was causing this, but after unplugging the offending Arduinos and plugging them back in a few times they would eventually work. I’ve tested the system several times, and while it can be a pain to set up, it is always possible to get it working with a couple of minutes of fiddling with the connections. I also had to reassign the port numbers in Windows to make sure that the right Arduinos always showed up on the right ports. Again, I tested this several times by unplugging and replugging the Arduinos, and they now always come up on the correct ports. I’ve put numbered and coloured dots on each Arduino so I can quickly see which port each one is on and which program it’s running.
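The unplug-and-retry fiddling could also be captured in code. Here is a small, hypothetical Python helper in the spirit of what I was doing by hand; `open_port` would wrap something like pyserial’s `serial.Serial('COM3', 9600)` (port name and helper are illustrative, not part of my actual setup):

```python
import time

def open_with_retries(open_port, retries: int = 5, delay: float = 1.0):
    """Try to open a flaky serial port, retrying on failure.

    `open_port` is any zero-argument callable that returns a connection
    or raises OSError. This just automates the plugging-and-replugging
    described above: keep trying, pause between attempts, give up after
    a fixed number of failures.
    """
    last_err = None
    for _ in range(retries):
        try:
            return open_port()
        except OSError as err:
            last_err = err
            time.sleep(delay)
    raise last_err
```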

Once I had the 7 systems working statically and together, the only change I had to make during final testing was to balance the relative volume levels, which was relatively easy to do. I still need to figure out which oscillators to send to which speakers in the 8-speaker setup, and that’s something I’ll do when I get my hands on a MOTU later this week. I’ll also be posting audio of the full system in action once I have it installed and running next week.

In the meantime here is a video of the final system in action.


Final Project, HSS 8123

You can download the patch HERE.

In this post I’ll be describing the Pure Data patch that I’ve put together for the audio component of my A/V installation.

This patch is based around the generative system I put together for my Live Electronic Performance module. I’ve recycled and recombined certain things from that, including the main oscillator and FX components and the click generators. I’ve also integrated a patch written by John which deals with bringing in and reading the data from the Arduinos over serial.

The patch consists of 7 separate systems, each increasing in complexity. It incorporates the potential for most of the oscillators to glitch in one of three ways: delay feedback overload, note generation overload and reverb freezing. The glitches are triggered using random number generators and select boxes, with the odds ranging from around 1 in 100 up to around 1 in 10.

Most of the systems have Arduino inputs which can affect part of another system, making them interlinked. Arduino 8 was originally going to drive a sample player, but I decided that wasn’t necessary and instead used it to affect part of one of the 7 systems at random. Everything you hear is created using oscillators in Pure Data, with no external sound input such as samples used at all.

Some of the systems also feature click generators which simply produce a very short blast of white noise, resulting in a percussive sound. This is then fed through several delay modules which create variances in tone and rhythm.
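As a sketch of how the click generators and their delays interact, here is a Python analogue, with plain lists standing in for the noise source and delay lines in PD; the burst length, delay time and feedback amount are illustrative:

```python
import random

def click(length: int = 64, seed: int = 0) -> list:
    """A short burst of white noise: the percussive 'click'."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(length)]

def feedback_delay(sig: list, delay_samples: int,
                   feedback: float, total: int) -> list:
    """A simple feedback delay line: each echo is a quieter, later copy.

    Stand-in for the delay modules in the patch; varying the delay time
    and feedback is what creates the variances in tone and rhythm.
    """
    out = [0.0] * total
    for i, s in enumerate(sig):
        out[i] += s
    for i in range(delay_samples, total):
        out[i] += out[i - delay_samples] * feedback
    return out
```

Feeding one click through several such delays with different settings gives a small rhythmic cluster from a single trigger.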

System 7 features a melody generation engine which works using random number generators and select nodes. When a note is triggered it has a 50% chance of triggering another oscillator within that system, which in turn has a 50% chance of triggering another, and so on. I’ve also put time delays in to space the notes out.
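The 50% chaining behaviour can be sketched in Python. The length cap and note range here are my additions to keep the sketch bounded and self-contained; the real patch is bounded by its own oscillators and time delays:

```python
import random

def melody_burst(rng: random.Random, max_len: int = 16) -> list:
    """Generate one melodic burst.

    A first note always plays; each note then has a 50% chance of
    triggering the next oscillator in the chain, and so on, mimicking
    the coin-flip chaining in system 7.
    """
    notes = [rng.randrange(128)]           # the triggered note always fires
    while len(notes) < max_len and rng.random() < 0.5:
        notes.append(rng.randrange(128))   # 50% chance of chaining on
    return notes
```

On average a burst is about two notes long, but occasional longer runs emerge, which is what gives the system its phrase-like quality.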

The systems themselves work on two basic principles. The first three use the 0-127 note selector patch that I covered in my previous post, and simply play a note based on the distance the user is from the sensor. The next three use the minor scale Arduino patch, which adds a musical scale interval to a pre-determined root note, and the last two use the harmonic minor and melodic minor scales to modify a pre-determined root note in the same way. This combination of pure random note generation and musical scaling allows for a sense of musicality while maintaining an element of musical randomness.
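The two note-selection principles can be sketched in Python. The scale tables and the sensor range here are assumptions for illustration; the real mappings live in the PD abstractions described in earlier posts:

```python
# Scale intervals in semitones above the root (standard definitions).
NAT_MINOR = [0, 2, 3, 5, 7, 8, 10]
HARM_MINOR = [0, 2, 3, 5, 7, 8, 11]
MEL_MINOR = [0, 2, 3, 5, 7, 9, 11]

def raw_note(distance_cm: float, max_cm: float = 200.0) -> int:
    """First principle: map sensor distance straight onto notes 0-127.

    The 200 cm sensor range is an assumption for this sketch.
    """
    return min(127, int(distance_cm / max_cm * 127))

def scale_note(distance_cm: float, root: int, scale: list) -> int:
    """Second principle: pick a scale interval and add it to a fixed root."""
    degree = raw_note(distance_cm) % len(scale)
    return root + scale[degree]
```

The first mapping is fully chromatic and unpredictable; the second constrains the same sensor data to a key, which is where the sense of musicality comes from.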

The system is automated for convenience and can be started at the push of a button.

Here you can see the first two systems. The first, in yellow, simply creates a note and modifies it slightly using an envelope generator. The second does the same thing but adds a delay. It also adds a glitch generator (the ‘sel 0 1 2 3 4’ box) which has the potential to max out the delay.

To the right of that, in red, is the start of system three, which is further illustrated below. It adds a reverb section and a white noise oscillator, and incorporates two potential glitches: delay overload and reverb freezing.

System 4, in green, is shown to the right of system 3. It features the same elements as system 3 but includes a phaser. It also incorporates the potential for a note generation overload glitch, and a click generator, which is shown below. It is the first of the systems which uses the note interval modifier principle instead of direct note generation.

The above three images show system 5. This is exactly the same as system 4 but features two oscillators instead of one. This system is set up to produce bass and sub-bass frequencies.

The above 4 images show system 6, in blue. This is the same as the previous two systems but incorporates three oscillators instead of two. This system is set up to generate chords.

The above 4 images show system 7, in grey. This system is the same as system 6 but with four oscillators and the click generator removed. It also incorporates the random number based melody generator which I described above.

The final two images show the Arduino inputs and the startup automation.