HSS 8123 – FINAL PROJECT – POST 8 – THE GLITCH STUDIES MANIFESTO

Final Project, HSS 8123

This post deals with issues raised by The Glitch Studies Manifesto by Rosa Menkman, and my own responses to it. The manifesto can be found HERE.

As I’ve mentioned previously, one of the emergent themes of this work has been that of glitch. While glitch studies and glitch art form a large field, I’ve decided to focus for the time being on one of its more influential works, The Glitch Studies Manifesto by Rosa Menkman. This post isn’t intended as a rigorous academic essay or a deep enquiry into the field; its intention is merely to analyse and respond to some of the themes which emerge in the paper and, for me personally, to help clarify and crystallise my thoughts.

Whenever we use technology we are unknowingly engaging with a deep system of processes which construct what we perceive as the user experience. The image that we see when we open a .jpg file belies the complex code that allows us to view that image. Glitch studies, and glitch art in particular, focus on the boundary between that user experience and the underlying processes, allowing us a glimpse at the ‘man behind the curtain’ and emphasising the creative potential of those glimpses. In essence, glitches make the transparent apparent by bending and breaking the processes in question to create a kind of destructive generativity, inviting the creation of new forms from that which has been destroyed.

“Even though the constant search for complete transparency brings newer, ‘better’ media, every one of these new and improved techniques will always have their own fingerprints of imperfection. While most people experience these fingerprints as negative (and sometimes even as accidents) I emphasize the positive consequences of these imperfections by showing the new opportunities they facilitate.” – The Glitch Studies Manifesto

What Menkman is primarily referring to here is the development of hardware technology. She is essentially saying that the search for perfection is quixotic because imperfections are always inherent. Those imperfections, however, should be seen as a positive because of the potential for creativity that they carry with them. For me, this chimes with a lot of my own work within music and the lo-fi music aesthetic in general. While it’s nice to have the option of a noiseless channel, it is not always desirable, because with hum and hiss comes an engagement with a certain set of creative ideas. I think there’s an honesty to noise: an acceptance that nothing can be perfect and that it would be foolish to try and make it so.
In the rewrite of The Glitch Studies Manifesto for Video Vortex 2 (HERE), Menkman goes on to clarify this position by describing ways in which different types of video technology have been glitched for creative effect: burn-in on plasma screen TVs, dead pixels on LCD monitors and the like. The exploitation of flaws in hardware is a little beyond the scope of my project, though, so I’m not going to focus on it here.

Above, I mentioned the borderline between the user experience and the underlying processes. This is where glitch art lies: at the border between function and dysfunction, as Menkman describes it. Not in a binary sense, but combining the two to create new senses of creative flow and to challenge existing paradigms. My question here is: if you are creating work that uses function to flirt with dysfunction (a work programmed to create glitches, for instance), is it really dysfunction? I would argue that it is not. The very definition of dysfunction is something that is not working the way it is intended, and by creating a system that is intended to creatively break, even though the end result of that break will be random (within certain confines), that system is still functioning as intended. It’s an interesting paradox and one which I’ll need to look into in more detail (though I can see it being a bit of a rabbit hole!).

The image at the end of the document is one I’ve found particularly interesting, particularly the distinction between a ‘design’ glitch and a ‘true’ glitch. Here, Menkman is a little unclear: I’m not sure whether she’s referring to a ‘design’ glitch in the context I’ve mentioned above or in the sense she describes below. Myself, I would place the definition below closer to the ‘order’ end of the spectrum.

“Nevertheless, some artists do not focus on the procedural entity of the glitch. They skip the process of creation-by-destruction and focus directly on the creation of a formally new design, either by creating a final product or by developing a new way to recreate the latest archetype. This can for instance result into a plug-in, a filter or a whole new ‘glitching software’. This form of ‘conservative glitch art’ focuses more on design and end products then on the procedural breaking of flows and politics. There is an obvious critique: to design a glitch means to domesticate it. When the glitch becomes domesticated, controlled by a tool, or technology (a human craft) it has lost its enchantment and has become predictable. It is no longer a break from a flow within a technology, or a method to open up the political discourse, but instead a cultivation. For many actors it is no longer a glitch, but a filter that consists of a preset and/or a default: what was once understood as a glitch has now become a new commodity.” – Ibid

This quote also raises another theme of the paper, that of breaking without breaking. In the example above, the user who focuses on the end product and skips the creation-by-destruction element is not breaking anything at all: their After Effects glitch filter is working exactly as intended (though the audience may experience it differently, as I’ll touch on below). In what might be referred to as ‘true’ glitch, however, the artist focuses on breaking something without breaking it. I wonder where the limits are in such practices. I could quite easily design my project so that it overloads by taking in far too much input (see my previous post!), makes a big nasty noise and then crashes the computer. Obviously, though, that wouldn’t be practical for the project I want to create, one which focuses upon the end user experience but gives glimpses, via the medium of glitch, of the processes which underlie that experience. So my question here is: where does the limit lie? I suppose that this is something that is different for every project. We can glitch something (or program something to glitch) just enough to disrupt the established flows, or we can glitch it so heavily as to completely destroy them while still holding together enough to create a flow entirely defined by that glitch. This is a design consideration and one I have yet to decide on. I’ve a feeling it’s going to be defined by the audio, with the visuals designed to reflect that.

Further to the above quote, Menkman states:

“But for some, mostly the audience on the receptive end, these designed errors are still experienced as the breaks of a flow and can therefore righteously be called glitches. They don’t know that these works are constructed via the use of a filter. Works from the genre ‘glitch art’ thus consist as an assemblage of perceptions and the understanding by multiple actors. Therefore, the products of these new filters that come to existence after (or without) the momentum of a glitch cannot be excluded from the realm of glitch art.” – Ibid

This quote further defines her meaning when she talks about ‘design’ glitches. What’s really of interest for me here is the part about the audience’s perception of the glitch. Without knowing the underlying processes which have created what they perceive as a glitch, the audience cannot know whether or not that glitch is a genuine break in flow. This, for me, raises issues relating to actor-network theory, which I focused on in my essay for Public Making. While this is not a primary concern for my project, as I will be focusing on creating my glitches as authentically as possible, I think it will be useful to look at the end user experience through that lens. What this alludes to, in essence, is that an important (if not the most important) part of all art is how the audience perceives it. Some work is deliberately ambiguous so as to allow the audience to project their own conclusions upon it, and some is very clear and obvious. I need to decide where I want my work to exist within that spectrum, though by its very nature I feel that it will be somewhat ambiguous. This also raises questions about how obvious the user’s interaction with the piece should be. Do I want it to be very obviously controlled by their movement (which may create heuristic interactions but risks going too far and making it seem trivial), or do I want it to be more mysterious (which carries the risk of it seeming like too much of a black box)? I’ve a feeling that these questions will answer themselves as the sound design evolves, but my initial instinct is to veer towards the black box end of the spectrum. I feel that there’s something magical about those sorts of experiences and that’s the overall experience I want to create.

This project never set out to answer any very deep questions about art and design, or to engage with anything beyond creating the end user experience. Through the process of designing it so far, however, I’ve found that that’s virtually impossible to do! That process alone has raised a number of concerns that I’ve had to engage with on a theoretical level, and I think that, so far, this has been beneficial to the project overall. What I will seek to do now is define the design questions that need answering as simply as possible in order to clarify my way forwards. I’ll do that in my next post.
Also, as something of an addendum, I’ve realised that I haven’t yet written up my thoughts on the first body of theory I read, that of user interaction with art installations in museums and galleries. I’ll get around to that soon.

HSS 8123 – FINAL PROJECT – POST 7 – 8 ARDUINO TEST

Uncategorized

This post is in reference to the first full scale test of my installation using 8 Arduinos and 8 ultrasonic sensors.
I was pleased to find that all of the sensors worked together, gave me accurate readings and fed their data into Pure Data without any problems. The resultant sound was an incoherent mess, but that’s to be expected at this stage because I haven’t done much work on the sound design yet.

This image shows the Arduinos wired up prior to testing.

I encountered two problems. First, because the Arduinos are all programmed to send readings every 50 ms, many notes are put through the DAC simultaneously, which results in distortion as the notes pile up on each other. I’ll iron this out by slowing down the rate at which data is sent from the Arduinos and by carefully finalising the volume settings in PD. The other main problem was that I started to notice a bit of interference when two sensors were placed facing one another. This was easily fixed by moving either sensor even a small amount, so I don’t foresee it being a problem in the final installation, given that the space will be much bigger than my test space and the Arduinos will be at different heights.
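As a rough sketch of the first fix (not final code; the 150 ms delay and the 3 cm change threshold are placeholder values I’d need to tune by ear), the idea is to lengthen the delay between readings and only send a value when it has actually moved, so PD isn’t flooded with near-identical notes:

#define echoPin 7 // Echo Pin
#define trigPin 8 // Trigger Pin

long lastSent = -1; // last distance actually sent to Pure Data

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  // take a reading exactly as in the test sketch from my earlier post
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long distance = pulseIn(echoPin, HIGH) / 58.2;

  // only report meaningful movement, so near-identical readings
  // don't pile up as simultaneous notes in PD
  if (abs(distance - lastSent) >= 3) {
    Serial.println(distance);
    lastSent = distance;
  }

  delay(150); // slower reporting rate than the original 50 ms
}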

As I mentioned previously, my current thinking is to have each sensor operating a PD sub-patch ranging from something very simple with a very obvious degree of interactivity to something very complex which works much more mysteriously. This will figure into my patch design, my Arduino programming, and my final sensor selection.

Here’s a video of me testing out the 8 Arduino array. It’s very noisy but, crucially, everything worked as it was supposed to.

HSS 8123 – FINAL PROJECT – POST 6 – FURTHER VISUAL CONSIDERATIONS

Uncategorized

After speaking with my tutor I’ve been further considering some of the visual aspects of my piece, specifically the code that I’m using for the bulk of the animation. The main issue that was raised was the fact that I’m using nonsense code. My tutor’s position was that by using it I’m almost engaging in a pastiche of digital culture, and that’s not necessarily something that I want to do. We discussed a few alternatives, including removing the code altogether and focusing on just using glitches. The effect of that on the piece as a whole would be to engage wholly with glitch in both the audio and visual aspects. This could provide some interesting directions in terms of sound design, but could also result in the piece being somewhat noisy and difficult to listen to, which I’d rather avoid. The other idea we discussed was to use the code from the Pure Data patch itself. This seems like the way to go, as there’s a lot of code in that patch and I think it’ll look quite good visually. The overall theme that’s emerged from these discussions is one of externalising the internal aspects of the patch and the sound itself, and using glitch as one element rather than focusing wholly upon it.

In terms of audio this raises a few interesting angles, specifically that the audio should represent the process of making itself. I’m not sure quite how to go about this yet, but my initial thought is that I could break it down into patches ranging from the very simple to the very complex, with a continuum between them so that the development of the sound can be heard.

I’ll be putting the visual elements on hold for the time being because I want the code in the animation to be the actual code used for the audio. I’m also going to have to consider how much glitching to put into it so that there is a degree of parity between the audio and the visuals.

My next step is to test the full 8 Arduino rig and write up some critical reflection of what I’ve done so far so I expect that those will constitute my next two posts.

HSS 8123 – FINAL PROJECT – POST 5 – I FRAME DATAMOSHING

Final Project, HSS 8123

As I mentioned previously, as well as datamoshing through data corruption I’ve decided to do some I-frame datamoshing. This is a process of removing the I-frames (keyframes) from video files, which causes the compression’s motion prediction to be applied to the wrong reference frames and gives a melting effect. To do this I’ve been using Avidemux 2.5.6 (later versions don’t work) to find and highlight the I-frames and delete them. Like many of the datamoshing processes I’ve worked with before, this is very simple but very time consuming. Fortunately, I found an AutoHotkey script HERE which automates the process.

I haven’t saved or uploaded any of my I-frame moshes because I’ve decided to take a slightly different tack with my visuals, which I’ll detail in the next post. In the meantime here’s an I-frame mosh by someone else.

HSS 8123 – FINAL PROJECT – POST 4 – VISUAL ELEMENTS CONTINUED & DATAMOSHING

Final Project, HSS 8123

This post deals with the further development of the visual elements of my piece, specifically datamoshing. As I mentioned in a previous post, I’ve decided to go with a glitch aesthetic for both the audio and visual parts of my piece. In order to engage with that idea in an authentic manner I’ve decided to ditch the previous glitch effects that I created in After Effects and use a process called datamoshing: essentially corrupting the data of image and video files to create interesting effects.
There are several methods one can use to do this; let’s start with image files.

One of the quickest and easiest methods of glitching an image file is to convert a .jpg into a .bmp (I used Paint 3D in Windows) and then open it in a text editor, in my case Microsoft WordPad. You can then either edit the resulting random text or just go ahead and save it and open it again in an image viewer. It doesn’t always work, but when it does it can be very interesting and effective.

So while the input image might look like this:

The output might look like this:

Or it might look like this:

And you end up with something as drastically different as this:

So how does it do this? It’s to do with the way that WordPad handles line breaks. Line breaks are control characters which tell the text editor where to end one line and start a new one. An image file isn’t text, but some of its bytes happen to look like line-break characters, and when WordPad saves the file it normalises them to make the ‘text’ legible. That adjustment to the underlying bytes corrupts the image data, and that corruption produces the effects you can see above.
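The same idea can be sketched in code. This is only an approximation of what the text editor does, not a description of WordPad’s internals; the 54-byte header size assumes a standard .bmp, and the file names are placeholders. It inserts a carriage return (0x0D) before every pixel byte that happens to look like a line feed (0x0A), which shifts all of the data that follows:

#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main(int argc, char* argv[]) {
    if (argc < 3) {
        std::cerr << "usage: bmp_mosh input.bmp output.bmp\n";
        return 1;
    }
    std::ifstream in(argv[1], std::ios::binary);
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());

    const size_t headerSize = 54; // standard BMP header, left untouched so the file still opens
    if (data.size() <= headerSize) return 1;

    std::vector<unsigned char> out(data.begin(), data.begin() + headerSize);
    for (size_t i = headerSize; i < data.size(); ++i) {
        if (data[i] == 0x0A)     // a pixel byte that looks like a line feed...
            out.push_back(0x0D); // ...gets a carriage return inserted before it
        out.push_back(data[i]);  // everything after the insertion point shifts along
    }

    std::ofstream(argv[2], std::ios::binary)
        .write(reinterpret_cast<const char*>(out.data()), out.size());
}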

Corrupting video files is a similar, if potentially more time-consuming, process. There are a couple of methods for doing this; one involves removing and adjusting the ‘I’, ‘P’ and ‘B’ frames. I haven’t done this myself yet, so I can’t really go into detail, but I’ll be doing it once I’ve put the final video file together, so I’ll cover it then.
The technique I’ve used involves corrupting the file by replacing parts of its data with random nonsense using a hex editor. This is a very time-consuming process, and I spent an afternoon working on one video file in order to get the amount of glitch that I wanted. Then I discovered a Java program called ByeByte which automates the process using a few simple command line commands, so I used that on the rest of the files.
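The core of what the hex editor (or ByeByte) is doing can be sketched roughly as follows. This is only an illustration of the idea, not ByeByte’s actual code; the number of corrupted bytes and the amount of header to skip are arbitrary assumptions:

#include <fstream>
#include <iostream>
#include <iterator>
#include <random>
#include <string>

int main(int argc, char* argv[]) {
    if (argc < 3) {
        std::cerr << "usage: corrupt input.avi output.avi [hits]\n";
        return 1;
    }
    std::ifstream in(argv[1], std::ios::binary);
    std::string data((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());

    const size_t headerSkip = 1024;                          // leave the start of the file alone so it still plays
    const int hits = (argc > 3) ? std::stoi(argv[3]) : 200;  // how much glitch you want
    if (data.size() <= headerSkip) return 1;

    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<size_t> pos(headerSkip, data.size() - 1);
    std::uniform_int_distribution<int> value(0, 255);

    for (int i = 0; i < hits; ++i)
        data[pos(rng)] = static_cast<char>(value(rng));      // overwrite a random byte with random nonsense

    std::ofstream(argv[2], std::ios::binary).write(data.data(), data.size());
}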

Here’s one of my input files:

And here’s the output:

Overall I’m very happy with the output. While I did run into one or two problems rendering things in the wrong resolution, along with one or two other rookie mistakes, I’ve managed to fix them and am currently at the stage where I have all of the individual clips for my animation ready. Those clips consist of still images processed using WordPad and several variations of the scrolling code shown above. The next step is to stitch them together into a 15 minute animation in After Effects, render that, and then do some I-frame moshing to help smooth the transitions between the different clips. I’m hoping to have the animation finished before the end of this week, so I’ll post about it then. I’m also planning a post covering Rosa Menkman and other glitch art theorists with some critical analysis of the ideas at work.

HSS 8123 – Final Project – Post 3 – Initial thoughts on the visual elements

Final Project, HSS 8123

As a companion piece to the audio installation I am also putting together an animation which will be projected onto the floor. I felt a strong visual element was a vital part of designing the overall experience. Although the main focus of the piece is on the audio, these visual elements are an important part of the user experience: they encourage interaction, add to the atmosphere and, I hope, invite heuristic and playful engagement.
The animation will consist of streaming code and occasional images which will be heavily glitched and effected. The overall idea is to create the sense of being inside a system which is responding to the user’s input, though not necessarily directly or correctly.
My thinking on this was heavily influenced by the work of Ryoji Ikeda, particularly his Datamatics and The Transfinite.

It is these pieces that made me decide to project the animation onto the floor, because I like the way the light from the projections falls on the audience, giving a deeper sense of inclusion in the piece through light and shadow.
It will be a static piece and will not be linked to the music directly. However, the brain tends towards creating patterns that link audio and visual inputs, and it is this, along with the general thematic links of technologically created art, user interaction and the sense of being inside the piece, that (I hope) will combine to make the experience holistic.

Currently I am at the stage of putting together the assets for the animation using Photoshop and After Effects. I am not deeply familiar with either of these programs so I’ve found there to be a bit of a learning curve. Indeed, I’ve never made an animation before so I’m finding the whole thing to be a new, though enjoyable, experience.

The images which will be interspersed with the streaming, glitched-out code are all of roads. This works thematically, as it naturally gives a sense of movement and of being on a journey. There are many photographs online taken from the same perspective: standing in the middle of the road, looking into the distance. I’ve collected a number of these, offset the RGB layers in Photoshop, adjusted the colour saturation and brightness, and added a little grain distortion. I’ve done this to tie the images in with the overall glitch theme of a system not quite working as it should. The overall effect is one of washed out, abstract landscapes. When run back to back they are very satisfying, as the perspective is always the same but the landscape itself changes. The bright colours also give a nice counterpoint to the monochrome code. I’ve included some stills below and will upload video once I’ve got some ready.

These will have glitch effects applied to them in After Effects and cut in with the random streaming code. Sometimes it will be many images back to back for one or two frames each and sometimes it will just be one image for a few seconds.
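As an aside, the RGB offset step mentioned above is simple enough to express in code. This is just a toy illustration of the idea (it isn’t part of my actual Photoshop workflow, and the function name and buffer layout are my own assumptions): it shifts the red channel of an interleaved RGB buffer a few pixels to the right, which produces the familiar colour-fringing look.

#include <cstddef>
#include <vector>

// Shift the red channel of an interleaved 8-bit RGB buffer 'shift' pixels
// to the right, wrapping around at the edge of each row.
void offsetRedChannel(std::vector<unsigned char>& rgb,
                      size_t width, size_t height, size_t shift) {
    const size_t s = shift % width;
    for (size_t y = 0; y < height; ++y) {
        std::vector<unsigned char> red(width);
        for (size_t x = 0; x < width; ++x)
            red[x] = rgb[(y * width + x) * 3];                       // copy this row's red values
        for (size_t x = 0; x < width; ++x)
            rgb[(y * width + x) * 3] = red[(x + width - s) % width]; // write them back, shifted
    }
}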

HSS 8123 – Final Project – Post 2 – Initial Testing

Final Project, HSS 8123

This post deals with the initial testing phase of the sound portion of my installation. The main aim was to see if I could receive data from four ultrasonic sensors via four Arduinos simultaneously. If that was possible, the next phase would be to see if I could get it working with the full complement of eight. If it was not possible, I would have to figure out a new approach to the project, minimising the number of Arduinos and using several sensors per Arduino. Secondary aims were to see to what extent the sensors interfered with each other and to get a feel for how the piece might sound.

To start with I used the following simple Arduino program to get the sensors sending me the data I wanted.

 

#define echoPin 7 // Echo Pin
#define trigPin 8 // Trigger Pin
#define LEDPin 13 // Onboard LED

int maximumRange = 150; // Maximum range needed
int minimumRange = 0;   // Minimum range needed
long duration, distance; // Duration used to calculate distance

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(LEDPin, OUTPUT); // Use LED indicator (if required)
}

void loop() {
  /* The following trigPin/echoPin cycle is used to determine the
     distance of the nearest object by bouncing soundwaves off of it. */
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);

  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);

  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);

  // Calculate the distance (in cm) based on the speed of sound.
  distance = duration / 58.2;
  // Invert the reading so that closer objects give higher values.
  distance = map(distance, 0, 100, 100, 0);

  if (distance >= maximumRange || distance <= minimumRange) {
    /* If distance is out of range, wait 10ms. */
    delay(10);
  }
  else {
    /* Send the distance to the computer using Serial protocol, and
       turn LED OFF to indicate successful reading. */
    Serial.println(distance);
    digitalWrite(LEDPin, LOW);
  }

  // Delay 50ms before next reading.
  delay(50);
}

Then I uploaded that to four Arduinos which were wired up to ultrasonic sensors.

The Pure Data program I’m using for this testing is a stripped-down and simplified version of the program I made for Live Electronic Performance. I’ve added Arduino connectivity, obviously, and removed the generative aspects.

This image shows two of the oscillators with their attendant effects. The one on the right is designed to use the Arduino input as a modifier rather than a direct source. It’s hooked up to a click generator with delays that you can see in blue in the second image.

 

This is the click generator linked up to the second oscillator.

This shows the third and fourth oscillators.

This image shows the Arduino input section, which was designed by John. I’ve just taken that, multiplied it by four and given each one a different port number.

Here’s a video of my housemate testing out the 4 Arduino array.

I was pleased to find that this testing phase was a success on all counts. After a couple of small setup hiccups (entirely down to me not wiring one of the sensors correctly), the system ran without any problems for a long period of time. I found that I had to work quite hard to get the sensors to interfere with each other, so I’m fairly confident that won’t be too much of a problem so long as I’m careful with how they’re positioned. I’ll double check this when I come to do more testing. The sound itself was fine for now, but I’ll be making it a lot more nuanced once I can confirm that the 8 Arduino setup is definitely going to function the way that I want it to.

Overall I’m confident enough with these results to go ahead with doing a full test of an 8 Arduino rig next week.

HSS 8123 – Final Project – Post 1 – Overview

Final Project, HSS 8123

For my final project I’ve decided to combine two parts of the course I’ve really enjoyed: the multi-speaker setup I used for my piece in semester one, and the Pure Data work I did a lot of in semester two. I’ve decided to create a system using Arduinos and sensors strung around eight speakers in order to track movement within that space and feed the data into Pure Data, which will then translate it into musical sound. Essentially it’s a generative music system which relies on audience interaction as its input. It will be accompanied by an animation that I’m putting together using After Effects, which will provide a visual focal point and contribute to the sense of ambiance. Initially I was planning on showing it on two screens, but having considered it further I’ve decided to project it onto the floor instead.

Below is a floor plan which is very rough and not to scale but should be enough to give the overall idea. As I mentioned above I will not be using screens but will instead be projecting onto the floor.

I’m going to use a MOTU to split the output from Pure Data into eight and feed that output into the eight speakers.

In terms of Arduinos, I’m planning on using eight of them, powering one sensor each. This decision was made for a number of reasons. Firstly, it will be cheaper for me to use USB extension cables than to spend a lot of money on many feet of connecting wire to take the signal back to one Arduino situated at the computer. Secondly, my Arduino programming skills are quite weak, so it would be easier for me to write a simple program for each Arduino rather than a much more complex program which splits the data from each sensor. On the downside this means more moving parts, and hence more that can potentially go wrong, but the initial tests that I’ve done suggest that this approach could actually work rather well. I’ll detail those tests further in a future post.
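For comparison, here’s a rough sketch of what the more complex single-Arduino approach would involve (hypothetical pin numbers, and not code I intend to use): each reading has to be tagged with a sensor number so that Pure Data can split the incoming stream and route it to the right place.

const int NUM_SENSORS = 4;                      // however many sensors share this Arduino
const int trigPins[NUM_SENSORS] = {2, 4, 6, 8}; // example pin assignments
const int echoPins[NUM_SENSORS] = {3, 5, 7, 9};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_SENSORS; i++) {
    pinMode(trigPins[i], OUTPUT);
    pinMode(echoPins[i], INPUT);
  }
}

void loop() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    // trigger sensor i and time the echo, as in the single-sensor sketch
    digitalWrite(trigPins[i], LOW);
    delayMicroseconds(2);
    digitalWrite(trigPins[i], HIGH);
    delayMicroseconds(10);
    digitalWrite(trigPins[i], LOW);
    long duration = pulseIn(echoPins[i], HIGH, 30000); // timeout so one dead sensor doesn't stall the loop
    long distance = duration / 58.2;

    // prefix each value with its sensor number, e.g. "2 47",
    // so the receiving patch can route it to the right sub-patch
    Serial.print(i);
    Serial.print(" ");
    Serial.println(distance);
  }
  delay(50);
}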

The sensors that I’ve used so far in testing have all been ultrasonic sensors. The reason for this is that they’re simple and reliable and give good detailed data. I will be looking into using other sensors once I’ve tested an 8 Arduino setup and have had some time to further define what I want each module within the Pure Data program to do. Again, I will go into this in more detail when the time comes.

In terms of location within Culture Lab, I would like to put it in the live room of the sound studio. It is a very large piece which relies on ambiance and sound, and in order for that sound to be heard correctly it would ideally need to be in a soundproofed room. The projected animation also relies on a darkened room and, again, that is something that would be compromised if it were put in a larger, potentially brightly lit space.

So far my equipment list consists of:

1 x Alienware PC running Pure Data
1 x Motu with 8 outputs
8 x Arduino Unos
8 x Genelec 8040As
8 x Speaker stands
8 x HC-SR04 Ultrasonic sensors
1 x Projector
Approximately 50 feet of connecting wire (more than I think I’ll need).
Assorted speaker cables, power cables, extension leads etc.

HSS 8121 – PUBLIC MAKING – REFLECTIVE POST 4 OF 4

HSS 8121, Public Making

Scarlet discussed Tim and John’s paper entitled ‘Artistic Strategies for Working with Museum Collections, Technologies and Publics’. Her topic was ‘On the evidence of the paper how successful was the project?’
This discussion involved a lot of input from Tim, who told us about the practical creative interventions discussed in the paper: Interglacial, Erratics, Salient and Re-Entrant. We talked about what constitutes success in art, which is something I’ve been considering myself recently. The example I always use is Justin Bieber. Now, I don’t like Justin Bieber, but it’s impossible to deny that he and his team are very successful. I think it’s important to consider what a work sets out to do and then ask whether it achieves those aims. Justin Bieber and his team set out to create catchy pop records with a certain sound that appeal to a certain demographic, and to create a public image that goes along with that. They achieve that admirably, and so by that measure we must admit that Justin Bieber’s ‘art’ is very successful. A measure of success could simply be whether the artist is happy with the work, or whether the artist feels the message of the art is clearly put across to the audience. Obviously art is subjective, but personally I find it interesting to try to apply some objectivity to it and analyse it from a perspective which tries to define and consider the concrete goals of the work.

Kiran’s topic was the same as Scarlet’s and allowed us to further the discussion begun above. The way it was structured was quite interesting: he created a PowerPoint presentation on the fly and filled it in with answers provided by Tim. Public making in the context of academic discussion.

Jasmine’s topic was also based on the paper; she was asked to ‘describe public making activities and how they were presented’.
She went through the works described in the paper and we talked about how the projects evolved in tandem with the audience participation. Tim told us about that element of the project and how each participant added something unique, taking it in directions which were not expected. This harked back to Liam’s, Pete’s, Yaxi’s and Kyp’s seminars, where we considered the impact that the public and groups of people can have on publicly made projects. I think this is an important aspect of Public Making: actively involving the public and other people. After all, the public will always be involved with work that is in the public sphere, whether as observers or as active or passive contributors. It is important to consider that perspective, and that potential for input, when we talk about Public Making.