HSS8121 – Post-Disciplinarity

The themes of post-disciplinarity that Gabriella Arrigoni explored in her other lecture became another fundamental part of my proposed piece.

In her lecture, Arrigoni discussed a number of agencies that draw on creative practitioners in work that extends beyond art. For example, the Wellcome Trust (http://www.wellcome.ac.uk/) commissions artists to produce works and aid in research related to health. These agencies tend to take a holistic view of practice, rather than the target-based one an arts organisation might; they value the overarching contribution that creative practitioners make to the field, and are confident that the research outcomes of this contribution will become clear over time rather than being immediately obvious.

This post-disciplinary approach encouraged me to think outside the immediately relevant areas of my practice, such as open-source digital technologies, a subject I return to time and time again. During Transmediale I attended a number of talks, workshops and conversations on the subject of privacy and surveillance in the digital space, exemplified by the festival’s Anxious to Secure stream. I have a long-standing interest in surveillance in the digital space, having followed groups such as the EFF (https://www.eff.org/) and the Open Rights Group (https://www.openrightsgroup.org/) for some time. While artistic responses to surveillance culture are nothing new, in the spirit of post-disciplinary practice I chose to explore themes of surveillance in my final work by looking at facial capture technology. The use of facial capture technology by government agencies has been investigated by artists looking to subvert it: for example URME (http://www.urmesurveillance.com/), a ‘generic face mask’ designed to thwart surveillance cameras linked to massive facial recognition databases by transforming your face into that of Leo Selvaggio, the founder of URME.

Arrigoni’s presentation stressed the importance of working outside of your discipline, and this is something I will carry forward to my final piece, which will touch on the subject of surveillance by employing some techniques not uncommon to ethical hackers and private investigators.

HSS8121 – Final Proposal and Failure

Based on a session we did for DMS8013 on computer vision and face detection, my proposal in response to the open call for this module involved a system designed to capture the faces of members of the public in a gallery space and use them to create an ‘average face’ of gallery-goers in real time. However, I noticed early on that the OpenCV-based face detection software I was using was very prone to error, recognising a number of background features of an image as faces and registering them in the program as such.
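
As a rough illustration of the kind of detector involved, here is a minimal sketch of Haar-cascade face detection using OpenCV’s Python bindings. This is not the code from my proposal: the cascade file path, webcam index and detection parameters are assumptions for the example, and it is exactly this kind of setup that happily marks background features as ‘faces’.

# Minimal sketch of OpenCV Haar-cascade face detection (illustrative, not the proposal code).
# Detections are (x, y, w, h) rectangles; some of them will be background, not faces.
import cv2

# path to a local copy of the cascade XML (assumed; it ships with OpenCV)
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
capture = cv2.VideoCapture(0)  # webcam index 0 assumed

while True:
    ok, frame = capture.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()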

While this was initially a cause for concern, since it meant the program would produce a number of false positives that would have to be filtered out by another part of the program in order to give ‘good’ results, I remembered the kinds of ‘errors’ Alexia had shown us. Alexia’s presentation of error was one of humour, with a number of her video examples in particular presented with an almost sardonic attitude towards her former self. It was for this reason that I chose to incorporate these computer vision errors into the fabric of my work. Rather than design a system specifically to exclude these errors, which would potentially introduce more failures, and attempt to brush over the fact that the system I would build would be ‘incomplete’, I chose in the proposal to show the misrecognised faces as part of the ‘average face’, incorporating incidental background material.
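
To make the idea of folding these errors into the ‘average face’ concrete, the following is a minimal sketch of how such a running composite might be built, assuming the same kind of detector as above: every detected region, genuine face or misrecognised background, is resized to a common size and accumulated into a running mean. This is an illustrative outline rather than the system described in the proposal.

# Sketch of a running 'average face' that deliberately keeps every detection,
# including false positives, rather than filtering them out.
import cv2
import numpy as np

FACE_SIZE = (128, 128)  # common size for every captured region (assumed)
running_mean = np.zeros((FACE_SIZE[1], FACE_SIZE[0], 3), dtype=np.float64)
count = 0

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
capture = cv2.VideoCapture(0)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(grey, 1.1, 3):
        region = cv2.resize(frame[y:y + h, x:x + w], FACE_SIZE)
        count += 1
        # incremental mean: new_mean = old_mean + (sample - old_mean) / n
        running_mean += (region.astype(np.float64) - running_mean) / count
    cv2.imshow("average face", running_mean.astype(np.uint8))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()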

Incorporating error in this manner also allowed me to use humour as an artistic device. My intention was that audiences seeing their faces mashed up with randomly misrecognised ‘faces’, along with some very poorly lit faces and people who did not realise they were being captured, would get quite a few laughs out of the work, but also have their attention drawn to the work itself. Because the work I planned is fundamentally about surveillance, its humour should not downplay the gravity of the topics it raises, but the integration of the kinds of failures Alexia discussed in her presentation has become a major part of the proposal I submitted for this module.

HSS8121 – Practice Based Research

Gabriella Arrigoni’s session on Practice Based Research was particularly interesting for me as a creative practitioner who currently focuses most of their time on the creation of works, with relatively little time invested in theoretical writing around those works. To me, this approach to research as practice based is very much in the same vein as the themes of Research through Design that John introduced during his lectures. Culture Lab has a strong emphasis on a practice-based approach to conducting research more broadly too, and Gabriella’s session served to underline the key concepts of this type of research.

The production of prototypes and artefacts as research is something I was involved in as part of Unbound, a work that evolved over the course of a week and was passed between a number of practitioners at Culture Lab. Each practitioner was given a working day and a set of resources with which they could interact to form the work. When I took part (about halfway through the process), a number of disciplines were already involved, including databending, painting, coding in Processing, video work, physical sculpture and audio work. During my day on Unbound, I filtered the work that already existed through my own set of practices, and at the end of the day offered up the resulting artwork (and documentation) to the next person. This resulted in a shifting prototype which exhibited the individual hallmarks of each practitioner who had come into contact with it.

Importantly, Gabriella documented the process at a number of stages and conducted an interview with me during my work day about the kinds of processes that the work had undergone and the reasons for my selection of those processes. This was then formulated for showing at the Loops Layers Lines show at Culture Lab. This practice based research used actual practice as a basis upon which to explore themes of cross-disciplinary collaboration, and it was a very enjoyable piece to take part in. The prospect of conducting research not by reading but by doing is something that has always excited me; as a creative practitioner I thrive on the act of creation, and Gabriella’s explanation of using this AS research is something I will be taking forward into my final piece.

HSS8121 – Video Analysis

John’s session on Video Analysis was particularly useful to me, especially as I am beginning to have far too much documentation of old gigs etc. to do anything meaningful with.

For his session, John detailed a number of useful features of video analysis, as well as the kinds of established codes that operate around the field. Video analysis involves the observation of naturally occurring phenomena on a broad scale (ideally with a camera that can apprehend the whole environment), much like ethnography. The footage can be transcribed play-by-play, or simply watched to note any phenomena that only become apparent when the situation is viewed in this way. This can be used as a way to analyse in minute detail the situation that unfolded, picking out emergent characteristics that can inform your understanding of those events.

This is not something I had ever considered doing with my existing documentation. As I frequently perform at events that are video recorded for funding and research purposes (such as Algoraves for the AHRC-funded TOPLAP), I often end up with video documentation of my performances without having any explicit use for it. In this session, John explained how these techniques could enhance my understanding of the documentation I have available, and frame it as a valid research concern.

The exercise portion of John’s session involved examining some raw documentation of our own work. I examined a static video taken from the back of a live coding set I performed at Power Lunches, as well as the documentation of my undergraduate final recital at Culture Lab (https://www.youtube.com/watch?v=OcwS6x4KRIU). The class then dissected the video, noting when the audience performed certain actions (dancing, chatting, paying/not paying attention) and when I performed certain actions (executing lines of code, nodding my head, using body language). While I was tacitly aware of these features of my performances, I had never considered taking an analytic approach to the minutiae of performing for the public, and this has the potential to become a broader research concern.

I am aware that these techniques have been performed by Alex McLean, particularly on his video performing with duo Canute at Jubez Karlsruhe (https://www.youtube.com/watch?v=uAq4BAbvRS4), so video analysis of live coding concerts is evidently an established research concern among academics in the field.

I’d like to do some further research into the kinds of software I can use for video analysis of existing documentation. One example I found was atlas.ti (http://atlasti.com/); however, I’d like to keep my commitment to using free and open-source technologies, so I am currently searching for an open alternative.

HSS8121 – On Failure – Alexia Mellor’s Presentation

For her HSS8121 presentation, Alexia Mellor gave an account of her encounters with failure in producing artworks. Showing a number of picture and (sometimes excruciating) video examples, Mellor detailed projects that had not come off as intended, some with hilarious results. While a talk about failure could easily have been a pessimistic one about damage control and managing failure as a bad thing, Mellor’s focused on productive uses of failure, and on how best to approach failure as a creative exercise.

The task Mellor gave us (as is customary in HSS8121 sessions), actively trying to create a failure, was very useful. While a number of previous tasks had focused on producing bespoke briefs for potential clients, this session encouraged out-of-the-box play around the explicit notion that nothing useful was to come out of it. While I didn’t necessarily learn any new skills per se, this emphasis on play for play’s sake as a creative process (or indeed anti-process) opened up a number of avenues I had not previously considered as valid options to explore!

This session touched on similar themes to a group residency I did at Access Space in Sheffield in November (https://seancotterill.com/2015/11/26/sonic-pattern-residency-access-space/), where the emphasis was on the general process of creativity rather than on producing a specific outcome. While the residency did end in a public performance (specifically because we had chosen to produce work through it), there was no emphasis on the production of that work in particular.

From this residency I produced a system to perform live sonification of the movement of an embroiderer; other projects included Cat++ by Nora O’Murchú (http://thecreatorsproject.vice.com/en_uk/blog/cat-visual-coding-language).

I went on to use the types of techniques Mellor described in her presentation for my final research proposal for this module, which I will detail in a later post.

DMS8013 – Presentation, Nudity, Transparency

‘tome.’ is an intense piece, dealing with death, Anglican funerals and embodied memories. As a result, a number of aesthetic decisions were taken to complement the gravity of the performance, one of them being the decision to have Charlie perform topless.

Originally (and for RE/CEPTOR), the plan was for Charlie to perform wearing a long-sleeved shirt, with all electronic components and wires (except for the accelerometers themselves) hidden underneath it. We realised late in the development stage that by hiding the mechanics of Charlie’s interaction with the technology we were placing a barrier between the audience and the essence of the work. Without knowing that Charlie was wearing a sensing unit, the performance came across as bizarrely technokinetic, and the method of interaction became a source of intrigue in itself (‘how is he doing that?’, ‘is he wearing some kind of sensor?’, ‘is there a camera somewhere?’), which became a barrier to watching the work itself. Since motion-sensing technology is used in a very upfront way, purposefully obscuring the ‘flesh and blood’ of this technology was counter-productive, so we took the decision to make abundantly clear exactly what the technological element of the performance involved.

This is in line with my own personal philosophy of openness about process: I like to expose the inner workings of things where possible, whether it is circuits in installations or code in music performances. The bare-bones nature of this technology also looks arresting: a black cross of tape across Charlie’s chest attaching a ‘central unit’ to him, with wires snaking around his arms to his wrists. While the decision to use partial nudity was one we arrived at very late, I feel it enhances the performance by stripping some of the potential ‘bullshit’ away from the digital interaction approaches I have taken to realise it.

DMS8013 – Interaction and Live Foley

DMS8013 – ‘Live foley’ Water

A significant part of Charlie’s dialogue during the performance focuses on an interaction with a river under moonlight, and for part of the performance I wanted to give Charlie the ability to simulate water sounds by performing a hand gesture reminiscent of splashing water. This would then be used during the most intense section of the performance, when Charlie is roaming around the audience and using rapid hand movements to create multicoloured strobing effects with the previously discussed movement-to-light transfer functions.

I chose to implement this by writing a function in SuperCollider to analyse the difference between successive sensor values received from the Arduino:


(
// prev holds the previous sensor value, captured by the closure below
var prev = 0;
// q returns the change since the last reading, or 0 if that change is below the threshold of 300
q = { |in|
var result = in - prev;
prev = in;
if ( result < 300, {0}, {result});
};
)

This function compares the current value received to the previous value received: if the difference between them is greater than 300 the difference is returned, and if it is less than 300, 0 is returned. This forms a primitive kind of gesture detection, which only returns values when a significant movement is detected.


// on every incoming \x1 message, pass the value through q and use the result to set the amplitude of ~splash
OSCdef(\xdiff, {|msg| ~splash.set(\amp,q.(msg[1]))},\x1);
// ~splash loops the river recording, scaled by a lagged amplitude and slowly panned
~splash = {arg amp = 0.7; Pan2.ar(PlayBuf.ar(1,d[\water][2],1,1,0,1,0)*Lag.kr(amp/100,0.5),SinOsc.kr(0.1,0,0.7))};

The result of this function is then fed into ~splash to control amplitude, with a short Lag UGen applied to these values to smooth out the transitions between them. ~splash is a looping sound file of a contact microphone recording of the free-running River Derwent (near where I grew up), and the modulation of its amplitude by this function results in short ‘splash’ amplitude curves, which make the sound of the running river simulate the sound of water being splashed. When no gesture is detected, there is no sound.

This still stands out to me as an odd success. I expected to use gesture detection to scroll through a sound file of water being splashed in order to ‘create’ a splash; however, simple amplitude modulation has proven much more successful.

DMS8013 – Interaction and Granular Synthesis

DMS8013 – Movement and Granular Synthesis

Once movement data had been received in SuperCollider, I investigated a number of methods of using this data as part of the performance, one being the use of granular synthesis in relation to movement.

A very handy implementation of this came in the form of SuperCollider’s Warp1 UGen (http://doc.sccode.org/Classes/Warp1.html). Warp1 takes a sound file and repeatedly scrolls over a very short section of a given location in that sound file at a particular speed, and can be used to do anything from slowing down and subtly warping sound files all the way to wild, random sputtering.

The ‘text manipulation’ section of my SuperCollider code (as denoted by a code comment around line 150: https://github.com/theseanco/tome/blob/master/SuperCollider/SHOW%20CODE/Sound.scd) applies a linear transfer function mapping the X, Y and Z axes of accelerometer 1 to the playback position of a sound file, one axis per copy of the file (a rough sketch of this kind of mapping follows below). The sound file in question is an excerpt from Charlie’s speech at the beginning and end of the performance:

‘Leave floor, leave shore, leave surface, leave form’

with some silence at the beginning and end of the sound file. During the ‘prayer’ section of the performance, where Charlie is most explicitly performing with this digital interactive system, I gradually bring in these three copies, eventually resulting in three identical mantras being ‘read’ by Charlie’s movement.
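
As a rough sketch of the linear transfer function involved, the following shows how a raw accelerometer reading might be mapped to a normalised playback position before being handed to the granulator. The 0–1023 input range is an assumption for illustration; the actual mapping lives in the SuperCollider show code linked above.

# Illustrative sketch of a linear transfer function from a raw accelerometer
# reading to a normalised playback position (0.0 - 1.0). The 0-1023 input
# range is assumed; the real mapping is done in the SuperCollider show code.

def linlin(value, in_min, in_max, out_min, out_max):
    """Map value linearly from one range to another, clamped to the input range."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# e.g. a raw X-axis reading of 700 becomes a playback position of roughly 0.68
position = linlin(700, 0, 1023, 0.0, 1.0)
print(position)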

What makes this interaction between movement and sound scrolling exciting is Charlie’s attention to detail in the way he interacts with the sound. Charlie’s movements convey an almost tactile sensibility in dealing with the sound, building a relationship with it and treating it as an object: initially playing gently with audible words, and building up (along with light manipulation of corresponding intensity) into scattered, discordant babbling.

DMS8013 – Python

DMS8013 – Arduino to OSC via Python

https://github.com/theseanco/tome/blob/master/Python/Showcode/SerialToOSC.py

Based on some code I wrote for the ‘Brick of Internet’ project, I have made a simple Python script to be run in the terminal which will take Arduino serial data in the form

msg= int int int int int int

and convert it to an array of values (using a space as a delimiter), which are then converted to the correct form (in this case ints) and sent to SuperCollider as individual values to be dealt with from there.

This code can easily be adapted for other purposes by changing how the data is cast in Python (as an int, float, string etc.) and by changing the number of OSC messages sent, simply by copying and pasting the required number of data fields (starting from index 1; index 0 of the array is the start-of-message marker, which is checked with a regular expression to filter out garbage serial strings). 57120 is the default language port for SuperCollider, and this should also be changed to suit the application.

In my case, I am receiving six values, X Y and Z for both accelerometers.

The address of these values is specified in each message, like so:

oscmsg = OSC.OSCMessage()       # create an empty OSC message
oscmsg.setAddress("/x")         # the OSC address SuperCollider listens for
oscmsg.append(int(result[1]))   # cast the first data field to an int and add it to the message
c.send(oscmsg)                  # send via the OSC client connected to sclang

This Python script is robust in use: once it is running in the background in the terminal, it continually attempts to read serial data from the Arduino and pass anything valid on to SuperCollider, so if the suit drops data while Charlie is wearing it the script will not crash or need to be restarted.
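
As a rough outline of what the full script does (the real version is at the GitHub link above), the sketch below shows the read-parse-send loop using pyserial and pyOSC. The serial port path, baud rate and single /x address are assumptions for illustration; the actual script sends all six fields as individual values, as described above.

# Illustrative sketch of the Arduino-to-OSC bridge (not the exact show script).
# Serial port path and baud rate are assumptions; adjust to suit your setup.
import re
import serial   # pyserial
import OSC      # pyOSC

client = OSC.OSCClient()
client.connect(("127.0.0.1", 57120))   # default sclang port

port = serial.Serial("/dev/ttyACM0", 9600)

# lines are expected to look like: msg= 312 487 509 123 456 789
pattern = re.compile(r"^msg=")

while True:
    line = port.readline().strip()
    result = line.split(" ")
    # discard garbage serial strings that don't start with the message marker
    if not pattern.match(result[0]):
        continue
    try:
        oscmsg = OSC.OSCMessage()
        oscmsg.setAddress("/x")
        oscmsg.append(int(result[1]))   # change the cast here (int, float, ...) to suit the data
        client.send(oscmsg)
        # further fields (result[2] onwards) would be sent to their own addresses in the same way
    except (ValueError, IndexError):
        # a dropped or malformed reading: skip it rather than crash
        continue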
