DMS 8012 – Live Electronic Performance – Post 1 – The Story So Far

DMS 8012, Live Electronic Performance

This post is a reflective one dealing with some of the topics we’ve covered so far in this module, specifically those I’ve found most useful and those which have influenced my project.

Session 1 covered a few fundamentals in Pure Data, specifically various ways of playing back samples and generating tones. We also put together a system using select nodes, metros and randoms which randomly triggers an automated volume fader. This was particularly useful for me, as select nodes can be used to alter the frequency at which sounds are triggered, something that has been central to both my piece for Public Making and the generative system I am building for this module.
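Since a Pd patch can’t be pasted as text, here’s a minimal Python sketch of that trigger logic; the [random 4] range and the [select 0 1 2] matches are illustrative, not the exact values from the session patch:

```python
import random

def select_route(value, matches):
    """Mimic a Pd [select] node: return the index of the matched
    outlet, or None for the rightmost 'no match' outlet."""
    for i, m in enumerate(matches):
        if value == m:
            return i
    return None

def metro_ticks(n, rng):
    """Each metro tick bangs a random 0..3; with [select 0 1 2] only
    three of the four possible values trigger the automated fader."""
    triggers = 0
    for _ in range(n):
        value = rng.randrange(4)                   # like [random 4]
        if select_route(value, (0, 1, 2)) is not None:
            triggers += 1                          # matched outlet bangs the fader
    return triggers

print(metro_ticks(100, random.Random(42)))
```

Widening or narrowing the list of matched values is what changes how often sounds are triggered.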


This is the aforementioned system which triggers an automated fader. Mine features 8 oscillators which create a complex layered tone.


This is a very simple patch which plays a sample from the disk. Nothing we haven’t looked at before but we did some interesting stuff to develop it.


This is a development of the above patch which triggers and loops a sample, in this case 0.wav. The looping part was something I found quite useful, though the rest was familiar ground. It appears that I’ve made some notes here which I’ve started to develop in the patch below.


What we did here was alter the above patch so that it would randomly play one out of three files. I haven’t added a line object to smooth out the fade; I guess I must have run out of time.
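As a sketch of the idea (the filenames and the ramp length are illustrative), the one-of-three selection and the missing [line~]-style smoothing might look like this in Python:

```python
import random

SAMPLES = ("0.wav", "1.wav", "2.wav")   # hypothetical filenames

def pick_sample(rng):
    """[random 3] into [select 0 1 2]: each outlet opens a different file."""
    return SAMPLES[rng.randrange(len(SAMPLES))]

def line_ramp(start, target, ms, steps=10):
    """Mimic [line~]: a linear gain ramp to smooth the fade between files."""
    return [start + (target - start) * i / steps for i in range(steps + 1)]

rng = random.Random(7)
print(pick_sample(rng))
print(line_ramp(1.0, 0.0, 20))   # fade out over 20 ms
```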
Overall this session was of mixed usefulness. It was worth it for learning about select nodes and trigger (bang bang etc.) nodes, both of which are central to the Pure Data work I’ve done since then.

Session 2 involved making field recordings at various places in and around the university, loading them into the computer’s RAM and scrubbing through them.


This is the patch that we worked on during this session. I’ve made some notes on it and integrated some FX in order to smooth playback and create a tape-like slow-down and speed-up effect.
I found the concept of loading things into RAM to be pretty useful and incorporated it into the VLF radio system which I built for Public Making. It’s an interesting and convenient way to use longer files, enabling me to play ranges of the sample rather than the whole thing as a one-shot.
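A rough Python analogue of playing a range of a table held in RAM (the region boundaries are made up for illustration) maps a 0..1 phasor value to indices within that region, as [phasor~] driving [tabread4~] does:

```python
def phase_to_index(phase, start, end):
    """Map a 0..1 phasor value to an index inside a chosen region of an
    array held in memory (like [phasor~] driving [tabread4~])."""
    return int(start + phase * (end - start))

# stand-in for a field recording loaded into RAM: one second at 44.1 kHz
table = list(range(44100))
region = (11025, 22050)          # play only the second quarter of the file

indices = [phase_to_index(p / 10, *region) for p in range(11)]
print(table[indices[0]], table[indices[-1]])  # 11025 22050
```

Changing the region plays a different slice of the file without reloading anything from disk.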

At this point our study was interrupted by the strikes and the Easter holidays so I spent the intervening seven weeks developing my generative audio system which I’ll cover in detail in a later post.

During our first session back we performed three pieces: Radio Music by John Cage, From Unknown Silences (part of Four Meditations for Orchestra by Pauline Oliveros) and Dot Piece by John Stevens. These pieces were interesting for me because I have not previously encountered music which is scored in this way. I am not really a music theorist in the sense that I don’t use Western notation and only use the bits of music theory that are useful for my practice, such as scales and modes.

From Unknown Silences (1996)

This is an invitation to create and play single independent sounds — no melodies. One unique sound at a time.

Sound means any type of sound including pitched or non-pitched (noise) to make an inclusive and very wide variety of sounds.

Silence before — silence after each sound, listening for the beginning and ending of each sound. Each sound and silence as dissimilar as possible in every aspect.

Duration of sounds and silences are free — silences need as much variety in duration as sounds.

Articulations and dynamics include the full range that is possible.

The piece is over when there is no more sound.

An optional soft ensemble performs the same score at a very low dynamic level (ppp) moving slowly in and around the audience and adjoining spaces such as the foyer or other entry ways. Depending on the duration of the piece the performers gradually make their way to the stage to be with the other players for the final silence.

The above is the score for From Unknown Silences. To perform this piece we each used either an instrument provided by Tim or our own instruments/patches that we’d been working on. As my patch is generative and essentially does whatever it wants I was unable to use mine so instead I used Tim’s Teenage Engineering OP-1 synthesiser (which was very cool!). We had some problems during our first performance because we were not really playing as an ensemble and the result was rather cacophonous. After a short break we tried again and really focused on listening to one another and responding rather than doing our own thing and that really seemed to help. The piece went on for a minute or two longer than it probably should have but overall it was a much more satisfactory performance.

HERE is a link to a YouTube video of the orchestra of the WIU School Of Music performing From Unknown Silences and doing a decidedly better job of it than we did.


Shown above is the score to John Cage’s Radio Music. This piece is played using radios, with the numbers representing the frequency to be tuned to and the lines representing breaks. Players can play the frequencies and breaks for as long as they like and the piece is finished after everyone has played the last frequency. We also performed this piece twice, the first time being more free-form and the second having a tighter time limit. Of note for me was the sheer variety of sound that came through. We heard everything from snippets of popular music and classical music to speech. Of course there was an awful lot of static and white noise present too. This piece is interesting because its very nature means that every performance is unique and it changes depending upon where in the world it is performed. In a sense I suppose you could call it somewhat generative.

HERE is a link to a YouTube video of an ensemble performing this piece. Again, doing a decidedly better job of it than we did.

The final piece we performed was Dot Piece by John Stevens from his book Search and Reflect. I was unable to find a copy of this particular score online but it essentially consists of creating the shortest sound possible and playing that sound in response to and as part of the ensemble. Of interest to me here was its emphasis (for me) upon the spaces between the sounds. It’s an old adage in music that ‘silence is as important as sound’ and for me that kind of minimalism is very important in my work as a composer/producer.

HERE is a link to an ensemble performing Dot Piece.

Overall this session was very interesting for me. Admittedly not so much in a way that directly influenced my practice for this module, but more in that it encouraged me to think about music and sounds I hadn’t previously considered. This kind of indirect input will no doubt shape my practice in more subtle ways.


Most recently we revisited the project we did during the induction week where we used the oscillators on a 4093 NAND chip to create sound (diagram shown above). We wired them up to potentiometers and generated variable tones with them. I very much enjoyed this activity during induction week so it was good to revisit it. I was able to put together 3 separate oscillators and have them play in unison which was very satisfying.


Also during this session John showed us a piece named Ohm-My-God which utilised a bowl of random components, a 9v battery, some spoons and some connecting wires. He agitated the components to create random circuits and probed the bowl to give us an audio output. This piece was very interesting because it highlighted that pieces can be based on very simple ideas but still yield conceptually interesting results. It also raised issues to do with performance gestures, and we were asked to think about how our movements reflected the music we were performing and how that relates to the audience’s experience.

Study One for Random Circuits by J.M. Bowers (which uses a variation on the above technique) from his website.

HSS 8121 – Public Making – Post 5 – The Patch

HSS 8121, Public Making

In this post I’ll be discussing how I’ve gone about creating the Pure Data patch, at least to its current functioning beta state. I’ll be looking to tweak things here and there before it’s actually finalised and I’ll discuss those changes in later posts.

The first thing that I needed to do to create this patch was source samples of the sounds that I’m using. I found those samples in various locations including YouTube, The INSPIRE project and NASA’s Soundcloud page.
Obviously it would have been preferable to acquire the samples myself by recording VLF radio directly, but since I don’t have access to a VLF radio system that isn’t possible. Even if I did, any recordings I made would almost certainly not be clean enough for the purposes of this project.
That said, one of the tweaks I’m looking to make is to create my own versions of those samples using synthesizers, DAWs and so on. The sounds themselves are pretty simple so it shouldn’t be too difficult to achieve. Creating the samples myself will give me the freedom to speculate on how VLF signals would behave on an alien planet.


The above section of the patch is essentially its ‘brain’. It uses a combination of metros, randoms, drunks and select nodes to generate random bangs at what is generally a fairly slow rate (though occasionally that rate will increase to reflect periods of increased activity). I have deliberately left some of the outputs from the select node empty because I do not want it to trigger something every time it is banged.
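A hedged Python sketch of this ‘brain’ (step sizes, ranges and which select outlets are wired are all illustrative, not read from the patch):

```python
import random

def drunk(current, step, maximum, rng):
    """Mimic Pd's [drunk]: a bounded random walk between 0 and maximum."""
    nxt = current + rng.randint(-step, step)
    return max(0, min(maximum, nxt))

def brain_tick(state, rng, wired=(0, 2, 5, 7)):
    """One metro tick: drunk-walk a value, then route it through a
    select whose other outlets are deliberately left empty, so many
    bangs trigger nothing at all."""
    state = drunk(state, 2, 9, rng)
    fired = state if state in wired else None   # empty outlets do nothing
    return state, fired

rng = random.Random(1)
state = 5
for _ in range(5):
    state, fired = brain_tick(state, rng)
    print(state, fired)
```

Leaving outlets unwired is what keeps the overall trigger rate sparse.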


Most of the select node’s outputs trigger sferics. As I mentioned previously, sferics are by far the most common signal received by VLF radio systems and as such are triggered the most frequently. I have loaded a long sample into my computer’s memory and the number ranges reflect which part of the sample is triggered. Also of note here is the main DAC output.


Every time a sferic is triggered it is delayed for ten seconds and then fed into a new random node and a new select node. From there it will sometimes trigger tweeks and whistlers (which are sometimes caused by the same phenomena that cause sferics). Tweeks are more common than whistlers and so are triggered more often. Whistlers are less common and come from further away so there is another ten second delay before they are triggered. These samples are saved as separate files rather than loaded into memory.
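The cascade could be sketched like this in Python; the ten-second delays come from the patch description, but the tweek/whistler probabilities are illustrative guesses:

```python
import random

def cascade(rng):
    """After each sferic, wait 10 s, then a fresh random/select decides
    whether a tweek (common) or a whistler (rarer, with an extra 10 s
    delay) follows. The probability split is illustrative only."""
    events = [(0.0, "sferic")]
    roll = rng.randrange(10)
    if roll < 5:                        # half the time: a tweek
        events.append((10.0, "tweek"))
    elif roll < 7:                      # less often: a whistler, delayed further
        events.append((20.0, "whistler"))
    return events                       # remaining rolls trigger nothing extra

print(cascade(random.Random(3)))
```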


This section triggers chorus waves. It works on the same principle as the sferics section in that it calls upon a longer sample loaded into memory. These are relatively rare phenomena and so happen less frequently.


Every time a chorus sample is triggered, a solar flare sample is triggered too. This is because chorus waves are caused by solar flares. Again, these are individual files.


This image shows the chorus and sferics samples loaded into memory.


This shows the plasmaspheric hiss and Schumann resonance sample players. These samples are persistent and so are not triggered by the same part of the system as the other samples. Instead they are triggered separately as soon as the program is run.


This is the meteor section. It runs from the main select node and triggers short samples of radio emissions from meteors entering the atmosphere. These are also relatively rare and so are not triggered as often as some of the other phenomena.


This image shows the FX section. I have added delay and reverb to chorus waves, whistlers and meteors in order to increase the density of the samples, reflecting increased activity of the phenomena.

The patch, along with samples, can be downloaded HERE.

HSS 8121 – Public Making – Post 4 – The Sounds

HSS 8121, Public Making

As I mentioned in my previous post, I have decided to focus on creating a Pure Data patch which replicates the behaviour of signals received by VLF radio systems, which are used to listen to the natural radio emissions of the Earth and natural terrestrial phenomena such as lightning. It is a generative system intended to represent the radio emissions of our fictional Earth-like planet. After discussing the project with some experts on VLF radio I feel I have finalised the sounds that I’ll be using and decided how they are going to behave. I will cover the Pure Data patch itself in my next post, but for now here’s an introduction to the sounds I’m using, how they are created and how they behave.

Sferics
Sferics are extremely common radio emissions produced by lightning. They can range from a few hertz to millions of hertz and can be picked up from 3000 miles away or more because the emissions bounce between the Earth’s surface and the ionosphere. They sound like clicks or pops, similar to dust on the surface of a vinyl record, and are characterised on the spectrograph by vertical lines indicating all of the frequencies arriving at the receiver at once.

Tweeks
Tweeks are sferics that have travelled a very long way (up to 20000 miles) from their source to the receiver. Over this distance the signals undergo a process called dispersion, which means that their frequencies do not all arrive at the receiver at the same time. The higher frequencies travel faster than the lower frequencies and the result is a musical, falling tone. Note their distinctive low-end hooks on the spectrograph.
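To illustrate the effect (not the exact physics), a short Python sketch can synthesise a falling tone by gliding an oscillator from a high to a low frequency; the frequency range and duration are invented for the example:

```python
import math

def tweek_sweep(f_hi=4000.0, f_lo=1700.0, dur=0.05, sr=8000):
    """Sketch of the falling tone: an exponential sweep from high to
    low frequency, since the higher frequencies arrive first. The
    values are illustrative, not measured tweek parameters."""
    n = int(dur * sr)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / n
        freq = f_hi * (f_lo / f_hi) ** t      # exponential glide downward
        phase += 2 * math.pi * freq / sr      # accumulate phase at that frequency
        samples.append(math.sin(phase))
    return samples

tone = tweek_sweep()
print(len(tone))  # 400 samples at 8 kHz
```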

Whistlers
Whistlers are a relatively rare form of sferic that have travelled even further than tweeks, so their dispersion is even more pronounced. Under the right conditions the radio emission from a lightning strike can travel along the Earth’s magnetic field lines, which takes it into outer space through the ionosphere and magnetosphere and back to the Earth’s surface, where it can be picked up by radio receivers. This huge distance means that the lower frequencies arrive even later than those of tweeks and the resulting audio artifact has a distinctive high-to-low whistling quality. This results in a very distinct curve on the spectrograph.

Auroral Chorus
As the name suggests, auroral chorus are emissions linked to the aurora. The aurora itself is caused by solar flares which cause many billions of high speed particles from the sun to wash over the earth. Some of these particles hit the Earth’s magnetosphere and cause it to deform and pulsate. These pulsations result in radio emissions which we hear as auroral chorus. They are represented on this image by the dense curved lines in the top part of the graph.

Solar Flares
Those particles from solar flares which pass through the magnetosphere and are instead diffused by the Earth’s upper atmosphere produce the phenomenon we see as the aurora. These particles also produce radio emissions which can be picked up by VLF receivers. They sound like a wave of static, slowly building up before receding.

Plasmaspheric Hiss
Also linked to solar flares is a phenomenon known as plasmaspheric hiss. This little-understood phenomenon is thought to be the result of chorus waves travelling at near light speed being deflected into the Earth’s upper atmosphere. They are then diffused by the atmosphere, resulting in their structureless spectral properties and a sound akin to swelling radio static or breathing. The middle chart below represents plasmaspheric hiss.

Meteors
Large meteors known as fireballs can also generate radio signals. As they enter the atmosphere they leave a trail of high-energy plasma which excites electromagnetic oscillatory modes, including those at ELF and VLF radio frequencies. This results in a clean, chime-like tone. In this image, the short horizontal lines represent meteors entering the atmosphere.

Schumann Resonance
Schumann resonance is a global electromagnetic resonance excited by lightning strikes resonating between the Earth’s surface and the ionosphere. This results in standing waves at 7.83 Hz. This frequency is below the range of human hearing and so it is not generally audible, although its harmonics can sometimes be heard under the right conditions in certain locations. From time to time it will modulate, and those modulations can be described using spectrographic analysis. I have not included an audio file of the Schumann resonance because it is below the range of human hearing; however, I will be using it in the patch to subtly modulate some of the other sounds, which should mean it is subconsciously picked up by the audience.
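One way to sketch that modulation in Python: treat the 7.83 Hz resonance as a sub-audio LFO on another sound’s gain (the modulation depth here is an arbitrary example value):

```python
import math

def schumann_lfo(seconds=1.0, sr=1000, freq=7.83, depth=0.1):
    """Use the 7.83 Hz Schumann resonance as a sub-audio LFO that
    gently modulates the gain of another sound. The depth (±10%)
    is illustrative, not a value from the patch."""
    n = int(seconds * sr)
    return [1.0 + depth * math.sin(2 * math.pi * freq * i / sr)
            for i in range(n)]

gains = schumann_lfo()
print(min(gains), max(gains))  # stays within 1 ± depth
```

Multiplying another signal by these gain values produces a slow, barely perceptible pulse at the resonance frequency.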


HSS 8121 – Public Making – Post 3 – Diversion

HSS 8121, Public Making

Recently I’ve been speaking with one or two experts on VLF radio. Their input into this project has been hugely helpful in understanding how the signals received by VLF radios behave. I’m currently working on integrating this information into my Pure Data patch.
VLF radio, and the audio artifacts inherent within those systems, is a very complicated and wide-ranging area of study. As such I feel that it’s best to drop the elements from the Van Allen probes and focus on VLF radio alone. My reasoning is that, because scientific accuracy is my top priority, it would be too much of a stretch to learn about both VLF radio and planetary radiation. I also feel that this decision makes the project more focused. The current situation is that my system will recreate, as accurately as possible, the natural radio emissions of an ‘Earth-like’ planet and its planetary phenomena.
With that in mind the sounds I am working with are: electrostatic hiss, Schumann resonance, lightning, meteors, sferics, whistlers, chorus waves, solar flares and tweek waves.

Stanford University’s VLF radio antenna in Antarctica.

HSS 8121 – Public Making – Post 2 – Continuation

HSS 8121, Public Making

After the group’s visit to Allenheads (which sadly I was unable to join), the group felt as though many of the ideas being mooted were disjointed and unsatisfactory. My own idea for a generative and scientifically accurate planetary audio system was an exception, so that idea, along with one or two others, was kept and the group decided to discuss new angles from which to approach the project.

After much discussion it was decided that the installation would consist of an inflatable planetarium set up in the ballroom in Culture Lab containing a telescope. Upon looking through the telescope the observer will see and hear performance pieces streamed in real time from Allenheads. The (domed) outside of the planetarium will have projections beamed onto it which make it look like a planet, the screens in the ballroom will have alien landscapes and other extra-terrestrial themed imagery projected onto them and my audio system will be running as persistent ambient noise at both locations.

Currently, I am working on perfecting the volume and frequency at which the chosen categories of sonification are triggered by the generative audio system. The sounds I have chosen are: plasmaspheric hiss, whistler waves, tweek waves, solar flares, comet tail radiation, terrestrial lightning, microseismic hum and meteors. All of these sounds have been selected either because they are sonifications derived from data picked up by NASA’s Van Allen probes, which are used to study the radiation belts which surround Earth, or because they can be picked up by VLF radio receivers, which pick up the radio emissions produced by the Earth itself and by planetary phenomena such as lightning. Many of the sounds picked up by VLF receivers are outside of the normal range of human hearing and as such have been altered so as to bring them into our audible range. As I mentioned in a previous post, the sounds derived from the Van Allen probe readings were not originally audio and so have been converted. I feel that using these two categories of sound gives a good overall balance of sounds both emitted and received by the Earth and, by extension, our Earth-like planet.

I will go into more detail about the sounds themselves and the process of building the Pure Data patch in future posts.

HSS 8121 – Public Making – Post 1 – Overview

HSS 8121, Public Making

For our Public Making project we decided that we would like to work with Allenheads Contemporary Arts on their Beyond project.

From their website:

BEYOND is an ambitious project involving an eclectic mix of artists, curators, scientists and audiences to take advantage of its new on-site astronomical observatory and to consider the word BEYOND as an open ended starting point for discussion, research and the creation of new work.
The project will offer regional and international artists the opportunity to take part in a stimulating residency programme designed to generate a series of public outcomes including exhibitions, workshops, performances, screenings and symposia.
ACA is offering artists and audiences the resource and facilities of the new astronomical observatory on its premises. This will serve as a tool to explore distances which provoke our grasp of the universe and open up wider philosophical discussions.
By definition the word BEYOND lures us to a place that has not yet been arrived at or is further than we can currently reach. Through BEYOND we want artists to take their ideas and their practices to places not yet encountered.

The consensus among our group was that we liked the open-ended nature of this brief and were interested in and excited by the projects which had taken place there in the past. After discussion we decided that for our overall theme we wanted to create a fictitious planet which can be both seen and heard through interactive and generative audio and visual elements and experienced through live performance. We decided that it would explore the connection between cultural and mythological ideas of Beyond and the ways in which we learn about and understand Beyond through science.

The crux of our piece will be an installation which presents the generative (or semi-generative) audiovisual elements (the science bit) and is used as a performance space (the mythological bit). The idea being that the audiovisual elements are used as a persistent backdrop for the performance, tying the two together. This will also be viewable remotely from Culture Lab as part of the Late Shows.

For my part I decided to focus on creating a generative audio system which will provide an ambient sound backdrop for both the performance and the installation. This audio system will be designed in Pure Data and will use semi-generative audio based on information gathered by NASA’s Van Allen probes, which analyse cosmic radiation. That information can then be converted into audio, allowing us to perceive that which we ordinarily could not. Our planet is an ‘Earth-like’ planet, so while this radiation will be unique to that system it will be scientifically accurate based upon current knowledge of how the Earth produces and reacts to radiation, electromagnetic fields and so on.

To my knowledge, Pure Data is not capable of sonifying images in a way which is suitable for this project and so I will be using the spectral editor in Camel Audio’s Alchemy soft synth and a freeware program called Paint2Sound.

Here are some of the images I will be looking to use as source material:

Whistler Wave

Electro-Magnetic Hiss

Chorus Waves

Electron Fluxes

HSS 8120 – Tiny Datum – Post 5 – Critical Reflection

HSS 8120, Tiny Datum

Big Data

In order to understand the idea of Tiny Datum we must first understand that which it is a reaction to: Big Data.

Big Data is the term used for the collection and analysis of very large amounts of data. This data can take many forms, such as our Facebook interactions, Google searches, online and physical shopping tendencies, call centre conversations and virtually anything else you’d care to imagine. In today’s technologically orientated world, information about almost every facet of our day-to-day lives is collected and stored. The effects of Big Data are most clearly visible in the targeted advertisements we encounter on Facebook and similar platforms. Our Google searches are stored and our shopping history is analysed by advert providers, who then show us adverts based upon that history.

“An IDC study shows that in 2010, there have been 1.2 zettabytes (1,200,000,000,000,000,000,000) of information, a trillion billion bytes of information to be managed and analyzed. It is estimated that by 2020, there is going to be 35 zeta bytes of information. For instance, Twitter alone generates more than 7 Terabytes of data every day, Facebook generate 10 TB. From 2010 to 2020, data is to increase 44X from 1.2 Zettabytes (ZB) to 35.2 ZB. Enterprises are facing massive volumes of data.” Shen, Y et al. (2014).

“About 90% of this information being created is unstructured, like website clicks, mobile phone calls, Facebook posts, call center conversations, tweets, videos and emails.” Gens, F. (2013).

Naturally this huge amount of unstructured data can be very, very difficult to analyse and produce meaningful results from. This is why the fields of data science and analytics have recently become so important. People who are able to find ways to make sense of and find patterns within this data in order to, say, find marketing opportunities for a supermarket, pitch a product to a certain demographic or improve the flow of foot traffic around a shopping centre can expect to earn large salaries and build successful careers.

The implications of Big Data are vast and naturally the backlash against it is in full swing. Opponents say that storing such a huge amount of data on our day-to-day lives is an affront to privacy and fear that the information could, for instance, be used to discriminate against us (car insurance companies are known to charge higher premiums to people who drive at night, for example), to spy on us or to leave us vulnerable to hacking and cyber crime.

So, we know what Big Data is, now what is Tiny Datum?

Tiny Datum

From the assignment brief:

“As a provocation and opposition to Big Data, Tiny Datum is something that displays and interacts with a very small number of data points during its existence. Tiny Datum’s data does not have to be stored or shared, its behaviour can be entirely ephemeral and self-fulfilling. Tiny Datum’s datum can relate to its own data, for example its power usage or temperature, rather than to any economic, governmental, security or scientific concern, as is typically the case in big data research. Tiny Datum should take a minimalistic approach to visualisation, sonification or data analysis.”

Tiny Datum subverts Big Data and points out that “there is more to data than just size”. It relates to the ideas of minimalism and reductivism. Where Big Data seeks to collect and use huge amounts of data, Tiny Datum seeks to engage with the smallest data sets possible and use them in such a way as to show that there is depth and relevancy in even that which could be seen as insignificant and irrelevant due to its small size.

Since we’re tangentially drawing upon the ideas of reductivism and minimalism it might be worthwhile just to spend a moment and understand what we’re talking about when we use those terms. Obviously, they’re both huge subjects and it would be impossible to go into any great depth without writing a few thousand words but I’ll attempt to give a bit of an overview of those topics, specifically how they relate to music and to my project.

Minimalism is a post-WWII art movement that found its origins in the United States. It is a movement which emphasises purity as its central conceit, asking the viewer to engage only with what they see as part of the piece rather than relating it to any outside reality. Musicians working within minimalism (including Philip Glass, Terry Riley and Michael Nyman) seek to break down music to its bare essences and concentrate on the power of sonic purity rather than the emotions that those sounds may evoke. My piece engages with minimalist elements in that it is centred around sine waves (the fundamental building block of synthesised sound and something which is a fundamental natural constant), with every element of what is quite a large and textured piece deriving from that simplest of waveforms. As a composer I use elements of minimalism in my standard studio-production-based (as opposed to fine-art-based) practice. I often use simple two- or three-note melodies, simple chord sequences and spacious, clear drums. This simplicity is then subverted by deep textures and layering to create pieces which sound rich and deep. This is the approach that I have taken with my installation: the juxtaposition between the simple root of the piece and the rich, dynamic outcome. This ties into the Tiny Datum theme because from this simple root comes something large and complex, showing that there is indeed a great deal that can be derived from even the smallest and simplest of data sets.

Steve Reich’s Four Organs. An example of minimalist music. Reich, S. (1970).

Reductivism in art refers to a process rather than a movement. It is a process of simplifying, consolidating and condensing. The images I posted in an earlier blog by the artist Jason Shulman (who condenses entire films into a single frame) are an example of reductivism, as is Kyp’s reduction of The Story of O to punctuation marks and the letter O for this project. Reductivism can be a means through which we arrive at work which could be called minimalist, through the simplification of larger and more complex works. In musical terms reductivism is very much the same: simplifying a piece of music in order to analyse it, make it easier to play or alter its aesthetic so as to engage with it in a novel manner. I am not a technically gifted instrumentalist and so reductivism is often a very useful tool for me. As a matter of fact, I often disengage with music which I view as unnecessarily complex. For instance, as a producer I often work with session musicians if I need someone to play certain parts or certain instruments that I am not capable of playing myself. I always allow them a certain freedom to put their own stamp on the track but there’s one thing I find myself saying to them more than anything else: “Play less”. This is because session musicians, as gifted as they are, will tend to overplay a part, distracting from the overall tone of the music by focusing on technical prowess and complexity over clarity. I think by reducing a work we can arrive at its very essence, stripping away the external layers to allow us a glimpse into the very heart of the piece. It is a process which brings clarity.

Jason Shulman’s Voyage de la Lune. An example of reductive art. Shulman, J (2016).

At its core, Tiny Datum is about simplicity. We are working with tools that encourage us to think creatively about how they can be used because of their small size and the fact that those small sizes force us to engage directly with that data set rather than consider any inherent externals. The attendant problem with that is that because we need to engage with something so small we need to find a data set which allows us a degree of flexibility in how that data can be presented. This is how I arrived at the idea of using sine waves.

My Creative/Design Project

As I’ve mentioned previously, sine waves are more or less the simplest form of sound imaginable and are something that is found throughout nature. This fundamentality was appealing to me for a number of reasons: Firstly, the fact that something so simple could be so important that it is central to so many fields (including physics, mathematics, engineering, geometry and of course music) shows that it has a significance which belies its simple form. Secondly, as someone who has long been a user of hardware and software synthesisers I have a familiarity with the sound of a sine wave, its uses and its potential. Thirdly, it was simply the most basic sound I could think of which would give me the flexibility I would need to create something which (I hope) is interesting to listen to and diverse enough to function as a coherent piece of music.
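As a small illustration of that flexibility, a handful of sine oscillators summed together already yields a complex layered tone; the frequencies below are arbitrary example partials, not the ones used in the installation:

```python
import math

def layered_sines(freqs, seconds=0.1, sr=8000):
    """Sum several sine oscillators into one complex layered tone,
    normalising by the number of voices so the mix stays in -1..1."""
    n = int(seconds * sr)
    out = []
    for i in range(n):
        t = i / sr
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        out.append(s / len(freqs))   # normalise the mix
    return out

tone = layered_sines([110, 220, 331, 443])   # slightly detuned partials
print(len(tone))  # 800 samples
```

Detuning the partials slightly, as here, is what gives the stack its slowly shifting, beating character.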

I have covered the installation itself in other posts so I won’t go into that again here but there are one or two other issues that have arisen that I would like to address.
The interactive element of the installation has changed somewhat since it was originally conceived. I intended it to be a wholly reactive work which would change dynamically based upon the position and movement of the user. This turned out to be impractical, because the rapidly changing effects parameters caused unwanted sonifications. Instead, it has been imbued with something more like a sense of memory. With five-second timers on the ultrasonic sensors it, in a sense, remembers where you’ve been within the space and changes the music accordingly, similar to the History Tablecloth (Gaver, W. et al. (2006)) which John was involved with. As a user of that piece said: “It’s like a cat. You can interact with it but you can’t tell it what to do”. It encourages and responds to interaction without over-responding. It encourages slow movement around the space, which allows you to soak up the ambiance and really listen while still feeling part of the piece and having an effect on how it performs itself. It makes you feel as though you are inside, and part of, the music.
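The “memory” behaviour can be sketched roughly like this (a Python sketch of the logic only, not the actual Pure Data/Arduino implementation; the five-second hold is the one described above, while the class and method names are my own):

```python
import time

HOLD_SECONDS = 5.0  # how long the piece "remembers" a visitor at a sensor

class SensorMemory:
    """Remembers when each ultrasonic sensor last saw someone, and keeps
    the corresponding musical change active for HOLD_SECONDS afterwards."""

    def __init__(self, n_sensors):
        self.last_seen = [float("-inf")] * n_sensors

    def trigger(self, idx, now=None):
        """Call when sensor idx detects a visitor."""
        self.last_seen[idx] = time.monotonic() if now is None else now

    def active(self, idx, now=None):
        """True while sensor idx's zone is still 'remembered'."""
        now = time.monotonic() if now is None else now
        return now - self.last_seen[idx] < HOLD_SECONDS
```

Because each zone decays independently after five seconds, the piece responds to where you have been rather than twitching at every movement.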
The main theme is really one of showing that something very large and interesting can be created from something very, very small and potentially insignificant. That’s the reason I chose to use an octophonic setup with four computers. It also creates a nice visual symmetry which I found quite pleasing.
I have found that by switching off the lights I can achieve quite a nice ambiance: having the piece lit by the computer screens, flashing Arduino lights and speaker lights alone creates a nice effect and allows one to focus more on the sound. This was influenced by a performance by John Cage et al. in which they had high tables with pools of light underneath. I also decided to show the Pure Data patches onscreen as a way of further integrating the audience with the piece, making one feel as though one is inside it and part of it: showing the guts, if you like. I’m happy with this visual dynamic overall and I think it ties in to the theme of minimalism to the extent that it is not overthought or overdesigned; it just presents you with its essence.

My installation under construction.

My Presentation

My presentation continues the sine wave theme and functions in a similar way to my installation piece: an interesting way to present something that is very, very simple. I have opted for a purer approach here, in that the sounds I am using are recognisably sine waves or derived therefrom (the installation piece uses very heavily effected and layered sine waves, to the point where, while you can tell where those sounds are derived from, it is not particularly overt). This is because I wanted to present the idea in a purer form and to show how sine waves can be used by themselves to create something greater than the sum of its parts.

Citations

Gaver, W. et al. (2006). The History Tablecloth: Illuminating Domestic Activity. In: Proceedings of the 6th Conference on Designing Interactive Systems. University Park, PA, USA.

Gens, F. (2014). Competing on the Third Platform. [online] Available at: http://www.idc.com/research/Predictions13/downloadable/238044.pdf [Accessed 18 Jan. 2018].

Reich, S. (1970). Four Organs/Phase Patterns. [Vinyl] Paris, France: Shandar.

Shen, Y. et al. (2014). Big Data Overview. In: Y. Shen et al. (eds), Enabling the New Era of Cloud Computing: Data Security, Transfer and Management. Hershey, PA, USA.

Shulman, J. (2016). Photographs of Films. [online] Available at: http://www.jasonshulmanstudio.com/photographs-of-films/ [Accessed 18 Jan. 2018].

HSS 8120 – Tiny Datum – Post 3 – Oral Presentation

HSS 8120, Tiny Datum

For this task we were asked to put together an oral presentation based on the Tiny Datum theme outlined in my previous posts. I decided to use this as an opportunity to do a live performance based on the sine wave theme I decided upon for my design/creative project.

“This presentation should not be ‘about’ your project (i.e. it shouldn’t be an explanation of what you did) but should be its own piece of creative work. It might be that you use the same data source as your design/creative project and do something different with it. It might be that there’s no relationship at all. Above all we want to avoid conventional power point presentations and think about interesting or unusual ways of talking, showing, playing or otherwise responding to the theme.”

This probably gave me more headaches than the design/creative project because I am not really a performer by nature, and we were essentially being asked to put together a short performance (in a broad manner of speaking) based on this theme. I considered a number of options, particularly inspired by the work of Jason Shulman, a photographer who condenses entire films into one frame, producing beautiful abstract images. I decided to try a similar idea with music and shorten well-known pieces of music to around ten seconds or so. This didn’t go terribly well, as you might imagine, and I just ended up with some rather nasty bits of noise. Needless to say, I binned that idea.

Here are some examples of Jason Shulman’s work.

Alice In Wonderland

The Shining

From there I considered ways in which I could explore the sine wave theme in a live performance context. My natural inclination was to go to Ableton Live, a music production suite that can also be used for live performance. I have some experience with Ableton and have performed with it in the past, so it was a natural choice. Essentially what I’ve done is use a suite of music production tools to turn some very, very simple sounds into something that is much more interesting to listen to and can be layered in such a way as to produce a live performance.

The first sample I used is one cycle of a 440 Hz sine wave, which produces a simple click. Clicks are very useful in sound design as their staccato nature allows us to create percussive elements.

The second sample is simply a longer version of the sample above: a one-second burst of a 440 Hz sine wave.
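To make the distinction between the two samples concrete, here is how they could be generated (a Python/NumPy sketch for illustration; the originals were bounced from my production tools):

```python
import numpy as np

SR = 44100  # sample rate in Hz
FREQ = 440  # A440

# One cycle of a 440 Hz sine lasts 1/440 s, i.e. roughly 100 samples at
# 44.1 kHz: far too short to be heard as a pitch, so it reads as a click.
one_cycle = np.sin(2 * np.pi * FREQ * np.arange(round(SR / FREQ)) / SR)

# A one-second burst of the same wave is heard as a steady A440 tone.
burst = np.sin(2 * np.pi * FREQ * np.arange(SR) / SR)
```

The only difference between the percussive click and the sustained tone is duration, which is what makes such a tiny sample so flexible as raw material.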

Here’s a screenshot of the Ableton Live setup itself. Note the single samples in each channel.

The primary problem I encountered was, unsurprisingly, the simplicity of the sine wave and the inherent difficulty of getting variety and depth into the piece. I was able to get around this by using automation, pitch shifting, filter delays, modulation effects, reverbs and the like. Each channel is set up with only one sample, and the idea is that they can be layered and their parameters tweaked live to create what I hope will be at least a somewhat compelling performance. I’ve rehearsed it a few times and it seems to work, so hopefully it’ll be ready for the 19th.

In my final post on Tiny Datum, which should be up towards the end of next week, I will be posting audio and footage of both projects at work and reflecting critically on the project and its themes.

HSS 8120 – Tiny Datum – Post 2 – Design/Creative Project

HSS 8120, Tiny Datum

This post deals with the design project part of the Tiny Datum assignment. Initially I was planning to produce a piece of music which uses only sine waves and processed sine waves as the building blocks of the track. After working on that for some time I found that, while the idea is practical to an extent, it was not giving me the sort of results I was after, so I decided to change it up and turn it into a sound art installation.

The reason I found it didn’t work for producing a piece of music in my usual style is that sine waves are simply too simple and too clean, and do not give me the kind of dense, complex harmonics that I usually use as part of the production process. When you’re limited to one type of sound, even with a full suite of production tools and FX to process it, you naturally end up with a sound which is very one-dimensional. Sine waves in particular, with their very pure properties, mean that the outcome of the production process is very clean and, even when processed and layered into something complex, can sound very simple and basic. This is not conducive to the sort of music that I make and enjoy making, and as a result I decided to adapt it into something more suitable.

As I mentioned in my previous post, I am planning to present my work as a sound art installation over four sets of stereo speakers, using ultrasonic sensors to add interactive elements. I have adapted the music so that it has an altogether more ambient feel while maintaining certain rhythmic elements and a sense of meter. This is a much more suitable way to present sine waves and, with the interactive elements, should make for an interesting piece of work.

In terms of interactivity I have put together four patches in Pure Data, each consisting of an audio effect controlled by a sensor via an Arduino. The idea is that the closer the listener is to the speakers (which will be arranged in a circle as large as the space (hopefully the sound studio) will allow), the higher the number being fed to the FX unit and thus the greater the depth of the effect. Here are some images of the Pure Data patches that I’ve made for this.

Vibrato Effect

Reverb Effect

Delay Effect (adapted from code found online)

Chorus Effect
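The mapping shared by all four patches can be sketched as follows (a Python sketch of the logic only; the actual scaling happens in Pure Data, and the 10–200 cm distance range here is hypothetical, not the installation’s measured one):

```python
def distance_to_depth(distance_cm, min_cm=10.0, max_cm=200.0):
    """Map an ultrasonic distance reading to an effect depth in [0, 1]:
    the closer the listener is to the speaker, the deeper the effect.
    The 10-200 cm range is illustrative only."""
    clamped = max(min_cm, min(max_cm, float(distance_cm)))
    return (max_cm - clamped) / (max_cm - min_cm)
```

Clamping the reading first keeps spurious sensor values (a common quirk of ultrasonic modules) from pushing the effect depth outside its usable range.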

Here is a sample of the music. This is still very much a work in progress; the finished piece will be longer and have more variation. Please also keep in mind that this is not intended to be presented as you’re most likely listening to it: on one set of stereo speakers. There are 24 channels on this track, each containing different elements of audio. When presented properly it will be as four separate audio files, each containing six of the 24 channels, the idea being that you can walk around ‘inside’ the piece and become a part of it depending upon how you interact with the sensors.

I will be posting another blog entry once the project has been fully assembled and tested which will hopefully contain video, finished audio and some critical analysis of both the project and the Tiny Datum theme itself.