
Affordance, Appropriation and Design by M. T. Schäfer

“Appropriation means that users integrate technology into their everyday practices, adapting and sometimes transforming its original design. It covers the use, the modification, the reuse and further development of artefacts in ways often unforeseen by the original designers (Dix 2007). Reacting to the initial design of an artefact and changing it according to other needs has been described as a common consumer and user activity (Pacey 1983). The material aspects of Internet culture and the effective possibilities for collaboration have only aggravated this practice on a global scale. Appropriation is related to affordance, because the material characteristics and the design choices affect the act of appropriation. Design and the specific material qualities form the basis for use and appropriation.
Figure 2. Affordance, appropriation and design.
As shown in figure 2, affordance, appropriation and design are interdependent. Affordance exists in both, namely the specific material features used for design, and in the design process, which also constitutes affordance. Design is the formalization of anticipated user activities through the use of certain materials or technologies and the shaping of these into artefacts that constitute the designated affordances. The challenge for design is to employ material characteristics accordingly. A prototypical example of contradictory design will be presented in the case of the Microsoft Xbox, a game console that actually had the typical characteristics of a personal computer but was limited, due to its design, to the functionality of a game console. Users hacked and modified the game console in ways unintended by the vendor. Microsoft learned from these acts of user appropriation and formalized several aspects into the design of the next game console, the Xbox 360, aiming to include several forms of game console use and attempting to exclude others that were more efficient than the older design. The labour of user communities, their innovations and their way of using a device were then formalized into new design decisions and therefore implemented in further developments. During all stages of development, the involved participants can be professional designers employed by a company, individual users, a collective of enthusiastic students, or a user community, a team of hackers and so on; all of these participants are users and producers.”

Quoted from

Schäfer, M. T. (2011) Bastard Culture! How User Participation Transforms Cultural Production. Amsterdam University Press, pp. 19–21.

Links +


I have been collecting links to smart textiles and textile technology, so the following are the main sites of interest, with a brief description and any particular sections that interested me. Rather than making a new post each time I find new sites, I will just keep this post updated.

This site covers the design of new medical devices; the page above links to ‘An Introduction to Designing With Nitinol’.

Medical design directs to this site for manufacture.

An interesting exploration of the ideas of ‘difference’ and ‘same’, the theory being that to understand difference you must study what is the same in order to know what is different. The experiment was compiled from marathon monks and the hat they wear when taking their ‘same’ route.



A more recently discovered material: strong, electrically conductive and with a multitude of interesting properties.

A good short piece describing a form of graphene: an antibacterial paper that can be mass produced.

Part of a study about the antibacterial strength of graphene.


Cuts up text and reorders it; could be interesting.

RFID chips in jeans watching your movements

BBC article on above.




A good blog I put in the bracket of ‘weeklies’ as I try to check them often.

future textiles!

It uses 40 litres of glycerol and over kilometres of plastic tubing.

Colours pulse through the tubes at different speeds giving the appearance of a dynamic skin that breathes and pulses across the landscape of the body.

robots and such

3D printer group

This site is full of buyable materials and some sample packs; it’s aimed at schools and colleges and has some very interesting materials. Amongst the interesting ones are the temperature sensitive fabrics and wires. Almost every item has a YouTube video embedded on the page to show you the item in use.

-I have ordered some sample packs and thermal dye and will comment further once I’ve experimented.

Solar active thread could be good.

Metalized thread on this link; however, the site specialises in threads, finishes and surfaces. The materials sound good but are yet to be obtained and tested. (Try to get free samples!)

^ Has moved to plug and wear.

Various textiles tech bits and pieces: LilyPad, conductive fabrics, lights etc. Also has some tutorials for getting started with the items they sell.

Wearable technology Italy, small site.

This link goes to the textile circuits page; there are 4 interesting fabrics with integrated technology, a few of which are very relevant to my glove project. However, because of the designs they could be complex to integrate with the silver. I will look into this further.

Seems to be in the research area and keeping up with it.

Blog contains interesting pieces, some of which I haven’t come across before.

magazine about new materials

culture magazine

NEWS UPDATE: ‘The Materials Library is expanding its remit to become the Institute of Making…’

‘fresh works from leading creative professionals’

Various materials

The Bureau of



Art & Design




interactive printed items

CRG Industries is a manufacturing company for commercial, defense, medical, and industrial markets.

liquid crystal polymer fiber

solar panels


A global industrial manufacturer of high quality precision coated films and blended liquids for use in the printing, automotive and electronics industries.

(English section under construction)

I hope they make glasses squeak when they hit people…



Page 59: temperature’s effect on skin, factoring in wind chill.

This goes on to freezing skin temperatures and materials’ effect in %.

another smart clothing book

Article on tagging jeans and more


COPPER, medical side

Penfield Mood Suite: Signal Visualizer User Interface

Whilst developing my audio visualizers I became increasingly interested in integrating various additional controls. This arose partly through the programming and experimentation, as well as my experience when testing them in a live environment. Whilst experimenting with various drawn shapes, I found that it was possible to produce a variety of different visual representations by merely changing the drawn shape from an ellipse to a rectangle, or even using more complex geometric patterns. By integrating a number of booleans into the code, with key-coded ‘toggles’ to activate and deactivate them, I was able to switch various visuals on and off, allowing me to use multiple variations in the visuals utilising the same basic structure. This provided an additional degree of interactivity whilst also broadening the capabilities of the software.
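The toggle scheme described above can be sketched in plain Java (Processing being Java-based); the flag names and key mappings here are illustrative, not the ones from my actual sketch:

```java
// Sketch of the key-coded boolean toggles described above.
// In the real Processing sketch these booleans gate drawing calls
// (ellipse vs rect etc.); the names and keys here are my own.
public class VisualToggles {
    boolean ellipses = true;
    boolean rects = false;
    boolean helix = false;

    // called from Processing's keyPressed() with the typed character
    void keyTyped(char k) {
        if (k == '1') ellipses = !ellipses;
        if (k == '2') rects = !rects;
        if (k == '3') helix = !helix;
    }
}
```

Each key press simply flips one flag, so the same draw loop can serve many visual variations.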

In terms of the coding, the use of boolean-controlled ‘if loops’ was not particularly problematic. What became increasingly difficult was finding a way of mapping the functions to a control interface. As I was using a laptop for development and performance, and given my experience with ASCII coding in MaxMSP, I decided to use the integrated keyboard and touchpad, largely for convenience. Although this was initially pretty good, as the number of functions increased, so too did the complexity of the control system. This was not such a problem for myself, but as I intended to display this software for public use and interaction, I decided I needed to come up with a new, more convenient system for user interaction.

I looked around at a variety of different available control systems, looking for small, convenient, integrated systems. After a bit of browsing, I managed to find a USB calculator with an integrated touchpad. As I had initially mapped many of the functions to the numerical keys on my keyboard and the mouse location, a port wouldn’t be too difficult. However, closer inspection of the interface revealed a small flaw: it was a case of using either the mouse or the keypad, with a button being used to switch between the modes. Although not a massive problem, I did find that in tests users often needed reminding that they could use both functions.

Given the multimodal emphasis in my research, as well as the suggestions on multimodal interface design, I felt that I needed to provide a greater degree of choice in the user interface design. I looked at other products that could supplement or replace the key/touch pad. One item that caught my attention was a trackball and scroll ring. Although I have not used trackballs or scroll rings much in the past, the haptic/enactive nature of the piece suggested that it could be appropriate as an interface whilst still being similar to the widely used mouse. Although the trackball did not come with keys, I was able to map the scroll ring to scroll through the various visuals by developing on the Processing mouse wheel code by Rick Comanje.

By using the data from the wheel to increase and reduce a variable integer, and assigning a number of loops to use the varying integer data to toggle visuals on and off, I was able to create an interface for the sketch that could be used to scroll through the various visual options. Here is the principle in pseudo-code. I decided to have a degree of overlap in the 12 functions as well, but the principle is similar:

You have 4 functions;
scroll up/left returns -1, scroll down/right returns +1;
variable int x = 0
x = x + data from scroll
if x = 1 { toggle function 1 on }
if x = 2 { toggle function 1 off, 2 on }
if x = 3 { toggle function 2 off, 3 on }
if x = 4 { toggle function 3 off, 4 on }

-back to the beginning/end
if x > 4 { x = 0 }
if x < 0 { x = 4 }
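For clarity, here is a hedged Java sketch of the same scroll-selection principle; the class and method names are my own, and in the real sketch the selected index drives the Processing booleans:

```java
// Sketch of the scroll-wheel function selector described above.
// x = 0 means no function active; 1..4 select one of four visuals.
public class FunctionSelector {
    static final int FUNCTIONS = 4; // four toggleable visuals
    int x = 0;

    // delta is -1 (scroll up/left) or +1 (scroll down/right)
    int step(int delta) {
        x += delta;
        if (x > FUNCTIONS) x = 0; // past the end: back to the beginning
        if (x < 0) x = FUNCTIONS; // before the start: jump to the end
        return x;
    }

    // which of the four booleans should currently be on
    boolean isOn(int function) {
        return x == function;
    }
}
```

Turning the ring one notch moves the selection one step, and the wrap-around means the user can scroll endlessly in either direction.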

Using these two interfaces, I have enabled users to choose how they will interact with the system, particularly whether they interact in a symbolic or enactive manner. As the two systems have similar responses (the touchpad offering more options for signal visualisation) the visualisation system can be operated almost fully (save for the initial signal input) from either of these two interfaces. With some modification to the keypad itself (covering the surface with a printed acetate sheet representing the different functions), my intention is to make the system even more intuitive. With additional control diagrams built into the installation, I believe that the control interface I have designed should be suitably interactive.

Penfield Mood Suite: Biofeedback Headset Design

After some initial testing with the BioWave headset, it became apparent that I should be using a bluetooth serial connection rather than connecting the Arduino to the computer via the USB port. The BioWave instructions suggest that I should use the sensor exclusively with their adapter. However, I thought that the price of this adapter was too high to be viable for use within this project. I also thought that since the software is based in Max, I would likely be able to develop a similar system in Max and Arduino. As the system is bluetooth based (to avoid distortion of the signal resulting from the mains power supply as well as the potential risk of user electrocution), I decided to build a similar, battery powered bluetooth system based on the Arduino BT. I had based much of my initial headset development on a USB serial cable, developing on the Arduino2Max code and patch.

To develop the bluetooth system, it was merely a case of uploading the sketch to a bluetooth Arduino and pairing the device with my computer. Data from the BioWave could then be manipulated in either Max MSP or Processing.

However, working further with the project I found that it was difficult to analyse the raw data from the sensor. Much of my work with data visualisation has been based in FFT analysis of an audio signal. Due to time constraints and the relative ease and convenience, I used the data from the sensor to modulate an audio signal in Max MSP. As I was interested in working with the raw data, I decided that the signal should be used to modulate the frequency of a bank of oscillators, similar to the wavetable synth used in the Binaural/Monaural Beat Generator. The audio from this biosignal controlled synth could then be routed internally using Soundflower, and visualised in Processing like any other live audio signal. I decided that the user should not be able to hear the signal, largely due to the irritating noise it makes, which could be seen as off-putting. I felt that this would be out of place given the immersive and dissociating nature of the piece.
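As a rough illustration of this routing, the mapping from a raw sensor reading to an oscillator frequency might look like the following in Java. The 0–1023 input range (typical of an Arduino analogRead) and the 100–1000 Hz output band are assumptions of mine, not the values used in the Max patch:

```java
// Hypothetical mapping from a raw biosignal sample to an oscillator
// frequency, in the spirit of the Max patch described above.
public class BioToFreq {
    static final double MIN_HZ = 100.0;  // assumed bottom of the band
    static final double MAX_HZ = 1000.0; // assumed top of the band

    static double toFrequency(int raw) {
        double norm = raw / 1023.0;              // normalise sensor value to 0..1
        return MIN_HZ + norm * (MAX_HZ - MIN_HZ); // linear map into the band
    }
}
```

The returned value would then set the frequency of each oscillator in the bank, so changes in the biosignal become audible (and therefore visualisable) as pitch changes.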

In constructing the headset, the sensor was mounted into the frame of a pair of modified toy night-vision goggles. The electronic components were harvested from the toy for future use, before removing sections of the frame to enlarge the headset for adult use. The frame was kept in place (save from some trimming) as well as the elastic fitting strap. After giving the goggle frames a coat of matt spray paint, the sensor was glued in place with the electrodes fitting the brow of the wearer. The board, battery pack and excess wires were placed in a zip-up wallet with a reasonably rigid frame (bought from Pound Land), which was then glued to the side of the headset with a hot glue gun. This allowed easy access to the board to turn the battery pack on and off and to replace dead batteries. The resultant headpiece makes for quite a convenient biofeedback interface for use within the piece. The bluetooth connection allows the headset to be used without wires, whilst the whole thing is sufficiently small to be used alongside headphones. The whole unit (consisting of the frame, sensor, board and battery pack) is pretty light and easy to use.

Penfield Mood Suite: Biofeedback/Audio Visualisation Software Development

One of the first things I really got into in Processing was writing audio visualizers. This arose from an initial experiment into audio waveform rendering, which led to further experimentation with realtime audio visualisation using the techniques discussed by Antony Mattox. At the same time I developed on the techniques explored by Dan Shiffman in Learning Processing. Shiffman’s book explores many areas of text based programming for visual design. However, it doesn’t deal with audio visualisation using FFTs, the topic that I wished to focus on given my interest in multimodal feedback systems. During my research, I have attempted to apply the Mattox code (which, as he suggests, can be used as the foundation of many systems for audio visualisation) to the Shiffman examples, working through the book, using selected exercises to build a variety of audio reactive visual programs.

There is a clear artistic and commercial interest in the use of audio-responsive visual software, evident in my ability to get a number of gigs as a VJ (despite how much I loathe the term). These initially started out at house parties and exhibition after parties, followed by club nights and more commercial jobs. Using these gigs, I attempted to find out what audiences and performers found appealing in such software as well as what they expected from them. I have attempted to apply the findings from my research to my final chosen method of data visualisation. These expectations and appealing elements are discussed below.

The audio data should be the dominant variable in the sketch. Even if other controllable variables are to be used, audio data should be applied to them in some form. The audio spectrum data, which can be retrieved via fft.spectrum[], can easily be applied to any variable: the size of geometric shapes, the increment by which an object moves, or simply the colour data of the drawn image. The spectrum can also be used to affect image tinting, contrast, brightness and gamma levels through the image.adjust library. This idea has been central to my design and development process, attempting to apply this spectrum data to as many variables as possible within the sketch, making the final program as audio responsive as possible. Although much of my visual software includes other control interfaces (mouse, keyboard), these functions are included to offer a degree of user interaction and control. This originates from my own experience using the software in a live capacity, as I wanted to be able to switch between various modes without closing the program. However, these controls are largely used for such a process, with the audio data affecting most elements of the sketch.
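As a minimal sketch of this principle: one FFT band’s energy scaled onto a drawing variable. The scale factor and clamp here are assumptions of mine, not values from the actual sketch:

```java
// Illustrative mapping of a single FFT band's energy onto a drawing
// variable, in the spirit of fft.spectrum[] as described above.
public class SpectrumMap {
    // map a band's energy (roughly 0.0..1.0) to an ellipse diameter in pixels
    static float bandToSize(float energy, float maxSize) {
        float size = energy * maxSize;
        if (size > maxSize) size = maxSize; // clamp loud transients
        if (size < 0) size = 0;             // guard against bad input
        return size;
    }
}
```

In the draw loop, each band of the spectrum would be passed through a mapping like this, so every drawn element responds directly to the audio.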

The visualisation should be highly audio responsive and operate smoothly. The system of audio visualisation, when being used in realtime, has to run smoothly. Although a bit of lag is maybe not so noticeable to the general public, it is very noticeable to musicians and performers. For use alongside live audio, the software must respond quickly for optimum effect. As a result, various labour intensive elements of the sketch should be reduced, such as the number of audio frequency bands that are to be visualised. Alternatively, the software can be run on a more powerful computer with plenty of RAM, allowing more frequency bands to be visualised.

The system should be visually interesting, but still readable. Much of my experience working with this particular software has shown such linear systems as the colour organ to still be popular for audio visualisation. Those systems based on Shiffman’s nested push and pop example have proved popular and highly visually engaging. However, any attempt to analyse the audio data in such a manner is difficult: the audio visualisation is based more in movement than in geometric shapes corresponding to different frequency bands. Similarly, those visuals based in the adjustment of a pre-rendered image are difficult to read, as the audio is not so much visualised as the audio data used to adjust another visual image. In contrast, the familiarity of the colour organ model has proved easier to read, amongst general audiences and musicians alike.

In designing the final visualisation method, I tried to use these findings as well as further user & audience feedback to develop the system. The audio data is applied to a broad number of variables that affect colour, shape size and layout. In order for the visual software to run smoothly and to filter out those frequencies where there is little data for visualisation, I have significantly reduced the number of frequency bands. In terms of the general visual form, I have taken the linear method of audio visualisation modelled on the colour organ from Mattox’s audio visualisation code. I have then applied it to a variety of different methods detailed in Shiffman’s work, such as the wave, and experimented with these forms. The result is a relatively familiar system for audio visualisation that offers multiple ways of visualising the data within this form. As a result, the user is offered plenty of choice in how they wish to visualise the data.

The final visualisation can be seen as highly derivative of the colour organ model. However, this accessibility proved highly popular with test audiences and users, whilst the varieties of data presentation in the linear form allowed for a wide variety of different visual objects. The software went through multiple rewrites and updates before it reached its final form, so there is some cleaning and labelling to be done. I chose to use a variety of organic and inorganic looking functions, based on waves, double helixes and the colour organ. Using the booleans and mapped keys, users can interact with the software, turning various shapes on and off as well as affecting the plotting of the data and the colour.

Joseph Pochciol 2011-09-30 17:26:26

In designing this piece, I looked at a variety of different audio pieces that would be effective when played via the chair. Having built the chair quite some time ago, I was able to integrate it with my own audio playback equipment, allowing me to test the system alongside various pieces of audio-visual media, particularly music, films and computer games. Although I found a number of interesting audio pieces that I could use, there were also a number of issues that arose through this process. The main issue was the highly subjective nature of audio and music taste. The deep bass of dub reggae was highly effective, producing a variety of interesting responses from the shaker. However, were the potential user not a fan of dub reggae, then their enjoyment of the piece would likely be decreased. An alternative setup could include a 3.5mm jack, feeding an audio signal to the chair from the user’s own media player. However, there were also issues with such an approach: being unable to control the quality of the audio signal being played on the system, I would not be able to guarantee that the response would be effective.

The idea of using another composer or producer’s work was also somewhat fraught with problems. Alongside a selection of modern electronic and dance pieces (which utilise large amounts of bass), I looked at using a number of classical compositions. I had looked at using a number of pieces by Beethoven, whose limited hearing was of interest to me: I had wanted to see whether Beethoven’s later compositions (particularly those composed when his deafness was at its height) made particular use of bass tones, as it is likely that he would retain an ability to perceive these low frequencies. The use of these pieces, particularly from Beethoven’s 5th and 9th symphonies, would also arguably make the piece more accessible due to the familiarity of the pieces and their motifs. However, I felt that using the system to play back a piece of music written by someone else was somewhat inappropriate. Although the system is more than capable of reproducing such audio pieces, a number of test users suggested that I should compose and produce my own music for use on the chair. I was eager to do so, as such an approach appealed to my interest in audio composition and production. However, due to the time constraints in the production schedule I felt I should postpone such a production, and not use it for this particular installation.

As interactivity and human-computer communication are central to the piece, I felt that I should in some way enable the user to decide on the signal that is to be played via the system, or at least be able to manipulate that signal. Whilst discussing the project with Adam Parkinson, Adam suggested that I should construct some form of synthesiser for use in conjunction with the system, allowing users to generate and modulate a tone for multimodal playback. I had initially thought of using a variety of low frequency and infrasonic samples in my composition, obtained from various field recordings including engines and large animals, before detuning these samples by various octaves in order to lower the frequency of the audio signal. However, for the purposes of this installation I have chosen to build a simple digital subtractive synthesiser. I chose to use subtractive synthesis based on waveform generation and modulation following discussion of the project with music producer and DJ Mark Lowry, who first taught me music production many years ago. It was suggested that synthesis would provide a ‘purer’ form of sonic energy, particularly when working with infrasonic audio, maximising the clarity of signal feedback.  The synth features include a number of oscillators for the production of a variety of different waveforms (cosine, rectangle, triangle and sawtooth) as well as a filter with adjustable cut-off frequency and resonance. The synth also includes a binaural beat function, with which the user is able to create a binaural/monaural beat effect as well as modulate the frequency of the beating.

This particular setup was chosen for a number of reasons other than those already mentioned. Firstly, the user controlled frequency and waveform amplitude enables the user to effectively use the dials to ‘sculpt’ the waveform, allowing the user to modulate the signal that we will eventually hear, see and feel. Secondly, research into AVE (audio-visual entrainment) has suggested that exposure to binaural beats can have an entraining effect on the user’s biosignals. Given the use of realtime biofeedback monitoring equipment in the piece, these apparent effects of audio/visual/tactile signal should be visible via the biosignal visualizer, allowing for monitoring of potential trimodal entrainment. Thirdly, the pulsing sensation produced using the system was apparently familiar to many test users, resembling the sound of an engine turning over as well as the distinctive LFO produced bass ‘wobble’ that is frequent in a lot of modern electronic music (particularly common in dubstep and drum and bass, pieces from such genres being used in many of the system tests). Finally, the ability of the user to effectively sculpt their own waveform for playback bore some resemblance to the Penfield Mood Organ from Philip K. Dick’s Do Androids Dream of Electric Sheep, after which the piece is named. Using the Mood Organ, characters from Dick’s story are able to dial a particular frequency wave which elicits a particular emotional response. Given my interest in measuring user biosignal response to a particular signal, such a system for signal production and modulation seemed ideal.

In constructing the user interface for the waveform generator, I decided to use a simple, seemingly intuitive interface in the form of the Akai LPD8 midi controller. My experience using this particular interface in the AAMP project suggested that the simplicity of the interface, utilising 8 pads and rotary dials, was appealing to users, encouraging interaction with the system using it as an interface. The 8 dials are used to set the carrier frequencies (via the left and right audio output channels), the frequency of the binaural/monaural beating (determined by adjusting the difference between the two waveform frequencies), the cutoff frequency and resonance of the filter, as well as the amplitude of the 4 different waveforms. The pads were set to also adjust the frequency of the wave, but in a manner more familiar to traditional audio reproduction devices and instruments. The 8 pads, when pressed sequentially, produce an octave scale starting and ending at C. This is to highlight that the system is essentially modelled on a musical instrument, much like Dick’s Mood Organ. The instrument/device also includes a graphical user interface (GUI), providing visual information on the waveform using a number of oscilloscopes and number boxes, as well as a visualisation of the audio signal in a similar manner to that used to visualise the biosignal. The aim of such a system of monitoring is to provide users with two similar graphic visualisations, enabling participants to compare their own biosignal visualisation with the visualisation of the audio signal and so observe any entrainment that should occur with relative ease.
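The two key mappings above can be sketched in Java: the beat frequency is simply the difference between the left and right carriers, and the pads step through an octave. The equal-tempered C major scale is my assumption about the "octave scale starting and ending at C", not a detail from the patch:

```java
// Sketch of the binaural beat relationship and pad scale described above.
public class BeatSynth {
    // beat rate heard/felt = difference between the two carrier frequencies
    static double beatFrequency(double leftHz, double rightHz) {
        return Math.abs(leftHz - rightHz);
    }

    // semitone offsets of a C major scale across one octave (assumed mapping)
    static final int[] SCALE = {0, 2, 4, 5, 7, 9, 11, 12};

    // frequency of pad n (0..7), relative to a base C (e.g. middle C, 261.63 Hz)
    static double padFrequency(int pad, double baseC) {
        return baseC * Math.pow(2.0, SCALE[pad] / 12.0);
    }
}
```

So detuning one channel's dial away from the other directly sets the beating rate, while the pads snap the carrier to scale degrees, pad 7 landing exactly one octave above pad 0.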

Penfield Mood Suite: Fitting the shaker to the chair

To fit the shaker to the chair, I went about designing a fixing bracket. At first I had thought about modifying the original bracket that came supplied with the chair. However, fitting this particular bracket proved problematic. Firstly, sections of it would have to be removed, which would require significant cutting of the bracket at rather uncomfortable angles. Secondly, to fit the bracket to the chair I would have to screw or stud the bracket into the wooden chair, reducing the structural integrity of the chair. As this fitting would later support the vibrating shaker, this integrity would be further reduced as time went by. Eventually the bracket would likely shake free from the chair, and would not be able to be refitted due to the damage to the wood.

I spent some time discussing the project with (Steve Rowland?) at the Fine Art metal workshop, and with his help designed an alternative system with which to fit the shaker to the chair. This involved two identically sized, square steel plates with corresponding holes. One of these plates was then drilled with further holes corresponding to the screw fittings on the Buttkicker. These plates then fit together using 4 large bolts, sandwiching the centre of the cone. The shaker was then fitted to the outer plate via the previously mentioned holes. The system effectively creates an audio-responsive vibrating pad at the cone’s apex. When the cone is suspended from the ground via the chair’s base, the shaker is then free to vibrate, with these vibrations being conducted via the chair’s cushion to the user. The system is quite effective for listening to music, effectively serving as a silent subwoofer. The unit makes very little noise, save for the resonance of the bamboo: however, the shaking can be felt significantly through the floor when not sat in the chair. This can have the effect of conducting the audio signal to whatever should be in close proximity, which can result in some interesting audio effects.

Low frequency tones seem to be best conducted at the apex, with the sensations moving outwards from the apex relative to the pitch of the tone. Should the user extend their hands and hold on to the outer rings, or lean back bringing their neck or skull into contact with the outer ring, then the sensation is even more pronounced, with many test users claiming they could feel the sensations more clearly and in some cases throughout their entire body. This was an interesting effect, given my interest in bone conduction for audio reproduction. The chair could be seen as working similarly to the Baha implant, an ear implant for the deaf that operates on a similar principle. Using the implant, audio signals are conducted directly via the implant into the bone of the skull, effectively bypassing the traditional system for audio perception. The cochlear implant seems to function in a similar manner, with an electronic audio signal being conducted via the tissue in the ear rather than the traditional audio transduction process in human perception. These ideas of mechanical human augmentation and methods of effectively cross-wiring, hacking and short circuiting traditional methods of sensory perception were very interesting given my interest in transhumanism.

The chair can be seen as a caricature of the Baha implant, subwoofer or loudspeaker, and with its complementary lighting system based on a TIP31 transistor, provides two sensory modes of feedback: tactile and visual. An electrical audio signal is transduced into kinetic energy via the power amp and bass shaker, whilst also modulating an electrical signal via the transistor affecting the power sent to the LEDs. As a result, the lights can be seen to pulse relative to the beating of the binaural/monaural wave. I had intended for the signal to respond in such a manner, given my reading into flicker stimulation and early brainwave entrainment experiments. This direct relationship between sight, hearing (via the headphones or resonance of the parts of the chair) and touch creates a highly immersive environment, intended to dissociate the user from other stimuli and encourage them to concentrate on the signal played via the system.


Behind the scenes…

Breathing pipe