— Edmund Nesveda

Archive
June 2015 monthly archive

Further testing with Kinect and PD. I used two participants, with user tracking via pix_openni and the expr object to control a synth. My approach is to start simple, test thoroughly, then try more complicated setups. Unfortunately, due to the narrow space, movement was constricted, especially on the x axis; for future tests I will look for a larger space and maybe three participants.
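The mapping itself is just what the expr object does in the patch: rescale a joint coordinate into a synth parameter range. A minimal sketch of the same idea outside Pd, in plain Python (the coordinate and frequency ranges here are assumptions for illustration, not values from my patch):

# Sketch of the expr-style mapping: rescale a tracked joint coordinate
# into a synth control range. The input range (-1.0..1.0 m) and the
# output range (100..1000 Hz) are illustrative assumptions.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear rescaling, clamped to the output range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# e.g. each participant's right-hand x position -> oscillator frequency
print(scale(0.24, -1.0, 1.0, 100.0, 1000.0))
print(scale(0.50, -1.0, 1.0, 100.0, 1000.0))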


 

OpenNI supports the output of 24 different joints. The NiTE middleware skeleton tracking supports just 15 joints. The skeleton output of pix_openni will therefore contain additional joints with duplicated coordinates.

Here’s a list of all the joints available for tracking:

1 /skeleton/joint/head 5 0.376254 0.158162 1.31012 1
2 /skeleton/joint/neck 5 0.379469 0.300094 1.35346 1
3 /skeleton/joint/torso 5 0.381939 0.416454 1.3511 1
4 /skeleton/joint/waist 5 0.381939 0.416454 1.3511 0 (duplicate! not valid)
5 /skeleton/joint/l_collar 5 0.381939 0.416454 1.3511 0
6 /skeleton/joint/l_shoulder 5 0.442317 0.298091 1.39435 1
7 /skeleton/joint/l_elbow 5 0.478067 0.420739 1.47322 1
8 /skeleton/joint/l_wrist 5 0.478067 0.420739 1.47322 0 (duplicate! not valid)
9 /skeleton/joint/l_hand 5 0.502907 0.580862 1.37264 1
10 /skeleton/joint/l_fingertip 5 0.502907 0.580862 1.37264 0 (duplicate! not valid)
11 /skeleton/joint/r_collar 5 0.502907 0.580862 1.37264 0 (duplicate! not valid)
12 /skeleton/joint/r_shoulder 5 0.316621 0.302097 1.31258 1
13 /skeleton/joint/r_elbow 5 0.291915 0.431105 1.37859 1
14 /skeleton/joint/r_wrist 5 0.291915 0.431105 1.37859 0 (duplicate! not valid)
15 /skeleton/joint/r_hand 5 0.243468 0.58301 1.26445 1
16 /skeleton/joint/r_fingertip 5 0.243468 0.58301 1.26445 0 (duplicate! not valid)
17 /skeleton/joint/l_hip 5 0.424873 0.531524 1.37506 1
18 /skeleton/joint/l_knee 5 0.431999 0.783388 1.37493 1
19 /skeleton/joint/l_ankle 5 0.431999 0.783388 1.37493 0 (duplicate! not valid)
20 /skeleton/joint/l_foot 5 0.425306 0.991183 1.59826 1
21 /skeleton/joint/r_hip 5 0.343947 0.534104 1.32241 1
22 /skeleton/joint/r_knee 5 0.3335 0.777346 1.32825 1
23 /skeleton/joint/r_ankle 5 0.3335 0.777346 1.32825 0 (duplicate! not valid)
24 /skeleton/joint/r_foot 5 0.348461 0.954826 1.55574 1
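Since only 15 of the 24 joints carry real NiTE data, the duplicated entries can be filtered out by checking the trailing validity flag. A small sketch of that filtering step (plain Python rather than Pd, working on the message format printed above):

# Filter the pix_openni skeleton output down to the valid NiTE joints.
# Each line has the form "<n> /skeleton/joint/<name> <id> <x> <y> <z> <valid>",
# where a trailing 0 marks a duplicated (not tracked) joint.

sample = """\
1 /skeleton/joint/head 5 0.376254 0.158162 1.31012 1
4 /skeleton/joint/waist 5 0.381939 0.416454 1.3511 0
6 /skeleton/joint/l_shoulder 5 0.442317 0.298091 1.39435 1"""

joints = {}
for line in sample.splitlines():
    _, address, _id, x, y, z, valid = line.split()
    if valid == "1":                       # keep only tracked joints
        name = address.rsplit("/", 1)[-1]  # e.g. "head"
        joints[name] = (float(x), float(y), float(z))

print(joints)  # {'head': (...), 'l_shoulder': (...)}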

 

M. Kronlachner, “The Kinect distance sensor as human-machine-interface in audio-visual art projects”, project report, Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz, Austria, January 2013.


Building on previous research, I found this article on posture tracking and emotion recognition using a Kinect depth camera. This may solve the problem identified previously: it eliminates the need for a direct connection between user and computer through a pulse or GSR sensor, allowing for a more natural interaction.

“Intelligent User Interfaces can benefit from having knowledge on the user’s emotion. However, current implementations to detect affective states, are often constraining the user’s freedom of movement by instrumenting her with sensors. This prevents affective computing from being deployed in naturalistic and ubiquitous computing contexts. ”

“In this paper, we present a novel system called mASqUE, which uses a set of association rules to infer someone’s affective state from their body postures. This is done without any user instrumentation and using off-the-shelf and non-expensive commodity hardware: a depth camera tracks the body posture of the users and their postures are also used as an indicator of their openness. By combining the posture information with physiological sensors measurements we were able to mine a set of association rules relating postures to affective states. ”
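The rule-based part is easy to picture: each mined rule maps a combination of posture descriptors to an affective state with some confidence. A purely hypothetical sketch of that lookup in Python (the posture labels, rules and confidence values below are invented for illustration, not taken from the paper):

# Hypothetical illustration of rule-based affect inference from postures.
# The descriptors, rules and confidences are made up for this sketch;
# the actual association rules in mASqUE are mined from recorded data.

RULES = [
    ({"leaning_forward", "arms_open"}, ("engaged", 0.8)),
    ({"leaning_back", "arms_crossed"}, ("defensive", 0.7)),
    ({"slumped", "head_down"}, ("bored", 0.6)),
]

def infer_affect(observed_postures):
    """Return the best-matching (state, confidence) for a set of posture labels."""
    matches = [outcome for antecedent, outcome in RULES
               if antecedent <= observed_postures]
    return max(matches, key=lambda m: m[1], default=("unknown", 0.0))

print(infer_affect({"leaning_forward", "arms_open", "head_up"}))  # ('engaged', 0.8)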

 

 

 

Figure: posture combinations (from Tan et al., 2013)

 

“An analysis of the user evaluation showed that mASqUE is suitable for deployment in ubiquitous computing environments as its rich, extensive range of emotion representations (i.e. affective states) is able to inform intelligent user interfaces about the user’s emotion. This is especially important for evaluating user experience in ubiquitous computing environments because the spontaneous affective response of the user can be determined during the process of interaction in real-time, not the outcome of verbal conversation. ”

Chiew Seng Sean Tan, Johannes Schöning, Kris Luyten, and Karin Coninx. 2013. Informing intelligent user interfaces by inferring affective states from body postures in ubiquitous computing environments. In Proceedings of the 2013 international conference on Intelligent user interfaces (IUI ’13). ACM, New York, NY, USA, 235-246. DOI=10.1145/2449396.2449427 http://doi.acm.org/10.1145/2449396.2449427

 


Multi-user interaction

Still with Tina Gonsalves and the “Feel” series, it’s interesting to see how it evolved over time, starting with a single-user interactive installation and ending with a multi-user collaborative interactive installation. An interesting trajectory, worth considering in my dissertation research and in preparation for an interaction workshop.

Tina Gonsalves, The “Feel” series: An Overview:

The “Feel” series (2005-2007) are an interconnected progression of short films and interactive sketches aiming to sense, translate and provoke the psycho-physiology of the audience.

Synopsis of “Feel Inside”: “FEEL: INSIDE” is a psycho-physiologically responsive video installation synthesizing art, neuroscience and technology. The project explores new, more embodied languages of active and emotional communication, investigating the inter-relationship of the internal body and the external world. The audience enters a darkened room. The audience is presented with a large video projection of the artist’s face. A camera senses the movement of the audience. The emotional expression of the artist’s face is reflective of the audience’s emotional state, appraised in a general way by audience movement. If the audience is calm, her face becomes still and meditative. Alternatively, anxious movement of the audience is mimicked in the anxiousness of the artist’s face. Over time, continued anxiousness leads to the artist crying, and the audience is confronted with a private and intimate moment usually only privy to close friends and family. The audience realizes they have caused the upset, but then are asked to work out how to cure it.

Building of “Feel Inside”: With the second prototype, a focus was to move beyond the viewer being dressed with sensors and directed to sit in a chair, as they were in “Feel Trace”. To capture a more ‘naturalistic’ interaction scenario, I wanted the audience to move freely within the gallery space. After the potent images of “Feel Trace” I aimed to develop softer, more emotional and poetic audiovisual content, hoping this would elicit more empathy from the viewer/s.

The use of movement to trigger video in interactive design has been common in the past. As a result of this, today, at interactive exhibitions, you can often see participants waving their arms around wildly (in non-naturalistic ways) in order to interact with the work. However, the interaction design of “Feel Inside” asks the viewer to be still. I was suggesting that by being still, the viewer may develop a sense of ‘presence’ both to themselves, and to the environment. Presence has been defined as the sense of being present in a particular environment,[i] often divided into ‘subjective presence’ and ‘objective presence.’[ii] With my experience of meditation and yoga, stillness allowed ‘a sense of presence’ to myself and to the world around me. It was in this state that I felt more ‘empathic’ with others. I wanted to create an interaction design that amplified this feeling of ‘empathy’ I felt when still. When discussing this idea with Critchley, he agreed that “to some extent, attention on bodily processes requires ‘psychomotor’ disengagement from the environment. There is a fair amount of data relating to respiratory control in meditation and yoga, for example, impacting on heart rate variability. Simplistically, the autonomic nervous system is partitioned broadly into sympathetic (fight and flight) and parasympathetic (rest and digest) bodily control systems, the latter becoming prominent during stillness”[iii]. Does this mean in order to become more aware of our own body we needed to switch off from the environment?

Research has suggested that one does not often think about their level of presence in the real world. They feel it[iv]. With the video footage used in “Feel Inside”, I sought to prompt a sense of empathy and catharsis in the viewer. The video was shot three years before, during a residency at the Banff Centre for New Media. It was taken after an exhausting week-long shoot. My collaborator at the time began asking me personal questions while my face looked into the camera. Over a period of 10 minutes, I started off laughing and ended up crying.

To create the sensing mode, I used a video camera to monitor the movement of the audience. After the failures of using Flash with “Feel Trace”, the software was written using Max MSP/Jitter[v].
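A note for my own tests: this kind of camera-based movement sensing comes down to frame differencing. A rough sketch of the idea (Python with OpenCV as a stand-in, not the Max/Jitter patch the piece actually used; the blur size and frame count are arbitrary choices):

# Rough sketch of camera-based movement sensing via frame differencing.
import cv2

cap = cv2.VideoCapture(0)   # default webcam
prev = None
for _ in range(300):        # a few hundred frames, just for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if prev is not None:
        diff = cv2.absdiff(prev, gray)
        movement = float(diff.mean())   # crude "amount of audience movement"
        # near 0 -> stillness (calm face video); larger -> anxious movement
        print(round(movement, 2))
    prev = gray
cap.release()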

Conclusion of “Feel Inside”: The piece was tested four times, mostly at the Wellcome Department of Neuroimaging (WDIN) at UCL (shared with the neuroscientists), and once at the Beta_Space at the Powerhouse Museum in Sydney. Parts of the work were successful.

Were the viewers taken on a journey or controlling the journey? “…there was an interaction which grabbed hold of you as you became more engaged – mainly because there seemed to be an obvious response to what one was doing in relation to the installation. I felt more in control of what was going on than I had when I experienced Feel Trace”.[vi] Initial audience responses showed that using movement as an interaction method led to a stronger sense of engagement than what was achieved in “Feel Trace”.

It appeared that the interactive model, promoting notions of stillness, allowed people to resonate more deeply with the images and sound of the installation. Most reported that they found the interaction more ‘poetic’. However, both spaces provided a difficult context to evaluate the audience experience of the work, as the piece required no movement in order to be interactive. When showing the work in the Powerhouse Museum in Sydney, the audience would experience the work in the context of the other more interactive and ‘louder’ works in the space. Some participants slowly discovered the interaction and narrative, giving at least five minutes to the work. Others madly waved at the work, anticipating a stronger reaction. A percentage of the audience walked out of the exhibition space within thirty seconds, not willing to spend more time investigating the work.

Some participants talked about the vulnerability felt by watching the crying portrait video. This was a success, as it seems (through self-reporting) that the video did elicit emotional reactions in the user. On reflection, the narrative of “Feel_Inside” was too complicated for the basic interaction design. Also, if the project were to be exhibited again, the size of the display would be changed in order to make the work ‘feel’ more intimate.

The coding for the piece was created in Max MSP/Jitter with engineer David Muth. This choice of software proved robust, with very few crashes. Max MSP worked superbly with video and limited real-time effects. Using Max MSP was considered a definite success.


Tina Gonsalves. 2010. Empathy and interactivity: Creating emotionally empathic circuits between audiences and interactive arts. Comput. Entertain. 8, 1, Article 3 (November 2010), 14 pages. DOI=10.1145/1857940.1857943 http://doi.acm.org/10.1145/1857940.1857943


Single user interaction

I looked at Tina Gonsalves for inspiration when I started experimenting with the pulse sensor, as a way to trigger interaction using bodily responses rather than conscious choices. It’s powerful work but feels personal and private, maybe not the kind of interaction people would like in large shared spaces. Still, worth considering for the dissertation, maybe in a rethought space.

Tina Gonsalves, The “Feel” series: An Overview:

The “Feel” series (2005-2007) are an interconnected progression of short films and interactive sketches aiming to sense, translate and provoke the psycho-physiology of the audience.

Synopsis of “Feel Perspire”: “Feel:Perspire” is a psycho-physiologically responsive video installation, using sweat to trigger footage. The participant’s sweat is monitored using a bio-sensor attached to their finger. When the participant becomes calm, images of clouds verge into abstraction, becoming quite blurry and reminiscent of Rothko’s paintings. If the participant becomes more nervous, the clouds grow more stormy and violent, enveloping the participant in their fury.

Building of “Feel Perspire”: From the feedback gathered, I came to the conclusion that “Feel Insula” achieved the naturalistic interaction and engaging content I was searching for. However, I felt the interaction mode of reading movement was too generic and didn’t allow for the more ‘personal’ sensing mode I initially envisaged. This led to “Feel Perspire”, a psycho-physiologically responsive video installation, using galvanic skin response (sweat) to trigger footage.

We used the sensor mode of the Galvanic Skin Response (GSR) to trigger video sequences. GSR provides a continuous and immediate response, giving participants a sense of control, creating a biofeedback loop. Biofeedback is a technique in which people are trained to improve their health by learning to control certain internal bodily processes. While using the technologies, there is a shift of the subconscious experiences of heart rate, breathing and nervous system activity to a level of cognitive awareness. Therefore, through reflection, participants slowly learn to identify, sense and eventually coordinate the physiological behaviours being monitored.

I developed and tested a few video databases. The initial video footage for the project was taken from a helicopter. I shot out the window of the moving craft. While shooting, I focused on a horizon. Originally, I wanted to use this in a simple narrative. For example, if the participant’s GSR reading was calm, the footage would continue to fly out to the horizon. If the GSR reading rose, the footage would crash into the sea. After testing, this scenario was deemed to be too potent for the viewer, often enticing them to become more nervous than relaxed.

I finally arrived at time-lapsed cloud footage. If the participants relaxed, the footage would blur and become ‘Rothkoesque’. If the participant became stressed, storms would roll in, enveloping the participant in their fury.

At the core of Feel_Perspire was a Max/MSP patch that controlled the computation for the system. We ran the patch on a G5 Apple Macintosh computer under Mac OS X. The output of the GSR was read into the patch through the analog audio input. This signal was used to control the experience of the participants, directly triggering the video narrative. The programming was initially created by Max MSP expert, David Muth. It was difficult to attain a smoothness of video with the constant triggering from the GSR data. The video did not work with programming, leading to a conundrum of how to take the project further.
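For my own prototype I would read a sensor the same way: take the raw signal, smooth it into a slow envelope, and let the envelope pick the video state. A sketch of that signal path (Python with the sounddevice library as a stand-in for the Max/MSP patch; the smoothing factor and threshold are assumptions, not values from the piece):

# Sketch of a GSR-style voltage read through the audio input, smoothed
# into a slow envelope that selects the video state.
import numpy as np
import sounddevice as sd

SMOOTH = 0.995          # one-pole smoothing factor (assumed)
envelope = 0.0

def callback(indata, frames, time, status):
    global envelope
    level = float(np.abs(indata[:, 0]).mean())     # crude level of the incoming signal
    envelope = SMOOTH * envelope + (1 - SMOOTH) * level
    state = "storm" if envelope > 0.1 else "calm"  # threshold is illustrative
    print(round(envelope, 4), state)

with sd.InputStream(channels=1, samplerate=44100, callback=callback):
    sd.sleep(10_000)    # run for ten seconds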

After three months’ reflection, I began working with programmer Evan Raskob, a Max MSP expert based in London. I altered the concept, working with some stock cloud footage, real-time effects and multiple video channels. I wanted to investigate the use of hue and saturation as an expressive modality, so it was essential that these effects could be achieved as fluid live effects. Itten, in his book The Art of Colour, expresses how the mixing of pure color with either white, black or gray to form gradients gives ‘expressive’ power. From the perceptual literature, Valdez and Mehrabian, in their book Effects of Color on Emotions, found that a significant amount of the variance in subjects’ emotional response to color came not from the color’s hue, but from the brightness and saturation levels of the color.[i]
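That finding suggests a simple control law for my own experiments: keep the hue of the cloud footage fixed and let arousal drive saturation and brightness. A sketch of such a mapping (the specific curve is my own assumption, not the patch Raskob built):

# Illustrative arousal -> colour mapping in HSV terms: hue stays fixed,
# saturation and brightness scale with arousal (curve chosen arbitrarily).

def colour_for_arousal(arousal, hue=210.0):
    """arousal in 0..1 -> (hue, saturation, value) in conventional HSV ranges."""
    a = max(0.0, min(1.0, arousal))
    saturation = 0.2 + 0.8 * a      # calm = washed out, aroused = saturated
    value = 0.9 - 0.5 * a           # calm = bright haze, aroused = dark storm
    return (hue, saturation, value)

print(colour_for_arousal(0.1))   # soft, bright, Rothko-like
print(colour_for_arousal(0.9))   # dark, saturated, stormy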

Enabling real-time speed, colour and scale effects was an important step in the research. This created a very smooth and responsive video narrative when triggered by GSR.

Conclusion of “Feel Perspire”: We tested the work three times. The initial consumer GSR monitor we implemented in “Feel Perspire” proved troublesome, often dropping out and not giving sensitive enough readings, creating frustration in the participant more than anything else. The following GSR reader (usually used for experiments in laboratory settings) was very large, old and bulky, though extremely sensitive to subtle changes in sweat.

The sensor was attached to the user’s fingers, and dressing the sensor was quite a delicate process. This also hindered the natural interaction scenario, as the user had to remain still and sit in an assigned chair. Furthermore, the GSR monitor needed to be tuned in manually 30 seconds after the participant was hooked up to the monitor, making it difficult to work with in public exhibition contexts. As with “Feel_Trace”, the act of attaching the sensor catalysed an arousal in the body. Adjusting the interface on the monitor was difficult and time consuming. Although the readings proved sensitive, the machine only took a reading every three seconds. Another limitation of using GSR is that it primarily monitors arousal, and does not define emotions. Movement of the body and the variability of GSR data sets across multiple participants provided much difficulty. To tackle this, the GSR monitor needed to be calibrated for each participant.
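Worth noting for my own GSR tests: the per-participant calibration described here amounts to normalising each person’s raw range before mapping it to the visuals. A minimal sketch of that step (the sample values are made up):

# Minimal per-participant GSR normalisation: track the running min/max of the
# raw readings (one sample every ~3 s, as described above) and rescale to 0..1.

class GSRCalibrator:
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def update(self, raw):
        """Feed one raw reading; return it normalised to this participant's range."""
        self.lo = min(self.lo, raw)
        self.hi = max(self.hi, raw)
        if self.hi == self.lo:
            return 0.0
        return (raw - self.lo) / (self.hi - self.lo)

cal = GSRCalibrator()
for reading in [4.1, 4.3, 5.0, 6.2, 5.5]:   # made-up raw values
    print(round(cal.update(reading), 2))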

The simple imagery, using real-time effects that responded to the GSR reading, provided a fluid biofeedback interaction scenario. Using GSR as a sensor seemed to result in engagement: “…I wanted to do it for a long time. Feeding back physiological signal on a big screen is really great – one feels good involvement in biofeedback process. This is achievable with a small computer screen but the difference may be like watching movie on the TV or in the cinema”. Initial observations demonstrate that participants felt that the video work was analogous to their psychophysiological state, giving them a sense of control. Past research has shown that when the participant has a modality of control in the environment they experience a greater sense of presence.[ii] “The most unsettling part of the artwork is that after some time, I started to feel like I could control the content of the feedback, while remaining unable to explain how. Falling into the skies accompanied by stormy winds or quietly floating with resting noise, the artwork transported me throughout neatly intermingled settings related to my feelings”. The interaction design assumed that when the GSR level rose, this meant you were ‘stressed’, triggering video of storm clouds and loud noise to fill the exhibition space. When discussing the project with Picard, she stated a higher GSR reading could mean you were stressed or happy, and I had not created a narrative that catered for happiness. The limited sensing modality didn’t allow us to differentiate the data to denote an emotional feeling, only ‘aroused’ or ‘calm’.


Tina Gonsalves. 2010. Empathy and interactivity: Creating emotionally empathic circuits between audiences and interactive arts. Comput. Entertain. 8, 1, Article 3 (November 2010), 14 pages. DOI=10.1145/1857940.1857943 http://doi.acm.org/10.1145/1857940.1857943


Starting work on the final project, right now in the testing stage with Kinect and PD. There are two methods for getting tracking data into PD: Synapse with OSC messages, and the pix_openni PD object developed by Matthias Kronlachner.

Conclusions so far: Synapse needs a calibration pose for skeleton tracking, while pix_openni works without it, more or less accurately. I’m inclined towards pix_openni as it’s suited to a more natural interaction. An interaction workshop is needed to research the most natural way of interacting with the work.
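Either route ends up as messages arriving in the patch; outside Pd, the quickest sanity check is a generic OSC listener. A sketch using the python-osc library (the port number is a placeholder, not Synapse’s or pix_openni’s documented default):

# Generic OSC listener for testing joint messages outside Pd.
# The port (9000) is a placeholder for whichever tool is actually sending.
from pythonosc import dispatcher, osc_server

def on_message(address, *args):
    # e.g. address = "/skeleton/joint/r_hand", args = (x, y, z, ...)
    print(address, args)

d = dispatcher.Dispatcher()
d.set_default_handler(on_message)   # print every incoming message

server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 9000), d)
print("listening on port 9000 ...")
server.serve_forever()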

