
Geisler – Dream of Beauty

In the digital artwork Dream of Beauty 2.2: Touch Me (1999) by Kirsten Geisler, a small framed portrait of a beautiful, blemish-free woman stares directly back at the spectator. The spectator is invited to touch the portrait, both by the title of the piece and by the accompanying labels, and so to interact physically with the Beauty. This is an act not usually associated with the gallery space, and so is intriguing to the user. They intuitively understand how a touch screen works, having been exposed to them in everyday life, but perhaps do not know the technological process and components behind it. Heidegger explains this state of transparency by saying that “the World of tools is an invisible realm from which the visible structure of the world emerges” [Heidegger, 1978: p69]. Depending on where the DFI is touched, ‘she’ reacts with varying emotions: laughing, weeping, or even blowing a kiss when touched on the lips. The spectator has control of this virtual woman during the interaction and can manipulate ‘her’ reaction as they wish. Because Dream of Beauty 2.2 responds instantly, the spectator knows that their intervention is what caused the reaction on the screen; the spectator has power over the narrative of the artwork. Geisler’s work adheres to the theory that “computers can pretend to be intelligent only by tricking us into using a very small part of who we are when we communicate with them” [Manovich, 2001: p34]. Dream of Beauty can only react with predetermined gestures; we cannot change the gestures, merely play them back in a finite number of combinations. Manovich goes on to describe this type of interactivity as “branching interactivity”: the user triggers a predetermined set of actions but cannot change the actions themselves. “The artist is seen to create an audience activated choosing mechanism” [Rokeby, 1995: p136], giving the spectator a sense of control (a ‘mirror’, in Rokeby’s sense), as the result is due to their intervention.


Tiffany Holmes – Nosce te Ipsum


Tiffany Holmes’ Nosce Te Ipsum (1999) utilises the spectator’s movement towards a screen as a tool. The screen depicts a collage and, as the spectator approaches, layers of the collage are peeled away. When the spectator reaches a predetermined distance, all the layers are peeled back to reveal the spectator’s image at the base of the collage. This embodies the concept that the spectator is integral to the production of the artwork and is, indeed, the first piece of the puzzle. “The (new media) artist now attempts to construct an environment, a system of communication and production, a collective event that implies its recipients, transforms interpreters into actors, enables interpretation to enter the loop with collective action…it places us within a creative cycle, a living environment of which we are already co-authors” [Lévy, 1997: p123]. Holmes deliberately leaves the work ‘incomplete’: it is missing the initial image needed to create the collage. The piece invites the spectator to walk towards it, and this movement physically demonstrates their contribution to the artwork, as collaborator, which is rewarded by the piece itself mirroring the spectator. Interactors themselves become referents of the work. “The works are akin to portraits” [Rokeby, 1995: p153].

Sour Mirror Music Video

Another great effort at creating an individual experience of watching a music video: Sour’s Mirror, from Japan, uses your Twitter and Facebook accounts to create the user experience!

Facial Expressions: Inherent or Cultural?

Relativists – facial expressions are akin to language, learned within each culture.

Darwin (1872) – facial expressions of emotion are inherent. Certain facial movements were acquired to serve a biologically adaptive function; their association with emotion has over time become innate. Darwin states that there is a difference between facial expressions of emotion, which are innate and universal, and facial gestures, which are learned and therefore culturally variable.

Tomkins (1962) – facial/emotional expressions are innate, but variations occur between cultures.

LaBarre (1947) – “There is no ‘natural’ language of emotional gesture”.

Birdwhistell (1963) – central claim that facial and body behaviour is a language, with the same types of units and levels of organisation as a spoken language.

Ekman (1968) – broadly agrees with Darwin. He proposes two sets of rules for controlling facial expression: one universal, the other culturally variable.


With the Humanities Project we are proposing to take text from Twitter (language) and ascertain its mood or emotion. This emotion will then drive pre-recorded facial expressions that follow the theorised moulds above, allowing someone from any culture to read the emotion that has been scraped from the internet.
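As a first sketch of that pipeline, a naive keyword-based classifier in Processing could map a tweet to one of the basic emotions and pick the matching pre-recorded clip. The word lists and clip filenames below are purely hypothetical placeholders, not part of the project as built:

// Naive keyword matching: score a tweet against small word lists
// and return the pre-recorded expression clip to play.
String[] joyWords = { "love", "great", "happy", "awesome", ":)" };
String[] sadWords = { "sad", "miss", "lonely", "cry", ":(" };

String expressionFor(String tweet) {
  String t = tweet.toLowerCase();
  int joy = 0, sadness = 0;
  for (String w : joyWords) if (t.contains(w)) joy++;
  for (String w : sadWords) if (t.contains(w)) sadness++;
  if (joy > sadness) return "joy.mov";      // smiling clip
  if (sadness > joy) return "sadness.mov";  // weeping clip
  return "neutral.mov";                     // resting face
}

void setup() {
  println(expressionFor("I miss home and I'm feeling lonely"));  // sadness.mov
}

A real version would need proper sentiment analysis, but the mapping from scraped text to a named expression clip stays the same.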


Ekman

A diagram from Ekman looking at the way a person’s facial expression develops with environmental/cultural variability. Ekman agrees with Darwin, stating that there are two rules for controlling facial expression: one universal, the other culturally different. Ekman’s research goes on to identify key basic facial expressions that are biologically universal to all humans.

Emblems

Looking into Paul Ekman (Adrian’s suggestion). Perfect research for developing the Humanities Project, and it raises many interesting points on body language and facial expression. It supports our current work on slowing down the emotional responses of participants as they think about different emotions, drawing upon their previous experiences: the participant brings their response to the piece, and the viewer can then make their own reading of it. Currently working on grading all the footage; as can be seen from the screenshot, some shots still need work – the TM700 likes to put a green cast into the shadows!

GS Video

GSVideo

A library by Andres Colubri for the Processing programming environment. Last update: 05/16/2011.

GSVideo is a library that offers video playback, capture and recording through the GStreamer multimedia framework. It follows the API of the built-in video library, and it also lets you create custom GStreamer pipelines inside Processing.
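Because it mirrors the built-in Movie API, a minimal playback sketch looks much like the stock video examples; the movie filename here is a hypothetical file in the sketch’s data folder:

import codeanddesign.gsvideo.*;

GSMovie movie;

void setup() {
  size(640, 480);
  movie = new GSMovie(this, "day.mov");  // hypothetical file in data/
  movie.loop();
}

// called by the library whenever a new frame is available
void movieEvent(GSMovie m) {
  m.read();
}

void draw() {
  image(movie, 0, 0, width, height);
}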


Much more stable than Processing’s built-in library. Cheers, Adrian.

David Rokeby


Gathering and Sorting Daemon….

Holtin Glance

Really nice use of servo motors timed with video footage. I like this idea of capturing that moment of uncertainty: is the face looking at me? It would be really nice to incorporate the spectator into something like this, making it a three-way system. What if the DFIs started to glance at the spectator, or reacted to their movements?


Max Fragmentation Reflection

I began this project by looking into the concept of fragmentation. From this I thought I would look into how to redraw fine art “masters” images, still lifes in particular. This led me to look at counter, pack, and getcell/setcell objects. After unpacking the numbers I thought I could play about with the order of the pixels, rearranging them or producing music (noise). After a bit of playing around my idea evolved. The sketch below shows how I was unpacking pixel data from an image.
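The patch itself isn’t reproduced here, but as a rough analogue of the same idea in Processing – unpack an image’s pixels, then put them back in a different order – something like this works (the still-life filename is hypothetical):

// Unpack an image's pixels and redraw them sorted by brightness,
// destroying the image while keeping every one of its fragments.
PImage src;

void setup() {
  size(640, 480);
  src = loadImage("stilllife.jpg");  // hypothetical "masters" still life
  src.resize(width, height);
  src.loadPixels();

  // box the pixel values so they can be sorted with a comparator
  Integer[] px = new Integer[src.pixels.length];
  for (int i = 0; i < px.length; i++) px[i] = src.pixels[i];
  java.util.Arrays.sort(px, new java.util.Comparator<Integer>() {
    public int compare(Integer a, Integer b) {
      return Float.compare(brightness(a), brightness(b));
    }
  });

  // write the reordered pixels straight to the display window
  loadPixels();
  for (int i = 0; i < pixels.length; i++) pixels[i] = px[i];
  updatePixels();
}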


As the idea evolved, I wanted to capture some sort of image or movie that would let me fragment reality, playing with the concepts of time and place. This coincided with a visit to Skye, an island, or ‘fragment’ if you will, off the coast of Scotland. Looking out of the window there, I thought about how a day passes unnoticed unless one rests in the same location; if one stays still for long enough, one can notice the subtle changes and evolution of a day. A nice way to capture this rotation is a timelapse: a video made out of many still images (or fragments) that are placed together to produce a representation of that day, condensing the day into fragments strung together with a sense of speeded-up animation.
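The assembly step is simple enough to sketch in Processing. Assuming, purely for illustration, a data folder of 480 numbered stills shot one per minute over eight hours, playing them at 24 fps condenses the day into 20 seconds:

// Play back numbered stills (day-0000.jpg ... day-0479.jpg)
// as a timelapse: 480 frames at 24 fps = a 20-second day.
int numFrames = 480;
int current = 0;

void setup() {
  size(640, 480);
  frameRate(24);
}

void draw() {
  PImage still = loadImage("day-" + nf(current, 4) + ".jpg");
  image(still, 0, 0, width, height);
  current = (current + 1) % numFrames;
}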

After careful consideration, the window I was staring out of also came to play a key part, as it shelters my vision of the entire vista, showing me only a framed version of the world, like a racehorse wearing blinkers to shield it from its surroundings. The window frame allows only a small representation of the entire outside world: a glimpse of the external.

This got me thinking about how I could use my new-found knowledge of Max to unpack a timelapse movie and produce further fragmentation within the context of the timelapse, rearranging it via RGB values or luma, or even letting it play via the ambient noise picked up by the microphone. If I could implement this I would hit my goal of not only playing about with fragmenting time, but also allowing the patch/user to take the fragmentation further.

The patch above shows how the amplitude of the mic controls the playback of the video. I’ve also included a drop box so that other files can be added, giving a choice of footage. This was one of my starting points for further fragmenting the representation of a timelapse.
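The Max patch isn’t shown here, but the same behaviour can be sketched in Processing using the Minim library for the mic level and GSVideo for playback, assuming GSMovie mirrors the built-in Movie API’s speed() method (the filename is again hypothetical):

import ddf.minim.*;
import codeanddesign.gsvideo.*;

Minim minim;
AudioInput mic;
GSMovie movie;

void setup() {
  size(640, 480);
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO, 512);
  movie = new GSMovie(this, "tide.mov");  // hypothetical timelapse file
  movie.loop();
}

void movieEvent(GSMovie m) {
  m.read();
}

void draw() {
  // map the mic's RMS level to playback rate: a quiet room plays
  // the day slowly, loud ambient noise speeds it up
  float level = mic.mix.level();
  movie.speed(map(level, 0.0, 0.5, 0.25, 4.0));
  image(movie, 0, 0, width, height);
}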

After playing about with this idea I realised that I should include the possibility of letting the user see fragments within the frame: details of the scene unfolding before them. I then produced a timelapse of the tide coming in on the Isle of Skye. I shot this deliberately through a window, both to show a viewpoint onto the world and to hark back to the theory that technology should be transparent: design should be a window, an interface that you do not see. I then captured timelapse details of this scene, to be viewed as fragments of the whole broken down into details, and developed several more drop boxes that allow detailed footage of rocks and waves to be screened alongside the main timelapse footage.

It was then that I also pulled in my previous patches working with RGB and luma to control the playback of the movie, giving an interface that controls the playback of this fragmented day and lets the fragmentation increase or decrease according to the user’s wishes. This gives the user an abstract sort of control, where their touch and sounds drive one or all of the screens. I’ve included several toggles so the user can turn functions on and off, letting them find an aesthetic they like or simply fragment individual screens further.
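As a rough sketch of the luma control under the same assumptions as above (GSMovie mirroring the built-in Movie API, hypothetical filenames), the average brightness of the main timelapse frame can scrub a second, detail screen through its own footage:

import codeanddesign.gsvideo.*;

GSMovie whole;   // the full-window timelapse
GSMovie detail;  // close-up fragment (rocks, waves)

void setup() {
  size(1280, 480);
  whole = new GSMovie(this, "tide.mov");
  detail = new GSMovie(this, "rocks.mov");
  whole.loop();
  detail.loop();
}

void movieEvent(GSMovie m) {
  m.read();
}

void draw() {
  whole.loadPixels();
  if (whole.pixels == null || whole.pixels.length == 0) return;

  // average luma of the current frame, sampling every 100th pixel
  float luma = 0;
  int count = 0;
  for (int i = 0; i < whole.pixels.length; i += 100) {
    color c = whole.pixels[i];
    luma += 0.299 * red(c) + 0.587 * green(c) + 0.114 * blue(c);
    count++;
  }
  luma /= count * 255.0;  // normalise to 0..1

  // brighter frames push the detail screen further through its footage;
  // seeking every frame is crude, but it shows the mapping
  detail.jump(luma * detail.duration());
  image(whole, 0, 0, 640, 480);
  image(detail, 640, 0, 640, 480);
}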


I would say that I’m pretty happy with the results of my sketch, with the user able to play with the options and/or their ambient input. I think the overall aesthetic of the piece could be evolved to communicate my idea better, but for the most part I am happy with achieving my goal of fragmenting time and view. Playing about with other windows or pop-ups might be beneficial, maybe allowing face position, mouse position or ambient mic noise to bring up details of the scene or drive further screens. It would be worthwhile chatting to Adrian about this, as he successfully implemented facial recognition to change the viewpoint when the viewer moved. My idea would be to produce more windows/frames as the viewer moves or increases their contribution. A possibility for evolution.