
Project Proposal

 

 

 

‘Magic and Merger’

 

 

 

 

 

Project Specification

James Davoll

 

 

 

 

 

 

 

Contents

1. Project
   1.1 The main aims of the project
   1.2 Project Description
2. System
3. Software
4. Hardware
   4.1 System Specifications
   4.2 Install Diagrams
5. Face Recognition
   5.1 Illustration
   5.2 Viola-Jones Method
6. User Experience
7. Theory
8. Placement within Current Academic & Artistic Research
   8.1 Literature
   8.2 Online References
   8.3 Software
9. Code Examples
10. Schedule
Appendices

 

 

 

 

1. Project

1.1 The main aims of the project

  • Create a striking piece of digital art that explores the spectator’s interaction with a responsive system.
  • Invite the audience to self-reflect upon their effect on the digital world and vice versa.
  • Animate a highly visible space.
  • Generate greater awareness of experimental digital art amongst the general public by involving them in its creation and exhibiting it in a public space.
  • Encourage an ‘open discourse’ of personal experiences with the work and produce an engaging piece.

 

1.2 Project Description

‘Magic and Merger’ will create an interactive, postcard-like image. Traditionally a postcard shows a view that the individual can claim ownership of: one can purchase the image, gain the power to select and invite others, the recipients of the postcard, to view it, and give the image further meaning through what one chooses to write on it. The postcard comes to represent the experiences of the individual, and the individual is in full control of the image and its elements.

 

‘Magic and Merger’ will subvert this relationship between image and individual by allowing the image to ‘see’ the viewer and respond to their presence. The closer the viewer looks, the more the image will ‘look back’. This will create an open discourse, asking: who is the viewer, and who is being viewed?

 

From afar the image will take the form of a familiar seaside scene, as seen on many a postcard. This scene will be subtly animated, drawing the viewer’s attention to movement. As the viewer approaches, the image will zoom and transition between layers, introducing characters that can look at the viewer and track them around the space. If the viewer walks back again, the image will return to its original form. If the viewer approaches another area of the image, more will be uncovered, unveiling other characters that react to the viewer’s position. This will happen in three stages depending on the viewer’s distance from the screen; each level closer to the screen reveals more. As the viewer tests and plays with this idea by moving around, the many details of the image will be revealed. This movement within the image will produce a self-reflective experience in the viewer as they become aware that it is their own body that is causing the shift within the image.

 

2. System

 

 

 

 

 

‘Magic and Merger’ will be created by producing code that allows a web camera or Kinect to search for a viewer using face recognition. If a viewer is found, the system will determine the location of the viewer and load the relevant image or animation that has been predetermined for that position. The system will continue to check for viewer movement and repeat this process until the viewer leaves the space, completing the interaction. The sketch will also run an audio file in the background so that the viewer may be further absorbed into the scene.
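A minimal sketch of this detection loop, based on the documented usage of the ubaa.net OpenCV library for Processing, is shown below; the scene-selection call is a hypothetical placeholder for the sketch’s own logic:

import hypermedia.video.*;   // ubaa.net OpenCV library for Processing
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(640, 480);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                   // open the webcam
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // load the frontal-face cascade
}

void draw() {
  opencv.read();                        // grab the current frame
  Rectangle[] faces = opencv.detect();  // search the frame for viewers' faces

  if (faces.length > 0) {
    Rectangle face = faces[0];
    // The x position of the face centre and the face width (a stand-in for
    // distance, see 5.1) determine which predetermined image or animation
    // to load. loadSceneFor() is a hypothetical placeholder for that logic.
    // loadSceneFor(face.x + face.width / 2.0, face.width);
  }
  // No face found: the viewer has left the space, so the sketch returns
  // to the resting seaside scene and the interaction is complete.
}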

 

3. Software

The project code will be developed using Processing, with the OpenCV, OpenKinect and GSVideo libraries integrated. Processing is an open-source programming language and environment that allows the creation of animations and interactions. The OpenCV library will allow a much more advanced facial recognition system to be integrated into the code without having to complete complex calculations, allowing more efficient plotting of the viewer’s location. After testing Processing’s own facial recognition library, it became clear that it was not advanced enough: it became confused when more than one viewer was present and when the viewer left the field of view. The OpenCV library handles these tasks much more accurately, with the ability to personalise its sensitivity and functionality. The GSVideo library is an alternative video library for Processing that has proven to be more stable and able to handle a higher bitrate. The OpenKinect library allows Processing to integrate the Microsoft Kinect motion-sensing device into the sketch.

 

Processing allows the sketch code to be exported as an application, allowing easy boot-up when the computer is turned on. The system can be completely automated to ensure that the project is up and running at the desired times and powered down when the installation is closed.

 

4. Hardware

4.1 System Specifications

  • 2.4 GHz Intel Core Duo processor
  • 1 GB of system memory (RAM), dedicated through Processing preferences
  • USB 2.0 for integration with the webcam, plus optional mouse or keyboard for debugging (these could also be Bluetooth)
  • Mini DisplayPort to VGA/HDMI adaptor and cable
  • Logitech webcam (chosen for dependability and easy integration, as established in previous projects) or Kinect
  • Projector (with rear-projection and keystone-correction settings)
  • Screen
  • Mirror for rear projection if space is limited
  • 2x speakers outputting 70-95 dB

 

 

4.2 Install Diagrams

 

 

 

The decision to place the hardware for ‘Magic and Merger’ behind the screen has implications for the installation of the work. Hiding the equipment will give the viewer an unobstructed view of, and interaction with, the piece. Rear projection, however, has inherent problems, and several qualities of the space must be understood. If there is enough space behind the screen, a simple rear projection can be achieved by setting the projector to rear-projection mode to flip the image on the horizontal. If there is not enough space, a mirror must be incorporated into the install, doubling the throw of the projector. This technique will need the horizontal, and possibly the vertical, flipped, depending on projector location.
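If a projector lacks a rear-projection mode, the same horizontal flip can be performed in software. A minimal Processing sketch of this (the image filename is a placeholder):

PImage scene;

void setup() {
  size(1024, 768);
  scene = loadImage("scene.jpg");  // placeholder filename
}

void draw() {
  pushMatrix();
  translate(width, 0);  // move the origin to the right-hand edge
  scale(-1, 1);         // mirror on the horizontal for rear projection
  image(scene, 0, 0, width, height);
  popMatrix();
}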

5. Face Recognition

5.1 Illustration

The stages of the system’s response to a viewer:

  1. The viewer is encountered by the system.
  2. OpenCV facial recognition identifies the viewer’s face.
  3. Distance (z co-ordinate) and position (x co-ordinate) are calculated to display the relevant image.
  4. As the viewer approaches, the size of the face-detection area is mapped to calculate the distance of the viewer from the screen (z co-ordinate), unveiling the second level.
  5. When the viewer moves parallel to the screen, the x co-ordinate is updated to play the relevant animation.
  6. As the viewer approaches further, additional layers are exposed, unveiling more detail from the image.
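A minimal sketch of how the detected face rectangle might be mapped to the z and x co-ordinates described above; the pixel thresholds are illustrative assumptions that would be calibrated during testing:

// Map the detected face width (pixels) to one of the three distance levels.
// Threshold values are assumptions to be calibrated on site.
int levelFor(int faceWidth) {
  if (faceWidth < 60)  return 1;  // far: resting seaside scene
  if (faceWidth < 120) return 2;  // mid: second layer revealed
  return 3;                       // near: full detail unveiled
}

// The x co-ordinate of the face centre, normalised 0..1 across the camera
// frame, selects which character or animation frame to display.
float normalisedX(int faceX, int faceWidth, int frameWidth) {
  return constrain((faceX + faceWidth / 2.0) / frameWidth, 0, 1);
}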

 

 

5.2 Viola-Jones Method

OpenCV’s face detector uses a method that Paul Viola and Michael Jones published in 2001. Usually called simply the Viola-Jones method, or even just Viola-Jones, this approach to detecting objects in images combines four key concepts:

  • Simple rectangular features, called Haar features
  • An Integral Image for rapid feature detection
  • The AdaBoost machine-learning method
  • A cascaded classifier to combine many features efficiently

More information on this can be found at http://www.cognotics.com/opencv/servo_2007_series/part_2/sidebar.html.
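As a brief illustration of the second concept, the sketch below builds an integral image and uses it to sum any rectangle in constant time, which is what makes evaluating thousands of Haar features per detection window affordable. This is a generic illustration, not OpenCV’s internal code:

// ii[y][x] holds the sum of all pixels above and to the left of (x, y).
int[][] integralImage(int[][] px, int w, int h) {
  int[][] ii = new int[h + 1][w + 1];
  for (int y = 1; y <= h; y++) {
    for (int x = 1; x <= w; x++) {
      ii[y][x] = px[y - 1][x - 1] + ii[y - 1][x] + ii[y][x - 1] - ii[y - 1][x - 1];
    }
  }
  return ii;
}

// Sum of the w-by-h rectangle whose top-left corner is (x, y), in O(1):
// four lookups regardless of the rectangle's size.
int rectSum(int[][] ii, int x, int y, int w, int h) {
  return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x];
}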

 

 

6. User Experience

 

 

 

As the viewer moves across the space, the video playhead or still images will be mapped to their position, allowing the DFI to follow the viewer around the screen. This will be done by the system requesting the images either side of the mapped x co-ordinate, allowing smooth transitions and no interruption to the program.
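A minimal sketch of this preloading strategy, using Processing’s asynchronous requestImage(); the frame count and filenames are illustrative assumptions:

PImage[] frames = new PImage[60];  // stills covering the sweep of x positions

void preloadNeighbours(float nx) {
  int current = round(nx * (frames.length - 1));  // nx: normalised x, 0..1
  // Request the frames either side of the mapped x co-ordinate so they are
  // already in memory by the time the viewer reaches them. requestImage()
  // loads in a background thread, so draw() is never interrupted.
  for (int i = max(current - 1, 0); i <= min(current + 1, frames.length - 1); i++) {
    if (frames[i] == null) {
      frames[i] = requestImage("frame" + i + ".jpg");  // placeholder filenames
    }
  }
}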

 

 

 

 

 

My piece intentionally has no fixed meaning but suggests a narrative; ultimately it is the viewer who will bring their own experiences to the piece.

 

7. Theory

The title of this piece comes from an excerpt from Kevin Robbins’ work, Into the Image: culture and politics in the field of vision. When discussing technological forms, Robbins chooses the words ‘magic’ and ‘merger’ to describe the process and product of simulation. Robbins observes that the illusion, or ‘magic’, of virtual reality is the perfect tool to ‘merge’ the viewer with the computer. ‘Magic and Merger’ will investigate the nature of the viewer’s investment in digital forms.

 

‘Magic and Merger’ has developed through previous practice carried out on the ‘little Brother’ and ‘Emblematic’ projects, which are documented on the blog www.dm.ncl.ac.uk/jamesdavoll, and it has evolved alongside them. Theoretical research focused on new digital artwork and media has grounded the themes of this project. For example, Manovich articulates that digital art “is not something fixed once and for all, but something that can exist in different, potentially infinite versions” (2001:36).

 

Digital artworks have been labelled as interactive, implying that the spectator has the power to be an active participant in the unfolding of an artwork’s flow of events. ‘Magic and Merger’ will utilise this by allowing the spectator to “influence or modify” the artwork’s form (Lovejoy, 2004:167). Their bodily interaction with the piece will act as a perceiving device, a tool for them to create and understand the work. The viewer’s body is no longer a static “shell” for the eyes to peep out of but is alive and animate with interaction (Ponty, 2002). The interaction will be playful and encourage the spectator to experiment with movement. ‘Magic and Merger’ will have a finite number of mathematical variations and details to be revealed; however, as Manovich (2001) explains of this type of interaction, known as branching interaction, although the spectator can only affect a predetermined set of actions, their experience will be unique and different from anyone else’s.

 

‘Magic and Merger’ hopes to allow the viewer to recognise that their movements affect the artwork and that they are the tool, the “embodied human” (Hansen, 2004:129), by which the digital domain and reality communicate. The spectator experiences a sense of qualia, the sensation of experience and subjective conscious decision-making. “Qualia is the sensations you are conscious of” (Ramachandran interview, TLC); the spectator is aware of their actions and environment and can reflect on these. Critical reflection, or qualia, is integral to the experience of an immersive artwork, as the “act of seeing modifies what is seen” (Fraser, 2011). Colson explains this effect of self-reflection differently, stating that the viewer has a natural suspicion of “environments that seem to be ‘alive’, and aware of the visitors presence”. Colson goes on to state that “it is unnerving to feel that free and spontaneous actions are being recorded and that they have become active agents within an unseen world of hidden control systems” (64).

 

It is this personal connection, fears, concerns, ….. to a piece of art, particularly a photograph, that ‘Magic and Merger’ endeavours to explore. Throughout the history of photography there has been discussion around the quality of truth and the reliability of a man-made image. A man-made image will always show a constructed reality. However, when encountering a traditional chemical photograph, one can be reassured that it is constructed as a direct result of an encounter with the real world. This assurance is not as firm in the digital age. With the evolution of digital photography, fakery, Photoshop manipulation and airbrushing are widespread, and what is real or true to life in an image is unclear. The photographic process has now shifted from chemical to digital: a single photograph is broken down into binary numbers, the real is represented by these numbers, and each number can be altered. With this in mind, if the real can be represented with numbers, it follows that this process can be reversed and that the numbers can be used to create a reality of their own. ‘Magic and Merger’ hopes to raise this suggestion and question its merit. Lister puts forward that images created by digital technology represent a ‘derealised’ world. ‘Magic and Merger’ will explore the role of the viewer in this relationship.

 

Henning (1995:219) observed that chemical photography “held a mirror to reality”. ‘Magic and Merger’ will attempt to hold a mirror up to its own derealised world in order to highlight the power struggle between viewer and image. This ties in with Rokeby’s (1995:133) theory that “interactive technology is a medium through which we communicate with ourselves – a mirror”.

 

In turn, the piece will address the polar and apocalyptic claims surrounding the future of digital technology. Does this uncertain field of ‘Magic and Merger’ hold a future of control or freedom?

 

 

8. Placement within Current Academic & Artistic Research

 

8.1 Literature

Barthes R. (1993) Image Music Text. Fontana Press

 

Baxandall M. (1991) Exhibiting Intention. Smithsonian Institution Press

 

Bentkowska-Kafel. (2005) Digital Art History. Intellect Books

 

Berger J X. (1990) Other Than Itself. Aperture

 

Berger J. (1972) Ways of Seeing. Penguin Books Ltd

 

Bolter J D, Gromala D. (2003) Windows and Mirrors. MIT Press

Bolter J D, Grusin R. (2000) Remediation. MIT Press

 

Colson R. (2007) The Fundamentals of Digital Art. AVA Publishing

 

Grau O. (2003) Virtual Art From Illusion to Immersion. MIT Press

 

Hansen M. (2004) New Philosophy for New Media. MIT Press

 

Heidegger M. (1978) Being and Time. Wiley-Blackwell

 

Lévy P. (1999) Collective Intelligence: Mankind’s Emerging World in Cyberspace. Basic Books

 

Lister M. (2005) The Photographic Image in Digital Culture. Routledge

 

Lovejoy M. (2004) Digital Currents: Art in the Electronic Age. Routledge

 

Manovich L. (2001) The Language of New Media. MIT Press

 

Ponty M. (2002) Phenomenology of Perception. Routledge Classics

 

Robbins, K. (1996) Into the Image: culture and politics in the field of vision. Routledge

 

Rokeby D. (1995) Transforming Mirrors: Subjectivity and Control in Interactive Media. In Critical Issues in Electronic Media. State University of New York Press

 

Rugg J. (2007) Issues in Curating Contemporary Art and Performance. Intellect Books

 

 

 

 

8.2 Online References

Bartlem E. (2005) Reshaping Spectatorship: Immersive and Distributed Aesthetics

Available at http://seven.fibreculturejournal.org/fcj-045-reshaping-spectatorship-immersive-and-distributed-aesthetics/ (Accessed 10/05/2011)

 

Buchenau M, Suri J F. (2000) Experience Prototyping.

Available at http://portal.acm.org/citation.cfm?id=347642.347802  (Accessed 10/11/2010)

 

Kovlov V. (2011) Brave New Digital World.

Available at http://www.themoscownews.com/arts/20110310/188483181.html (Accessed 02/06/2011)

 

Kwastek K. Interactivity – A Word in Process.

Available at http://theclockspot.com/u/for02-awordinprocess.pdf  (Accessed 08/11/2010)

 

8.3 Software

Shiffman D. (2008) Learning Processing. Elsevier Inc.

Fry B. (2008) Visualizing Data. O’Reilly Media

Maeda J. (2004) Creative Code. Thames and Hudson

Greenberg I. (2007) Processing: Creative Coding and Computational Art. Friends of ED

OpenProcessing forum, processing.org

http://ubaa.net/shared/processing/opencv/

 

9. Code Examples

Developing code examples can be found in the online blog documentation at www.dm.ncl.ac.uk/jamesdavoll. Many aspects of the code have been developed through smaller projects such as little Brother (documented on the blog). By integrating the developing code into smaller projects, I am able to complete user studies as well as test the stability of the program.

 

 

 

 

 

 

10. Schedule

 

Week Ending / Development

 

10/02/12
  • Begin Code combining previous little Brother Code
  • Sketch out code functions

 

17/02/12
  • Sketch out photographic scene ideas/ movements etc.
  • Story board potential interactions

 

24/02/12
  • Start to pull together Hardware
  • Test webcam options

 

02/03/12
  • Test photographs of scene

 

09/03/12
  • Develop first level of code

 

16/03/12
  • Combine OpenCV face detection into Code

 

23/03/12
  • Test OpenCV recognition distances
  • Photograph scene development

 

30/03/12
  • Work on photographic images
  • Sketch out potential scenarios
  • Develop cinemagraphs/ images

 

06/04/12
  • User Tests on cinemagraphs/ images

 

13/04/12
  • Photograph more scenes
  • Testing out different movements

 

20/04/12
  • Develop first level of code

 

27/04/12
  • Configure face recognition distance and size
  • Work on images

 

04/05/12
  • Finalise distancing with face recognition

 

11/05/12
  • Begin work on x co-ordinate location

 

18/05/12
  • Run the integrated little Brother code within sketch
  • Link up x co-ordinate

 

25/05/12
  • Finalise first level of code with images

 

01/06/12
  • Complete first level of code with images

 

08/06/12
  • Begin to configure z co-ordinate to unveil next layer
  • Produce animation between layers

 

15/06/12
  • Work on second level images
  • Begin to load into sketch

 

22/06/12
  • Begin work on x co-ordinate to animate images/ cinemagraphs

 

29/06/12
  • Finalise second level of code with images

 

06/07/12
  • Complete second level of code

13/07/12
  • Work on third level images
  • Begin to load into sketch

20/07/12
  • Finalise third level of code with images

 

27/07/12
  • Complete third level of code

 

03/08/12
  • Check combination of all images within sketch
  • Check referencing of images to distancing and x co-ordinates

 

10/08/12
  • Test system and check for any bugs etc.

 

17/08/12
  • Integrate projector into system

 

24/08/12
  • Projection Testing

 

31/08/12
  • Projection Testing
  • Configure optimum distance for viewer positions
  • Finalise Code

 

07/09/12
  • Final system completed
  • Final testing of sketch to assess stability over prolonged periods
  • User experience tests for presentation of installation

 

14/09/12
  • Exhibition duration, including setup and take down
  • 18th September setup
  • Exhibition taking place 20th to 23rd September

 

21/09/12
  • Presentation 21st September
  • Collate visitor feedback
  • Conclusions and development of project

 

 

 

Appendices

Documentation of experiments, thinking and pseudo-code can be found on the aforementioned online blog, www.dm.ncl.ac.uk/jamesdavoll.

User Tests 2

https://vimeo.com/42199381


User 1

As User 1 interacted with the system she voiced her discomfort at having a large face follow her around the space: “it has an unsettling effect”. User 1 was very cautious when moving around and took her time, slowly edging closer to the screen. In her interview afterwards, User 1 stated that she had lost her comfort zone, as she expected the image to be still due to its aesthetic. The movement of the image to her physical location made her feel awkward, and she said she felt “unprepared as an unadventurous type of person”. She voiced that others, however, may feel more comfortable. User 1 also said that it seemed she had passed an invisible barrier. With regard to the set-up (these trials took place in a messy studio), she said that the brightness of the room and the general mess were off-putting.

Overall she said that she would have felt a lot more comfortable watching someone else interact with the system rather than doing so herself, and that the piece was deceiving in that it looked “innocent” but was not when approached.

User 2

User 2 was a more direct character, walking straight to the centre of the set-up. From there he played around with having the face follow his movements and was less awkward with the interaction. When asked about his experience he mentioned the voyeuristic nature of both being watched and watching, commenting that it was almost a stalemate. He mentioned that the trig may be out a little, as direct eye contact was lost on a few occasions when he moved to different distances around the space. (This will not be as large a concern in the final piece, as this last video will only be displayed within a certain distance from the screen, allowing the playhead to be adapted to maintain the needed eye contact.)

User 3

Another user felt uncertain about the interaction but, due to their background, was more interested in the workings of the system than in the experience, questioning how the system worked rather than reflecting on the encounter itself. (This may mean that I will now open up the tests to a larger demographic to allow for a more diverse range of responses.) User 3 did note the connection to George Orwell, however, and after our conversation was interested in seeing a more developed prototype, as they thought it would be more engaging once the levels and transitions between videos were achieved.