by adrianpark on January 14, 2011
This is me finally getting around to properly documenting the Doing mini exhibition project. I’ve previously produced some documentation as part of a Thinking assignment, but this post is dedicated entirely to the thing itself. The main reason I’ve stalled for this long is that I wanted to take some decent images of the piece. The design and aesthetics of the object itself were really important for what I set out to achieve so, whilst I could have taken a few snaps with my iPhone, I didn’t feel that these would adequately capture the little details I put so much energy into. Today I finally found the time to produce some decent images. So, with the excuses out of the way, here is the documentation.
Firstly, a brief overview of what it is. At its simplest, the piece uses the video feed from a webcam as the source of image data. The program on the computer takes this data, processes it and outputs the data as an image, with the option of recording a video of the process. There are two buttons on the camera: one to select different modes and another to start and stop the processing of image data and to clear the screen. The modes are different algorithms the program uses to process the data. The output ends up looking like a painting and the processing of the data itself looks similar to the painting process (this was not intentional on a conscious level but, with my love of painting, perhaps it was on a subconscious level?). An important aspect of this is that the image data being processed is updated with each new frame of video. That is to say, the program will ‘paint’ whatever it ‘sees’ in front of the camera at any one moment. This means that if something moves, it will paint the movement, resulting in some interesting effects when the camera or subject is moved as the program is working. There is also a tilt sensor in the box to detect when the camera is on its side so that the screen image is rotated accordingly. Being a very simple tilt sensor it doesn’t know which direction the camera is tilted, so it assumes it’s clockwise! Finally, there are a couple of green LEDs – their function is explained below. The buttons, tilt sensor and LEDs are all controlled by an Arduino Nano which communicates with the software on the computer. Next, I’ll deal with why I built the camera.
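For anyone curious what the rotation step amounts to, here’s the gist in plain Java (the actual sketch is written in Processing – the row-major array layout and function name here are just for illustration):

```java
public class Rotate {
    // Rotate a row-major pixel grid 90 degrees clockwise, as the sketch
    // does when the tilt sensor reports that the camera is on its side.
    static int[][] rotateCW(int[][] src) {
        int h = src.length, w = src[0].length;
        int[][] out = new int[w][h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[x][h - 1 - y] = src[y][x];
        return out;
    }

    public static void main(String[] args) {
        int[][] img = { {1, 2}, {3, 4} }; // tiny 2x2 test grid
        int[][] r = rotateCW(img);
        // clockwise rotation: the top row becomes the right-hand column
        System.out.println(r[0][0] + " " + r[0][1]); // 3 1
        System.out.println(r[1][0] + " " + r[1][1]); // 4 2
    }
}
```

Because the tilt sensor only reports “tilted” or “not tilted”, this one clockwise transform is all the software ever applies.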
I produced two printed leaflets to accompany the piece which were handed out at the presentation. These do a fairly good job of outlining the thought process that resulted in the physical device I built. As the ‘Design Rationale’ leaflet explains, the design of the camera borrowed heavily from Kodak’s Box Brownie. In the project’s original conception, I was going to buy an actual Box Brownie, gut it and build my components into it. Thankfully, good sense got the better of me and I decided to embrace Maker culture rather than destroy such a beautiful piece of history – more about the build below. I figured it would be nice if the printed sheets complemented this aesthetic so I sought out some old Kodak camera ads and based the design and layout of the leaflets on these. The ‘Inspiration’ leaflet covers the thought process and questions that led me to produce this piece. Both leaflets can be downloaded using the links below. After the links is the ad I based their design on.
So, onto the camera itself. As mentioned above, I was originally going to hack an actual Box Brownie but decided to build something myself. I found the following images which I used as inspiration.
My initial thought for this was to build the basic box with wood or thick cardboard and then cover it with leather or faux leather. I visited the local Pound Stretcher in the hope of finding a faux leather product I could pillage the covering from. I found a range of simple storage boxes covered in black faux leather. Bingo! I bought two of the smallest boxes with the intention of using the faux leather from one to construct the handle and latch for the other box. With this done, I visited the local B&Q and stumbled across a stack of wood offcuts for 75p a piece. Bingo again! I bought one piece which was enough to construct a wooden frame I could slide in and out of the box and which would serve as the framework for the electronic components. I then set about cutting and finishing all the holes for the electrical components. By the time this was all done I was getting really excited about the project. I was really pleased with the physical object and it felt good to hold and look at, so I figured I was on a good trajectory. Next was the software to drive it. At this point I hadn’t even done any tests to see if I could do what I wanted to do, which was a bit of a risk, but I figured that would make the challenge interesting! See the gallery at the end of the post for images of the completed box.
The idea for the software came about as a result of my questions around the nature of the digital photograph. As I thought about pixels, I wondered why they are usually square and what would happen if they weren’t. I realised that pixels have an implicit size dictated by the size of the grid they form a part of. I wondered what would happen if that size was ignored and they were drawn at a greater size than the grid dictated. At a basic level, the software does just this – it takes a pixel and draws a shape of the same colour at a size larger than the initial pixel. The first version of the software drew every single pixel in the grid in this manner and I quickly realised that all I’d achieved was to create a blurred version of the original image. The next key modifications were to draw only a limited number of the original pixels (randomly chosen) and to layer several variations of these pixels over one another with varying changes in size, shape and transparency. This resulted in a painterly effect I really liked. Everything after this was essentially playing with the ‘painting’ algorithm to achieve different effects. The only other key part of the system is some basic image manipulation I do to the raw data – warming or cooling filters, an increase in contrast, diffuse glow, etc.
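The core of one ‘painting’ pass can be sketched like this – a simplified Java version of the idea (the real thing is a Processing sketch; the sample count, brush size and square brush shape here are invented for illustration, and the layering of varying shapes and transparencies is left out):

```java
import java.util.Random;

public class Painter {
    static final Random rnd = new Random();

    // One "brush stroke" pass: sample a handful of random source pixels
    // and stamp each onto the canvas as an oversized square of the same
    // colour. Run every frame, these strokes layer up into a painterly image.
    static void paintPass(int[][] src, int[][] canvas, int samples, int brushSize) {
        int h = src.length, w = src[0].length;
        for (int i = 0; i < samples; i++) {
            int px = rnd.nextInt(w), py = rnd.nextInt(h);
            int colour = src[py][px];
            // stamp a brushSize x brushSize square centred on the chosen
            // pixel, clipped to the canvas bounds
            for (int dy = -brushSize / 2; dy <= brushSize / 2; dy++)
                for (int dx = -brushSize / 2; dx <= brushSize / 2; dx++) {
                    int x = px + dx, y = py + dy;
                    if (x >= 0 && x < w && y >= 0 && y < h)
                        canvas[y][x] = colour;
                }
        }
    }

    public static void main(String[] args) {
        int[][] src = new int[8][8];
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
                src[y][x] = 0xFF0000; // a flat red "camera frame"
        int[][] canvas = new int[8][8];
        paintPass(src, canvas, 5, 3);
        // after one pass, a scattering of canvas pixels carry the source colour
    }
}
```

Because the source grid is re-read from the live webcam feed on every pass, movement in front of the camera gets smeared into the strokes – which is where the interesting effects mentioned above come from.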
The software was built using Processing, mainly because it forms an important part of our course content and is a natural partner to Arduino. With extensive experience of ActionScript, I found I was able to hit the ground running and I really loved the ease with which I was able to do some quite complex data processing.
When the piece was complete, I started having a lot of fun ‘painting’ all sorts of different subjects in different lighting and seeing what the camera was capable of. The fact that I was having so much fun with it reassured me that I’d produced something interesting to present in the mini exhibition. I had a little time before the presentation though so I added a few finishing touches. The first was the tilt sensor which, as explained above, was limited in functionality but which I included mostly to demonstrate the principle and a grasp of the hardware and software we’d been looking at in lectures. The final touch was to add a couple of green LEDs inside the box. From the outset I’d wanted to create an object of intrigue and wonder. The green glow was subtly visible through the holes for the components and the thin gap between the box and its lid and I programmed the light to gently pulse while the camera was in use. I was really pleased that this subtle touch was noted and mentioned in the presentation!
Finally, I have a lot of ideas I’d like to try out with this piece in time. I’d love to rebuild it with metal levers and dials as seen on the original Box Brownies above rather than modern buttons. I’d also like to build in a few retro dials that can be used to modify some of the processing parameters without having to modify and recompile the software. I think there’s a lot of scope for playing with and refining the processing of the image data – for example, I’d like to get it to do a better job of rendering detail. I’ve discovered that the webcam works best in dim lighting and doesn’t work at all well in full daylight – I suspect it’s designed this way since webcams are intended for use indoors. I’d like to build a better image capture system that I could use outdoors and would like the ability to utilise a variety of focal lengths. Ultimately, I’d love to build the whole kit into a single device – which was the idea I originally set out to demonstrate. Which leads me on to perhaps the obvious next step – an iPhone and/or iPad app (or [insert favourite mobile OS here]). Watch this space.
Below is a gallery with various images of the camera and some examples of images it produces. After that is a video of the ‘painting’ process.