
The World's Largest Photobooth


Recently LAB consulted on the production of The World's Largest Photobooth, which has just opened at Nottingham Contemporary. It's a free-to-have-a-go interactive installation promoting the current Diane Arbus exhibition. It looks like people are having a lot of fun with it, so if you're in Nottingham, get down there and have a go!

When you sit on the stool, a webcam behind the two-way mirror detects your face and tells a Canon DSLR (also behind the mirror) to take four shots of you. These photos are then processed and compiled into a Polaroid-style set of four images, projected onto the 30 ft wall behind the booth (as shown in the image above), and uploaded to Flickr. The photos are given a title and description, and added to a photoset in Nottingham Contemporary's photostream.
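The booth's own code isn't included here, but the capture-and-upload sequence it describes can be sketched roughly. This is a hypothetical Python outline (the real software was built in openFrameworks), with the face detection, camera, compositing and Flickr upload all stubbed out as injected functions:

```python
# Hypothetical sketch of one sitting in the booth. All four callables are
# stand-ins for the real face detection, DSLR capture, image compositing
# and Flickr upload steps described in the post.

SHOTS_PER_SITTING = 4  # the booth takes four photos per sitter

def run_booth(face_detected, capture, composite, upload):
    """Run one sitting: wait for a face, take four shots, compile the
    Polaroid-style strip, then hand it off for projection/upload."""
    if not face_detected():
        return None
    shots = [capture() for _ in range(SHOTS_PER_SITTING)]
    strip = composite(shots)
    upload(strip,
           title="Photobooth sitting",          # placeholder metadata
           description="Taken at Nottingham Contemporary")
    return strip
```

The real pipeline obviously does far more (image processing, projection, photoset management), but the control flow is essentially this.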

For the most part the booth's custom software is open source. All the face detection, camera control and image processing was done by Brendan Oliver (who's got a more detailed write-up on his blog) using openFrameworks, along with some Flash, PHP and the Canon SDK. A few of us from LAB were involved in implementing the booth's Arduino-controlled flash and an automatic Flickr uploader written in Processing.
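As a rough illustration of what an automatic uploader like that might look like: the version below is a Python sketch of the watch-folder logic only (the real one was written in Processing, and the `upload` callable here is a stand-in, not the actual Flickr API):

```python
import os
import time

def watch_and_upload(folder, upload, poll_seconds=2.0, max_polls=None):
    """Poll `folder` for new image files and pass each new one to
    `upload` exactly once. `upload` stands in for the real Flickr call.
    `max_polls` exists so the loop can be bounded for testing."""
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith((".jpg", ".png")) and name not in seen:
                upload(os.path.join(folder, name))
                seen.add(name)
        polls += 1
        if max_polls is not None and polls >= max_polls:
            break
        time.sleep(poll_seconds)  # wait before checking the folder again
    return seen
```

A real uploader would also need Flickr authentication and the title/photoset metadata mentioned above; this just shows the new-file detection loop.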


Generating Contextual Narratives

Generating Contextual Narratives: Test_01 from Mark Selby on Vimeo.

Generating Contextual Narratives is a project, made in collaboration with Mike Golembewski, about exploring ways of generating more experientially and contextually appropriate narratives. The broad concept here is that current technological trajectories suggest a future where all data is captured indiscriminately and profusely, and so it will become harder and harder to engage with records of experiences in personally meaningful ways. Rather than ‘total capture’, the recording of everyday experiences might be tied more closely into the enactment of those experiences through the objects that we use to do so. The resulting data (photos, texts, sounds etc.) are contextually specific to the events that they depict, allowing more meaningful narratives of those events to be constructed and, consequently, enabling more meaningful encounters with memories of experience in the future.

Bicyclopse (working title) is the first (rough) prototype in a series of devices that investigate how we might use technologies to achieve this. It's a camera made from an Arduino-controlled iPhone, running a custom application, mounted on the front of a bike. The iPhone's camera is triggered by a tone sent from the Arduino every time a reed switch attached to the bike's fork is closed by a magnet on the front wheel. This means that one photograph is taken for every revolution of the front wheel.
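For illustration, the once-per-revolution trigger logic might look something like this. It's a Python simulation of the debounce step rather than the actual Arduino code, and the 50 ms window is an assumption (reed switches chatter briefly as the magnet passes):

```python
# Simulated debounce for the reed-switch trigger: fire once per magnet
# pass and ignore the rapid contact bounce that follows each closure.

DEBOUNCE_MS = 50  # assumed bounce window; not a figure from the project

def triggers(closure_times_ms, debounce_ms=DEBOUNCE_MS):
    """Given the times (in ms) at which the reed switch reads 'closed',
    return the times at which the camera tone should actually be sent."""
    fires = []
    for t in closure_times_ms:
        # Only fire if we're outside the debounce window of the last fire.
        if not fires or t - fires[-1] >= debounce_ms:
            fires.append(t)
    return fires
```

On the real hardware this would sit in the Arduino loop, with each fire playing the tone that the iPhone app listens for.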

These still photographs are then compiled to make a film. Visual and temporal distortions of the video narrative are determined by the function of the bike: as the bike speeds up, the rate of capture increases and so the footage appears to slow down. Visual distortions occur when the bike turns a corner or is ridden over a rough patch of road. This is caused by the quick movement of the camera, and by the way the iPhone camera's sensor is read out line by line (rolling shutter; see Wikipedia for an explanation). In combination, these effects give a point of view specific to the bike and the way in which it is ridden.
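The slow-down effect falls out of simple arithmetic: one frame per revolution means the capture rate is riding speed divided by wheel circumference, while playback runs at a fixed frame rate. A small sketch of that relationship (the circumference and frame-rate values are illustrative, not figures from the project):

```python
def apparent_speed(speed_mps, wheel_circumference_m=2.1, playback_fps=25.0):
    """How fast the footage appears relative to real time.
    One frame is captured per wheel revolution, so the capture rate is
    speed / circumference; playback runs at a fixed playback_fps.
    A result > 1 means the footage looks sped up, < 1 means slow motion."""
    capture_fps = speed_mps / wheel_circumference_m  # frames per real second
    return playback_fps / capture_fps
```

The key point is that the result falls as riding speed rises: double the speed and each playback second covers half as much real time, so relative to slower stretches the footage appears to slow down, just as described above.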

Camera Explora at Territorial Play

camera explora_territorial play

Camera Explora recently appeared at Radiator's Territorial Play, the opening event of their Tracing Mobility programme. Unfortunately I didn't get much chance to publicize this fact in the run-up to the event, as I was too busy trying to work out what kind of string would give the most friction on a rubber pulley.
Embroidery cotton is quite good.

There were two main elements: activity and installation. The activity part involved people going out and exploring the city using the camera, which is now a repackaged Google G1 phone running a custom-made Android application. That part was programmed by Sam Meek, who's done a great job in spite of the somewhat … ‘limited’ hardware.

Those who took part seemed to respond well to the experience. A few said that they found it frustrating at first to be so constrained in what they could photograph, but eventually began to resist the urge to photograph the first thing they came across and to take the time to have a proper look around first.

camera case prototype

The second part, the installation, was an Arduino-controlled CNC plotter (hence the business with the string) that drew lines onto a paper map of the city between the locations where each photograph was taken, as the photographs were being taken. Each photo represents, in theory, something the photographer found interesting or noteworthy. Physically connecting these instances on a paper map ties them all together: it links them in memory and space, as well as providing a tangible, non-photographic mnemonic of those experiences.*
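The post doesn't say how the plotter maps GPS fixes onto the paper, but at city scale a simple linear interpolation within the map's bounding box would be accurate enough. This sketch shows the idea (the function name, bounds and sizes are my own illustrations, not the project's):

```python
def to_plotter_xy(lat, lon, map_bounds, plot_size_mm):
    """Map a GPS fix to pen coordinates on the paper map by linear
    interpolation within the map's bounding box (fine at city scale,
    where the Earth's curvature is negligible).
    map_bounds = (south, west, north, east) in degrees;
    plot_size_mm = (width, height) of the drawable area."""
    south, west, north, east = map_bounds
    width, height = plot_size_mm
    x = (lon - west) / (east - west) * width
    y = (north - lat) / (north - south) * height  # paper y grows downward
    return x, y
```

The plotter would then just drive the pen in a straight line from the previous photo's (x, y) to the new one as each photo comes in.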

The aesthetic of the plotter is quite rough. Although it's absolutely a work in progress, this was for the most part intentional: because it was an installation rather than a product design, I wanted it to look like the kind of eccentric, unrefined, but very personally engaging and valuable machine that someone might have built for themselves. The details were worked out by simply building as much of it as possible out of stuff I had lying around. Whether or not that was the best strategy is up for debate.

plotter closeup

The project is about exploring new places, so one concern going into the event was that, because most of the participants would be from Nottingham, the intended experience might be somewhat diluted. However, even those who were familiar with the city enjoyed actively seeking out things that they might not have seen or noticed before, which certainly seems to suggest more attentive exploration of the city. Some even asked, when they returned, to keep the photos they had taken as well as the route map that had been drawn. It's nice when things like this come out in testing.

Anyway, this isn't an especially in-depth write-up just yet; I think I'd need to run it again to do that. There were also a few minor technical issues that we couldn't iron out in the time available. So although things didn't run quite as smoothly as we would have liked, it helped us see exactly what was and wasn't right about the prototype, both technically and in terms of the design. There's nothing that can't be fixed.

Not bad for a first go. Fun too – it’s always good to see people using and enjoying something that you’ve made.

* This is not to say that tangible things are necessarily, or inherently, any more or less valuable than digital things. One of the aims of the project is to investigate ways of generating meaningful records of experiences, and the interplay between digital and physical things is just one way of looking at how to do that.

My Work Here is Done.

Photo from Dumbledad

The photo above shows Tim installing his version of the code to test the Photoboxes that we've been working on, before they are given out to our research volunteers.

Last Monday was my last day at MSRC, and my part of this project is done. The Photoboxes (I need to think of a better name) will be sent out soon under the watchful eye of Dave Kirk, in what will hopefully be a study that teaches us a few things about excavating digital archives.

Seeing a project through from conception to the stage where it will actually be used by people is still very exciting and immensely satisfying, even though (or possibly because) it's not a commercial product and there are only three of them.
I’ve worked on commercial and professional projects before, but they were never things that people would regularly engage with physically and emotionally. They were more things to be looked at and cherished as desirable objects. There’s value in that of course, but it’s great to be involved in a different way of doing things.
I can’t wait to see how the study turns out. It’ll be interesting to see how people respond to a piece of technology that operates so slowly, and whether or not the participants will experience the emotional reactions that we’re going for.
Who knows.

Anyway, thanks to Tim, Dave and Richard for giving me the opportunity to see the project through, and to everyone else at MSRC. I had a great time.
Hopefully I’ll be back some time!

Daydreams: Rehearsing the future.

Recently I read a fascinating article called The Secret Life of the Brain in New Scientist.
In brief, the article describes a ‘default state’ that our brain diverts to whenever we aren't actively using it to solve problems or perform tasks. Scientists identified areas of the brain that commenced intense activity once volunteer test subjects were in an apparent state of rest.

This amazing organ, which accounts for only 2 per cent of our body mass but devours 20 per cent of the calories we eat, fritters away much of that energy doing, as far as we can tell, absolutely nothing.

“There is a huge amount of activity in the [resting] brain that has been largely unaccounted for,” says Marcus Raichle… “The brain is a very expensive organ, but no one has asked deeply what this cost is all about.”

Firstly, it's fascinating that the brain can use that much energy, or conduct processes that require that much energy, without our even realising that it's happening. But maybe it's fairly obvious when you think about it: if every process our brain performed was intentional, it would take forever to get anything done.

Surely if these secret neural processes are that ‘expensive’, they must be quite important?

…through the hippocampus, the default network could tap into memories – the raw material of daydreams. The medial prefrontal cortex could then evaluate those memories from an introspective viewpoint. Raichle and Gusnard speculated that the default network might provide the brain with an “inner rehearsal” for considering future actions and choices.

I've read about this connection between memory and the future before in Future Recall: Your mind can slip through time (again in NS), an article which eventually led to Memorascope, a project about the effects that emotionally and memorially devoid ‘Non-Places’ (Augé) might have upon our ability to imagine the future, and whether or not it might be possible to use ‘prosthetics of memory’ (Landsberg, A.) to associate memories (prosthetic or otherwise) with these spaces. Primarily I was thinking about how these prosthetics of memory, and the prosthetic memories (Landsberg, A.) that they create, could be consciously used and manipulated.

So, if daydreaming is “the ultimate tool for incorporating lessons from our past into our plans for the future”, but is an unintentional thought process based on memories that are not necessarily our ‘own’, it would seem that we have very little control over what daydreams might be. Does this in turn mean that we don't have as much control over our futures as we might think? But then, does the sheer volume of prosthetic memories made available through digital and communication technologies also mean that we have more of the raw material required for imagining future possibilities to hand than ever before?

The question of how our technological encounters affect the creation, storage, recollection and dissemination of the materials that construct our memories, and the consequences that these technological prosthetics of memory have upon our everyday experiences, is a recurring theme in my work and something that I consider important. But it would be good to go further than that, and maybe that's where daydreams and The Secret Life of the Brain come in: how do these things affect the rehearsal, or imagining, and potential realisation of our personal futures? What does it mean if they do? And what is the significance of the staggering amount of time and energy devoted to daydreams?

Anyway, just some thoughts, but I feel like there's a lot of potential in daydreams as an area for design investigation and, at the very least, it's an interesting article.