All posts by Mark

Visualising Climate Change

In a previous post I mentioned that I was going to be working with Active Ingredient on their current project, A Conversation Between Trees. Well, that’s exactly what happened, and we recently spent a few days in Rockingham Forest, Northamptonshire installing the work in the Fineshade Woods art centre. You can read up on the project over at the site, but for now I’m just going to talk a little about what I worked on.

The main part of the installation involves projected visualisations developed in Unity 3D that show real-time environmental data from sensors in trees in Sherwood Forest, UK, and the Mata Atlantica, Brazil. To complement these, we wanted to create something that would add some historical context to the data. Global CO2 levels in particular change slowly, and by very small increments that aren’t linear or continuous, so we wanted to give a more accumulative, temporal impression of the data, allowing more of a ‘big picture’ to emerge.


The machine draws out CO2 data taken from the Mauna Loa observatory in Hawaii. This dataset is the longest continuous record of global CO2 data available, and has monthly readings dating back to 1959. Having data that covers such a long period of time allows us to depict the behaviour and effects of increasing CO2 levels over the years.


The machine consists of a revolving circular platform and an arm that moves between the centre and edge of the paper depending on the CO2 level. The revolving platform represents time, with one revolution of the platform representing one year. After one revolution the paper is removed and the machine starts again. Because each year is on a separate sheet of paper, it gradually builds a stack of paper marked with 52 years’ worth of CO2 fluctuations.

The data readings are monthly, so within one revolution there are 12 readings, and the arm plots lines between each data point. Arduino-controlled stepper motors drive both the platform and arm, and the steps needed to draw lines between data points are calculated using Bresenham’s algorithm. This is done in Processing rather than on the Arduino, as it made sense to do all the calculations on a laptop rather than on the Arduino itself. The Processing sketch then just passes simple movement commands to the Arduino, which drives the motors accordingly.
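
For anyone curious, a minimal Processing sketch of that approach might look like the one below. This isn’t the machine’s actual code – the step values and the single-character serial commands are my own invention – but it shows Bresenham’s algorithm interleaving platform and arm steps to trace a straight line between two readings (direction handling is left out for brevity).

    // A rough sketch of the approach, not the machine's actual code.
    // Each monthly reading becomes a point in step space: x is the platform's
    // rotation and y is the arm's radial position, both in motor steps.
    // The 'P' and 'A' serial commands are invented for this example.
    import processing.serial.*;

    Serial arduinoPort;

    void setup() {
      arduinoPort = new Serial(this, Serial.list()[0], 9600);
      drawLine(0, 1200, 400, 1235); // e.g. from January's reading to February's
    }

    // Standard Bresenham, emitting one motor step per iteration.
    void drawLine(int x0, int y0, int x1, int y1) {
      int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
      int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
      int err = dx + dy;
      while (x0 != x1 || y0 != y1) {
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; arduinoPort.write('P'); } // step the platform
        if (e2 <= dx) { err += dx; y0 += sy; arduinoPort.write('A'); } // step the arm
      }
    }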


Lines are scorched onto the recycled paper using a soldering iron. We did a lot of experiments to test which papers scorched nicely but didn’t burn, and which speed/heat combinations left the nicest mark on the paper. We’re not sure we’ve got this quite right yet, so we’ll keep trying different soldering iron tips, and adjusting the speed to suit the new surface area and heat combinations they create.

As well as forcing an uneasy contradiction by using a carbonising process to make marks, the act of burning symbolises the relationship between global CO2 levels and temperature.

This brings us to an interesting point of discussion that has come to light through doing this project. The team have discussed many times the role of technology-led human intervention in environmentally engaged artwork. As ever, we have no answers, but these projects are about presenting information to provoke debate and dialogue around some very serious but complex issues without being prescriptive, or trying to force our views on the audience.

This is still the first prototype; there are a few bugs to work out, and some more in-depth design decisions to make, but it’s a good starting point, and by the time the next exhibitions come around it will, of course, be flawless.

Urban Immunology

[prototype image]


LAB have undergone a few changes recently. Following our run of workshops throughout Sideshow, our activities and practices had been changing, and with a new commission on the way we decided that it was time for a new name and identity. LAB has now become The Institute for Boundary Interactions, and while the LAB activities and ethos will continue to be a part of what we’re doing, we will also be concentrating on our own creative output.

So, the big news is that we have been awarded a commission by the Broadway Digital Innovation initiative, Making Future Work, to undertake our proposed project, Urban Immune System Research. Here’s the opening blurb from our proposal:

Urban Immune System Research [UISR] is a critical design project exploring parallel futures in the emergence of the ‘smart-city’ and the appropriation of humans as data-agents in urban systems.

“Our cities are non-living and yet our cities are growing and we are covering every square inch of this planet which means that we are going to engineer things that we can live with, that give us some value and purpose.”
– Andrew Hessel – The Internet of Living Things

LAB will design, prototype and produce a range of speculative future technologies that will take form as a mixture of wearable and portable devices, mixing consumer electronics, couture fashion, ecological systems and organic components. These will constitute strategies for creating more fluid interactions between humans, technology and both the built and natural environment.

Re-imagining the urban environment as a multi-cellular living organism, what might threaten or support the health of the city? If the economy is our digestive tract, what is it trying to feed? If communications networks are the nervous system, what is the city trying to feel? How might we design more sustainable and healthy futures by modelling our cities, peripheral devices and interactions with technology on ecological systems?

We’re really excited about this project as it gives us the chance to further develop our collective practice, and to explore an area of research that we feel has real potential.

Our new site is currently under development, but there’ll be much more information and regular updates on the project available soon, and we’ll be blogging our activities and progress on the Making Future Work site too. There will also be a more comprehensive archive of all the activity from the Sideshow Open Laboratory residency.

Check out the MFW site soon to see the other commissions – there’s going to be some great work coming out.

Active Ingredient: A Conversation Between Trees

Data Visualisation

[image from Active Ingredient]

Over the next few months I’m going to be working with Active Ingredient to develop a dynamic sculpture for their project A Conversation Between Trees. The project combines environmental data gathered from trees in Nottingham’s Sherwood Forest and Brazil’s Mata Atlantica. The data from each location is then visualised (above image) side by side to illustrate the contrast between the two environments, and represent a form of conversation.

Here’s what they say about the project:

“Welcome to a forest that spans time and location… a journey from the temperate north to the tropical south to discover the invisible forces at play, to reveal a story of 150 years of climate and environmental change.”

A Conversation Between Trees connects trees in different environments, using sensors connected to mobile phones to visualise and interpret the sensor data, as part of a new locative artwork. It follows on from research developed as part of the Dark Forest Project.

Active Ingredient will work with environmental sensors including CO2, temperature and humidity, to interpret the carbon cycle and the sensitivity of this process to climate change (carbon cycle feedback). This will take place in forest and woodland environments in a series of locations.

The sculpture will join the work they have already developed for the upcoming series of exhibitions, and will offer an alternative to the data shown through the visualisations, giving a physical and accumulative representation of the contrasting environmental conditions, and the significance of the changing climates over time.

Human sensor data maps

[image from Active Ingredient]

Showing environmental data in ways that are meaningful to people, but still true to its complexity, is extremely problematic. To find compelling ways of doing this, Active Ingredient have undertaken several local community-based exercises to map and visualise environmental data and the longer-term effects of climate change. One of these was a workshop with school children in Brazil [images above], where the children created data maps using felt to depict the environmental conditions as they perceived them:

As objects, data maps, they are quite beautiful; the colours, layout and style (to use the language of Robin, Active Ingredient’s programmer) were simple yet evocative representations of the data they collected as ‘human sensors’.

I really like these data maps, and although they are highly personal representations, I think the intuition involved in making them and their highly evocative illustration also describes part of our approach to designing the installation. Simply, the idea is to create something that changes and evolves to show abstract data, and the tensions and dialogues at work within it, in compelling ways that people can immediately and tangibly make sense of.

I’ll post more as things develop, but have a look through Active Ingredient’s website for more information and regular updates, including the exhibition dates and locations.

Project LiloRann


[Original Photograph © Anurag Agnihotri]

Back in the summer I spent a great 3 months or so doing an internship with Superflux, where I worked on a few new projects that they were starting up, some of which I will continue to be involved in over the coming months.

One of those projects, LiloRann, has just recently been launched. Here’s the elevator pitch:

Could we reverse ecosystem degradation by growing organic structures from unruly, invasive plants?
This is just one of the many possibilities Project LiloRann will explore in the deserts of North Gujarat, India; an area that exemplifies some of the greatest challenges posed by climate change, while being rich with the potential for ecological regeneration and resilience.

Rather than focusing too heavily on outcomes and final products, the project will instigate and maintain a set of processes that enable the combination of local knowledge and more advanced technological practices, such as bioengineering, to tackle the effects of desertification in locally sustainable ways. To do this the project will operate on two levels. Firstly, it’s an ecology project that aims to help local communities in the North Gujarat region of India build sustainable resilience against ecosystem degradation, and to see tangible benefit as a result.

Achieving this with any level of success requires an approach that is sensitive to, and takes full advantage of, the knowledge, expertise and ability of local communities. So, secondly, the project will create the opportunity for collaborative, interdisciplinary knowledge sharing.

This test-bed for experimentation and collaboration between a unique, interdisciplinary team and local citizens aims to find ways of addressing the global issues of environmental degradation by empowering communities to take on the effects of such changes at a local level. Ultimately, it is our hope that by sharing knowledge in this way, those most at risk from climate change can be better equipped to counter its effects.

Rather than a top-down imposition of expertise, the project will aim to create the conditions for emergent forms of new knowledge and ecological practices to be developed through collaborative experiments between members of the project team, local farmers, ecologists, and anyone else who’s interested. By monitoring and documenting this process, the team hope to derive a framework for how such projects might be conducted more efficiently and sustainably in future. While interest in collaboration to engender emergent practice has been around for a while, it is still something very difficult to achieve, especially when the project requires the combination of very disparate sets of knowledge. The hope is that these difficulties can be somewhat overcome by working within a very focused region, allowing new strategies for effective knowledge sharing to be generalised from the examples provided during the project, while still seeing real, tangible results in the ecology of the region.

It’s only just beginning, so it’s difficult to say too much about it yet, but I think it’s an exciting project and I can’t wait to see what happens next. As well as more detail about the project’s aims and approaches, the LiloRann site has a lot of information, and will include updates as the project progresses and details about how potential sponsors and collaborators can get involved.

Oh, and some other projects that I worked on with Superflux are also under way – I’ll post more here about them as and when.

The World’s Largest Photobooth

[projection]

Recently LAB consulted on the production of The World’s Largest Photobooth, which has just opened at the Nottingham Contemporary. It’s a free-to-have-a-go interactive installation to promote the current Diane Arbus exhibition. It looks like people are having a lot of fun with it, so if you’re in Nottingham get down there and have a go!

When you sit on the stool, a webcam behind the two-way mirror detects your face and tells a Canon DSLR (also behind the mirror) to take four shots of you. These photos are then processed and compiled into a Polaroid-style set of four images, projected onto the 30 ft wall behind the booth (as shown in the image above), and uploaded to Flickr. The photos are given a title and description, and added to a photoset in Nottingham Contemporary’s photostream.
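
As a rough illustration of the compositing step – the booth’s real image processing was done in openFrameworks (more on that below), and the filenames, sizes and margins here are all made up – a minimal Processing sketch of the same idea might look like this:

    // A rough Processing illustration of the compositing step, not the booth's
    // actual code (that part was written in openFrameworks). Assumes four
    // captured shots, shot1.jpg .. shot4.jpg, in the sketch's data folder.
    int shotW = 400, shotH = 300, margin = 20;

    void setup() {
      PGraphics strip = createGraphics(shotW + 2 * margin, 4 * shotH + 5 * margin);
      strip.beginDraw();
      strip.background(255); // white, Polaroid-style border
      for (int i = 0; i < 4; i++) {
        PImage shot = loadImage("shot" + (i + 1) + ".jpg");
        strip.image(shot, margin, margin + i * (shotH + margin), shotW, shotH);
      }
      strip.endDraw();
      strip.save("strip.jpg"); // ready to be projected and uploaded
    }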

For the most part the booth’s custom software is open source. All the face detection, camera control and image processing was done by Brendan Oliver (who’s got a more detailed write-up on his blog) using openFrameworks, along with some Flash, PHP and the Canon SDK. A few of us from LAB were involved in implementing the booth’s Arduino-controlled flash and an automatic Flickr uploader written in Processing.
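
The uploader I can sketch loosely – this is a guess at its shape rather than the code we ran, with the folder path assumed and the actual Flickr call stubbed out (the real thing would go through a Flickr API library, which I won’t try to reproduce from memory):

    // A guess at the shape of the uploader, not the actual code: watch a
    // folder for new photos and hand each one to an upload routine.
    import java.io.File;
    import java.util.HashSet;

    String watchedFolder = "/photos/booth"; // assumed output folder
    HashSet<String> seen = new HashSet<String>();

    void draw() {
      File[] files = new File(watchedFolder).listFiles();
      if (files == null) return; // folder missing or unreadable
      for (File f : files) {
        if (f.getName().endsWith(".jpg") && !seen.contains(f.getName())) {
          seen.add(f.getName());
          upload(f);
        }
      }
    }

    void upload(File f) {
      // Stub: the real sketch would authenticate and post to Flickr here,
      // then add the title, description and photoset mentioned above.
      println("uploading " + f.getName());
    }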

[photobooth]

Generating Contextual Narratives

[Generating Contextual Narratives: Test_01 from Mark Selby on Vimeo]

Generating Contextual Narratives is a project, made in collaboration with Mike Golembewski, exploring ways of generating more experientially and contextually appropriate narratives. The broad concept here is that current technological trajectories suggest a future where all data is captured indiscriminately and profusely, so it will become harder and harder to engage with records of experiences in personally meaningful ways. Rather than ‘total capture’, the recording of everyday experiences might be tied more closely into the enactment of those experiences through the objects that we use to do so. The resulting data (photos, texts, sounds etc.) are contextually specific to the events that they depict, allowing more meaningful narratives of those events to be constructed and, consequently, enabling more meaningful encounters with memories of experience in the future.

Bicyclopse (working title) is the first (rough) prototype in a series of devices that investigate how we might use technologies to achieve this. It’s a camera made with an Arduino-controlled iPhone, running a custom application, mounted on the front of a bike. The iPhone’s camera is triggered by a tone sent from the Arduino every time a reed switch attached to the bike’s fork is closed by a magnet on the front wheel. This means that one photograph is taken for every revolution of the front wheel.

These still photographs are then compiled to make a film. Visual and temporal distortions of the video narrative are determined by the function of the bike – as the bike speeds up, the rate of capture increases and so the footage appears to slow down. Visual distortions occur when the bike turns a corner or is ridden over a rough patch of road. This is caused by the quick movement of the camera and the way the iPhone camera’s sensor is scanned line by line – a rolling shutter (see Wikipedia for an explanation). In combination, these effects give a point of view specific to the bike and the way in which it is ridden.
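
A quick sketch of the arithmetic behind that temporal effect, with assumed numbers rather than anything measured from the bike: because one frame is captured per wheel revolution (a fixed distance), the footage is always a time-lapse, but the speed-up factor falls as the bike accelerates.

    // The capture/playback arithmetic, with assumed numbers: one frame per
    // wheel revolution, compiled into a film at a fixed playback rate.
    float wheelCircumference = 2.1; // metres - assumed, roughly a 700c wheel
    float playbackFps = 25.0;       // assumed playback frame rate

    // Factor by which scene time is sped up on screen at a given riding speed.
    float timeLapseFactor(float speedMs) {
      float framesPerRealSecond = speedMs / wheelCircumference;
      // One real second occupies framesPerRealSecond / playbackFps seconds of
      // footage, so scene time is sped up by the inverse of that ratio.
      return playbackFps / framesPerRealSecond;
    }

    void setup() {
      println(timeLapseFactor(2.5));  // ~21x speed-up riding slowly
      println(timeLapseFactor(10.0)); // ~5x speed-up riding fast
      // The factor falls as the bike accelerates, which is why the footage
      // appears to slow down as the bike speeds up.
    }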

Camera Explora at Territorial Play

[camera explora_territorial play]

Camera Explora recently appeared at Radiator’s Territorial Play, the opening event of their Tracing Mobility programme. Unfortunately I didn’t get much chance to publicise this fact in the run-up to the event, as I was too busy trying to work out what kind of string would give the most friction on a rubber pulley.
Embroidery cotton is quite good.

There were two main elements – activity and installation. The activity bit involved people going out and exploring the city using the camera, which is now a repackaged Google G1 phone running a custom-made Android application. That bit was programmed by Sam Meek, who’s done a great job in spite of the somewhat … ‘limited’ hardware.

Those that took part seemed to respond well to the experience. A few said that they found it frustrating at first to be so constrained in what they could take photos of, but eventually began to resist the urge to photograph the first thing they came across and took the time to have a proper look around first.

[camera case prototype]

The second part – the installation part – was an Arduino-controlled CNC plotter (hence the business with the string) that drew lines onto a paper map of the city between the locations where each photograph was taken, as they were being taken. Each photo represents, in theory, something the photographer found interesting or noteworthy. Physically connecting these instances on a paper map ties them all together. It links them in memory and space, as well as providing a tangible, non-photographic mnemonic of those experiences.*
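
For a flavour of how that mapping step might work – and to be clear, this is a hedged sketch rather than the installation’s actual code; the map bounds, plotter size and serial command format are all assumptions – a photo’s latitude and longitude can be projected onto the paper map and sent to the Arduino like so:

    // A hedged sketch of the mapping step, not the installation's actual code.
    // A photo's latitude/longitude is projected onto the paper map and sent to
    // the Arduino as a move command.
    import processing.serial.*;

    Serial plotterPort;

    float minLat = 52.93, maxLat = 52.97; // assumed map bounds, Nottingham-ish
    float minLon = -1.20, maxLon = -1.12;
    int plotterW = 4000, plotterH = 4000; // assumed plotter size in motor steps

    void setup() {
      plotterPort = new Serial(this, Serial.list()[0], 9600);
      moveTo(52.955, -1.15); // e.g. when a new photo's location arrives
    }

    // An equirectangular projection is fine at city scale.
    void moveTo(float lat, float lon) {
      int x = round(map(lon, minLon, maxLon, 0, plotterW));
      int y = round(map(lat, maxLat, minLat, 0, plotterH)); // north at the top
      plotterPort.write("M " + x + " " + y + "\n"); // hypothetical command format
    }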

The aesthetic of the plotter is quite rough. Although it’s absolutely a work in progress, this was, for the most part, intentional – because it was an installation rather than a product design, I wanted it to look like the kind of eccentric, unrefined, but very personally engaging and valuable machine that someone might have built for themselves. The details of that were worked out by just building as much of it as possible out of stuff that I had lying around. Whether or not that was the best strategy is up for debate.

[plotter closeup]

The project is about exploring new places, so one concern leading into the event was that, because most of the participants would be from Nottingham, the intended experience might be somewhat diluted. However, even those that were familiar with the city enjoyed actively seeking out things that they might not have seen or noticed before, which certainly seems to suggest more attentive exploration of the city. Some even asked to keep the photos they had taken, as well as the route map that had been drawn, when they returned. It’s nice when things like this come out in testing.

Anyway, not an especially in-depth write-up just yet – I think I’d need to run it again to do that. There were also a few minor technical issues that we couldn’t iron out in the time available. So although things didn’t run quite as smoothly as we would have liked, it helped us see exactly what was and wasn’t right about the prototype, both technically and in terms of the design. There’s nothing that can’t be fixed.

Not bad for a first go. Fun too – it’s always good to see people using and enjoying something that you’ve made.


* This is not to say that tangible things are necessarily, or inherently any more or less valuable than digital things. One of the aims of the project is to investigate ways of generating meaningful records of experiences, and the play between digital and physical things is just one way of looking at how to do that.

Introducing LAB!

Shortly after moving up to Nottingham I began working with Mat Trivett and Mike Golembewski to set up an open, collaborative collective of creative practitioners in Nottingham. We called it LAB, and we’ve just confirmed our first in a series of workshops based around open source prototyping and collaborative practice:

LAB #1: Processing Workshop – March 16th. 7-10 pm at The Orange Tree (Map).

This’ll be an intro to the Processing development environment and programming language. If you’re interested in coming along, you’ll need to sign up – the link to the registration form can be found in the event post. The reason for this is so that we can get an idea of who’s coming and tailor the workshops appropriately.

We’re in the process of confirming dates for the next two sessions, but the second will be an Arduino workshop, and the third will be a putting-what-you’ve-learned-into-practice project workshop. If you are interested in coming along to either of these you can sign up to the LAB mailing list and we’ll keep you posted, or just keep an eye on the blog for further announcements. Alternatively, if you’re interested in working with us, do feel free to get in touch.

Also, it’s worth mentioning that we won’t be entirely focussed on digital or electronic things. That’s just what we’re kicking off with, so stand by for crocheting and ice-sculpting workshops. Or something.

Exciting times!

Living Dangerously: Earthquake Data

The other day, as a way of avoiding work that I really should be doing, I made a few basic but necessary improvements to my earthquake RSS feed reading code. I haven’t touched it for ages, but sometimes it’s good to come back to these things after a little while away.
First I tweaked it so that the program only displays the earthquake data and writes it to the serial port (for the Arduino) when there is new activity – before, it was sending the data every time it checked, which is not helpful. So now we only get new data if there’s a new earthquake.

So you can see from the screen grab that the magnitude of the most recent earthquake was 59 (well, 5.9 really), while the depth was 50 km. In the Processing IDE’s text area at the bottom it says “no new activity”, meaning that the data displayed is from the last earthquake.

Next I added a magnitude threshold so it only picks up earthquakes above a certain magnitude. This is mostly because small earthquakes and tremors are surprisingly frequent, sometimes occurring every few seconds. Now the program only parses a new one roughly every 10 minutes.

“Waiting…” means that the last new earthquake that occurred wasn’t big enough for us to bother with. The idea is more to respond to noteworthy occurrences – on the website the RSS feed is from, they distinguish between earthquakes above 2.5, and those of 5.0 and over. However, earthquakes over 5.0 are still surprisingly common, so next I think I’ll look into the classification system a bit more and work out a suitable magnitude threshold.

This works nicely because the plan is to have the Arduino do something physical in response to earthquakes, but I don’t really want that to happen too often, so adjusting the magnitude threshold changes how often data is sent to the Arduino. Well, roughly anyway, but that’s a good thing.
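
To give a sense of the logic, here’s a stripped-down Processing sketch of the two checks described above – the new-activity test and the magnitude threshold. It’s not my actual code: the feed URL and tag names are placeholders (a real feed, like the USGS one, structures its data differently), and the serial output just echoes the magnitude-times-ten figure mentioned earlier.

    // A stripped-down sketch of the two checks, not the actual code. The feed
    // URL and tag names are placeholders.
    import processing.serial.*;

    Serial arduinoPort;
    String lastQuakeId = "";
    float magnitudeThreshold = 5.0; // still to be tuned, as discussed above

    void setup() {
      arduinoPort = new Serial(this, Serial.list()[0], 9600);
    }

    void draw() {
      if (frameCount % 3600 == 0) checkFeed(); // roughly once a minute at 60fps
    }

    void checkFeed() {
      XML feed = loadXML("http://example.com/earthquakes.rss"); // placeholder URL
      XML item = feed.getChild("channel").getChild("item");     // most recent entry
      String id = item.getChild("guid").getContent();
      float magnitude = float(item.getChild("magnitude").getContent()); // assumed tag

      if (id.equals(lastQuakeId)) {
        println("no new activity"); // nothing new since the last check
      } else if (magnitude < magnitudeThreshold) {
        println("Waiting..."); // new quake, but not big enough to bother with
        lastQuakeId = id;
      } else {
        lastQuakeId = id;
        arduinoPort.write(int(magnitude * 10)); // e.g. 5.9 is sent as 59
      }
    }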

The XML feed also contains the date and location of the earthquake, so I’ll be doing something with those next – probably nothing too exciting; sometimes it’s just nice to have the metadata.

Next the Arduino code also needs doing, not to mention the design and implementation of the hardware and output. At this rate, it’s going to take ages.

Anyway, nothing too exciting, just checkin’ in really.

Introducing Technology Heirlooms

Richard Banks has written a blog post explaining a bit about ‘Technology Heirlooms’ and some of the research he and others at MSRC are conducting into it. It’s quite a difficult subject to explain – at least for me, anyway – but this is a great introduction.

It’s good stuff – very interesting and I’d say, very important.

Here it is.