Control: a 1st showing tech deep dive

Control is an interactive documentary for 20 audience members at a time, made from an ongoing series of interviews with people around the world about the future and what role we can play in it, both as a species and as individuals.

It is a piece full of technology in its presentation. I am writing this having just come out of our first stage of development and public showing of the work, and before we embark on phase two of refining and adding to the digital elements and the general shape and feel of the piece.

Working with digital elements in our work is always a process of negotiation – both with ourselves and what we can actually achieve with the time, money and ability we have, and with how expressive or artistically effective the tools in question are. Tech can both hinder and make a piece, and the realities of delivering it can break all of us, so navigating this is a constant reality of the work we make.

Like a lot of our work, Control is only made possible through the support of our partners, who allow us the time and space to develop our ideas. The Arc in Stockton and Theatre in the Mill in Bradford have been long-time supporters of ours, and on this project they are joined by BAC down in London as well as the AHRC and the University of Manchester, so a huge thanks to all of them for providing invaluable space, time and resources for artists to develop.

Control is at a phase now where the initial research and interview stage is in full swing and the collected interviews and audio are excellent. The idea for the live work and the digital palette we are using have met for the first time and it is immediately apparent what the opportunities and language of our chosen medium can be.

 

For us this only really becomes apparent when you put people in front of it, or in it, and let them press the buttons, and suddenly the full potential of a thing emerges. From a purely design perspective it is the first major live event we have created in a while. We wanted to give audiences a tactile way to interact with the hours of interviews we have collected. Early on we talked about escape rooms and ways in which audiences would be able to touch and manipulate the data, about data visualisations and about surrounding the audience with the content of the work.

 

[Photo: the twenty cubes of Control]

We arrived at the image of twenty cubes that the audience manipulate and build with in the space. This post is going to break down how we used and controlled them – what we managed to achieve so far in development and what is next for the digital elements of the work. It's going to be quite technical and may assume some knowledge of the things I am talking about, so drop me a line if you have any questions.

There are three main aesthetic components to Control as a live experience:

– sound, both collated from the interviews and composed sound with our collaborator James Hamilton

– video and visual – both found footage and new work created by our other collaborator Simon Wainwright, projection mapped onto the cubes

– light and architectural builds made from the cubes – structures and shapes built by the audience

These three components all had to come together from a technical perspective, to be controlled and driven from a single brain for the whole work.

 

 


The whole show was run on a piece of software called Touch Designer, with smaller components written in Processing and Python. The build involved 21 Raspberry Pis, 1280 LEDs, 20 speakers, a gaming PC, 4 projectors, 20 batteries, a high-speed router and other bits and pieces, plus custom fittings printed on our 3D printer.

 

The cubes:

The cubes are hollow white acrylic 50cm x 50cm boxes. We wanted sound that could play both through a PA and on an individual cube basis, and cubes that could carry video and light textures, display and form structures, and create temporary installations for audiences to explore.

We fitted each cube with a Raspberry Pi inside, equipped with a light HAT made up of a 64 LED grid. Each LED was individually addressable, which meant that spread over the 20 cubes we had 1280 LEDs at our control. Using Raspberry Pi 3s (which have wifi by default) we created a local network that all the cubes were connected to, meaning we could send them commands by OSC over the network with RGB values to set them to specific colours, to flash and sequence, or even to respond to sound.
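To give a rough idea of the Pi side of this, here is a minimal sketch of an OSC listener that floods the whole 64-LED grid with one colour. It assumes a Pimoroni Unicorn HAT for the LED grid and the python-osc library for the networking, and the /led address and argument layout are invented for the example rather than taken from the show's actual protocol.

```python
# Minimal Pi-side OSC listener (illustrative only).
# Assumptions: a Pimoroni Unicorn HAT provides the 8x8 LED grid and
# python-osc handles the networking; /led is a made-up address.
import unicornhat as uh
from pythonosc import dispatcher, osc_server

uh.set_layout(uh.HAT)
uh.brightness(0.6)

def set_all(address, r, g, b):
    """Fill the whole 64-LED grid with one RGB colour."""
    for x in range(8):
        for y in range(8):
            uh.set_pixel(x, y, int(r), int(g), int(b))
    uh.show()

disp = dispatcher.Dispatcher()
disp.map("/led", set_all)  # e.g. /led 255 255 255

server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 9000), disp)
server.serve_forever()
```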

 

[Photo: the Raspberry Pis]

 

Running on the same machine was a small Processing sketch designed to play sound files on OSC triggers sent across the network. Getting sound running in Processing on the Pis was a bit trickier than expected, as the only library that currently works is Beads, which handles samples and files in a much fussier way than I wanted and is also relatively badly documented. There were also memory issues on some of the larger files, so the plan is to rewrite this in openFrameworks. But once this is all working, the cubes become twenty networked and synched light and sound boxes for audiences to manipulate, arrange and explore.
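The actual player is a Processing sketch built on Beads, but the shape of it is roughly this – sketched in Python purely for illustration, assuming python-osc for the networking and pygame for playback, with a made-up /sound address:

```python
# Illustrative stand-in for the Processing/Beads player on each cube.
# Assumptions: python-osc and pygame are installed; /sound is a made-up address.
import pygame
from pythonosc import dispatcher, osc_server

pygame.mixer.init()

def play_file(address, filename):
    """Load and play a sample when an OSC trigger arrives, e.g. /sound play.wav"""
    pygame.mixer.Sound(filename).play()

disp = dispatcher.Dispatcher()
disp.map("/sound", play_file)

osc_server.BlockingOSCUDPServer(("0.0.0.0", 9001), disp).serve_forever()
```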

 

The mapping and brains of the show:

 

The logic and main brains of the piece are all handled in a complex patch we created in a platform called Touch Designer.

Touch is an incredibly versatile piece of software that can handle and talk most computer protocols, and is a perfect mix of visual higher-level programming while also letting you get your hands dirty and type Python pretty much anywhere. We've flirted with it before, but after this project it has become my current go-to for everything. In Touch we created a virtual world the size of our playing space, and used projection mapping techniques to teach the software where the projectors were and how to interpret the space. The idea here is that you teach the computer how to project the virtual world back onto the real one, so that if you place a virtual 50cm x 50cm cube in the exact middle of the space on the computer, and replicate this in the physical world, the two will line up. This theory can then be applied to four projectors at once and replicated over twenty cubes, and as long as the physical and virtual worlds are the same you have a playing space mapped in three dimensions, with objects being hit from four angles in 360 degrees, meaning you can walk in and amongst the virtually projected world as if it was real.
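In practice the virtual side of that correspondence is just a case of keeping the geometry in Touch at real-world scale. A tiny sketch of the idea using Touch Designer's built-in Python, where cube_01 is a made-up Geometry COMP and the units are treated as metres:

```python
# Inside Touch Designer: keep a virtual cube at real-world scale and position
# so the mapped projection lands on the physical cube.
# 'cube_01' is a hypothetical Geometry COMP; units treated as metres.
cube = op('cube_01')
cube.par.sx = cube.par.sy = cube.par.sz = 0.5            # a 50cm x 50cm x 50cm box
cube.par.tx, cube.par.ty, cube.par.tz = 0.0, 0.25, 0.0   # dead centre of the space, sat on the floor
```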

 

[Photo: Touch Designer and the cubes]

 

We spent a long time finessing the projection system in a warehouse in Leeds. It's basically an adaptation of a technique used for mapping onto architectural or unusual objects, but extended across multiple projectors and onto moving objects. What Touch allowed us over the other platforms we tried (although it may be possible with other tools) was to create a full 3D world where each object could fully exist and move, and where the entire piece could be visualised on the computer screen and then mapped back onto the real world.

Because Touch easily handles large amounts of data it was also responsible for managing the LEDs in the cubes and triggering the sound over OSC. That also meant that any more complex stuff, like the audio-responsive signal, was handled on the computer end, with the Pis only receiving the light data rather than doing any audio routing or processing themselves.
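Pushing that light data out of Touch is then a one-liner per cube through an OSC Out DAT – something along these lines, where oscout_cube1 is a made-up operator name and /led matches whatever the Pi listener expects:

```python
# Inside Touch Designer: send an RGB value to one cube over the network.
# 'oscout_cube1' is a hypothetical OSC Out DAT pointed at that cube's Pi.
op('oscout_cube1').sendOSC('/led', [255, 255, 255])
```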

In order to control and structure the whole show we created a virtual object for each cube, a 'container', which held all of the necessary data for each cube in the space: position in 3D space – texture projected onto it – LED colour and behaviour – audio – and some other scene-specific bits and pieces.
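Conceptually each container boils down to something like this – a Python sketch of the data rather than the actual Touch network, with field names that are mine, not the patch's:

```python
from dataclasses import dataclass

# Rough sketch of what each cube's 'container' keeps track of.
# Field names are illustrative, not the ones used in the Touch patch.
@dataclass
class CubeState:
    x: float           # position in the playing space
    y: float
    z: float
    rotation: float    # rotation of the cube in the space
    texture: str       # video file projected onto the cube
    led_rgb: tuple     # (r, g, b) colour for the 64-LED grid
    led_mode: str      # solid, flashing, audio-responsive, etc.
    sound_file: str    # sample triggered on the cube's own speaker
```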

The cueing of the show was then done in a sort of pseudo code language in a table (another thing that Touch is very good at) that we would then step through.

[Screenshot: the cue table in Touch Designer]

 

So for every cue we would send a small expression to each cube container, which would in turn process it and send the relevant info to the cube in the real world. The structure of the expressions looked a bit like this:

b 0 1 2 3 p/255,255,255/1 p1/play.wav/1 texture.mp4

It looks complicated but is actually relatively simple. Each chunk separated by a space is a separate command for the cube in question:

mode / rotation / posX / posY / posZ / LED sequence info / sound player info / file to play as texture

So we would write 20 of these in a row in a table to make a cue. The piece was then structured by stepping through the lines of the tables, moving and changing the cubes as appropriate. We rigged up a Launchpad so we could run overrides if we needed to control cubes on an individual basis, and for one-off triggers that we didn't have time to program into the full flow of the cues, like turning the floor on and off.
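To show how little machinery each of those lines needs, here is a rough Python sketch of pulling one expression apart into the fields described above – the splitting mirrors the layout, but it is an illustration rather than the code in the Touch patch:

```python
def parse_cue(expression):
    """Split one cue expression into the per-cube fields described above.

    Example: 'b 0 1 2 3 p/255,255,255/1 p1/play.wav/1 texture.mp4'
    """
    mode, rot, x, y, z, led, sound, texture = expression.split(" ")
    return {
        "mode": mode,                 # mode flag for the cube
        "rotation": float(rot),
        "position": (float(x), float(y), float(z)),
        "led": led.split("/"),        # ['p', '255,255,255', '1']
        "sound": sound.split("/"),    # ['p1', 'play.wav', '1']
        "texture": texture,           # video file projected onto the cube
    }

print(parse_cue("b 0 1 2 3 p/255,255,255/1 p1/play.wav/1 texture.mp4"))
```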

We were also sending and receiving sound to and from the sound desk, where James was running analogue synths and playing samples and composition through Ableton. The system takes time to program – largely because having to repeat twenty of anything is a time-expensive activity – which is one of the big changes I want to make to the system, speeding up the plotting in the future with shortcuts. But it allows us to build relatively complex states and mapping layouts with some relatively straightforward expressions, and in an elegant enough way.

The next iteration of the system is to bring some liveness back into the cubes – allowing the states to be performed a bit more, and to respond both to the audio and video content and to the audience. We have some plans for how audiences could interact with individual boxes. OSC and the network give us two-way control between each box and the brain of the projection mapping, and we want to explore some basic forms of audience input or control on each cube, to allow them to affect the cubes as they build them up. Also, each cube is basically a small computer in a big box, and we want to push at that more.
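None of that audience input exists yet, but the plumbing already does – a sketch of the kind of thing we mean, assuming python-osc, RPi.GPIO and a button wired to a made-up pin, sending to a made-up address on the brain:

```python
# Sketch of a cube sending audience input back to the show brain over OSC.
# Assumptions: python-osc and RPi.GPIO; pin 17, the brain's IP/port and the
# /cube/1/press address are all made up for the example.
import RPi.GPIO as GPIO
from pythonosc import udp_client

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)

brain = udp_client.SimpleUDPClient("192.168.0.10", 8000)  # the Touch Designer machine

def on_press(channel):
    brain.send_message("/cube/1/press", 1)  # tell the brain this cube was touched

GPIO.add_event_detect(17, GPIO.FALLING, callback=on_press, bouncetime=200)

input("Listening for presses - press return to quit\n")
GPIO.cleanup()
```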

There are probably other ways to structure a setup like this and to compartmentalize a bit more, so you aren't looking at the entirety of the world in one go, which can be overwhelming – but the true test of whether we have built a tool is whether we can pick it up and use it straight away when we return to develop the next stage in a month. I have built and coded so many things that made sense at the time, in the heat of running or making an installation, but that can never transition into a full-blown tool you can iterate on, because they are so moment-specific and don't make sense anymore when I return to them months down the line. With this one I feel it can scale and have more features bolted on as the concept for the work develops.

We have worked out our palette and what our tools are; now it's time to push at what they can do narratively and in terms of the audience's experience.

 

