A collaboration with Tom Chambers of Random Quark
These videos are generated by a neural network trained on images of the Bayeux Tapestry. The process of changing a trained network’s input parameters and observing the output is often described as exploring a space; in that sense, these animations are looping paths within that space.
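The “looping path” idea can be made concrete with a small sketch. This is purely illustrative, not the network used for these videos: it traces a closed circle through a space spanned by two arbitrary direction vectors, so that feeding each point to a generator would produce a sequence whose last frame joins seamlessly back to its first.

```python
import math

def latent_loop(center, dir_a, dir_b, radius=1.0, steps=8):
    """Trace a closed circular path through a latent space.

    The path lies in the plane spanned by dir_a and dir_b,
    centred on `center`. Because the angle sweeps a full 2*pi,
    the path returns to its starting point: a seamless loop.
    """
    points = []
    for i in range(steps):
        theta = 2 * math.pi * i / steps
        point = [c + radius * (math.cos(theta) * a + math.sin(theta) * b)
                 for c, a, b in zip(center, dir_a, dir_b)]
        points.append(point)
    return points

# A tiny 3-dimensional "latent space", purely for illustration.
loop = latent_loop(center=[0.0, 0.0, 0.0],
                   dir_a=[1.0, 0.0, 0.0],
                   dir_b=[0.0, 1.0, 0.0],
                   radius=0.5, steps=8)
```

Real latent spaces have hundreds of dimensions, but the principle is the same: any closed curve through the space yields a looping animation.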
Extending the Bayeux Tapestry is a task that has been attempted before by human minds far more informed than any of the machine learning models we experimented with. The tapestry has been repeatedly revived and claimed and drafted in to bolster various causes at different points in European history (and so it continues), but the human tendency to identify with one side or the other isn’t always helpful.
Algorithms can generate new imagery naively, on a purely visual level, indifferent to the features we’d see as being loaded with meaning or constituting narrative, and unthinking when it comes to context. We thought it was possible that the machine’s distance from the subject matter could give it some apparent insight – generating images of meaningless conflict without heroes or villains, justice or honour.
The entire tapestry constitutes a very small dataset in machine learning terms. Our trained network suffers from “overfitting” – rather than demonstrate a general understanding it tends to generate images that can be traced back to particular scenes in the original. However, it still produces some interesting intermediate states in the space between those scenes.
This video is the second full loop I’ve shot with the Copernican Camera, a motorised camera mount I made to bring a non-geocentric perspective to an everyday context.
The first time it was in Scotland following the stars, this time it was tracking the sun from a pub in North London called The Happy Man.
In May, just after shooting and editing the second loop, I showed both videos opposite each other, as well as the machine used to make them, as part of an exhibition called Cosmic Perspectives at Ugly Duck gallery in Bermondsey.
I’ve written a little about the machine and the ideas behind it here: Copernican Camera Version 2
The black circular frame occupies the same plane as an image of itself – the conflict between the two surfaces causes a rendering glitch called z-fighting, or stitching, which changes as the surfaces move relative to the camera, generating complex patterns.
This was made with Processing, and if you want to play with the code I’ve posted an earlier version on OpenProcessing here:
To create these GIFs I split my original code into two versions. I rendered the glitches at a lower resolution and framerate, making the pixel-scale patterns more apparent. Then I scaled them back up and combined them with a smoother, higher resolution version of just the circles-within-circles without the z-fighting.
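The original sketches were written in Processing, but the compositing idea can be illustrated in plain Python over grayscale pixel grids (all names and values here are mine, and the threshold overlay is just one simple way to combine the layers – the actual blend used for the GIFs may differ): render the glitch layer small, upscale it with nearest-neighbour sampling so the pixel-scale pattern survives, then overlay it on the smooth high-resolution layer.

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour upscale: each low-res pixel becomes a
    factor x factor block, preserving the pixel-scale pattern
    (a smooth interpolation would blur it away)."""
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

def composite(glitch, smooth, threshold=0):
    """Overlay glitch pixels onto the smooth high-res layer;
    glitch pixels brighter than `threshold` win."""
    return [[g if g > threshold else s
             for g, s in zip(grow, srow)]
            for grow, srow in zip(glitch, smooth)]

# 2x2 glitch layer rendered at low resolution...
glitch_lo = [[255, 0],
             [0, 255]]
# ...scaled up to match a 4x4 smooth render, then combined.
glitch_hi = upscale_nearest(glitch_lo, 2)
smooth = [[40] * 4 for _ in range(4)]
frame = composite(glitch_hi, smooth)
```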
These are plots on paper of some of my algorithmic drawings, made using a customised 1980s pen plotter. The plotter draws as the program is run, and there’s a lot of randomness in there, so they turn out differently each time. The teapot will still be a teapot, but its edges and texture will shift, so these are somewhere between editions and originals. I’ve included two plots of the cone here to illustrate the variation.
This 3D printed sculpture shows a two dimensional clock face, with time represented as the third spatial dimension. The structure is fixed but a moving laser highlights a particular cross section of time. This version is just 15cm long, and represents 40 seconds. At this scale, a 12 hour version would be 162 metres long.
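The stated scale can be verified with a line or two of arithmetic, using only the numbers given above:

```python
SCULPTURE_LENGTH_M = 0.15  # the 15 cm sculpture...
SCULPTURE_SECONDS = 40     # ...represents 40 seconds of time

metres_per_second = SCULPTURE_LENGTH_M / SCULPTURE_SECONDS  # 3.75 mm of structure per second
twelve_hours_s = 12 * 60 * 60                               # 43,200 seconds
length_12h_m = twelve_hours_s * metres_per_second
print(length_12h_m)  # 162.0
```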
I was building on the idea from an earlier work: 2D + t: 10:00 AM
The GIF on the right is a quick fast-forward scan of the shape. The video at the bottom is a brief clip at 1 second per second, with sound.
This is how the Copernican Camera looks in its most recent version. There’s a spirit level and a compass built in, so it can be lined up with the celestial pole. The angle of the bend is set to 51 degrees for London’s latitude.
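This is the standard equatorial-mount geometry: tilting the axis to the local latitude makes it parallel to the earth’s axis, and the drive then only has to turn at a constant rate. The sketch below works out the relevant rates (the day lengths are standard astronomical constants; the variable names are mine). Tracking the sun, as at The Happy Man, means one turn per solar day; tracking the stars, as in Scotland, means one turn per slightly shorter sidereal day.

```python
SIDEREAL_DAY_S = 86164.1  # one earth rotation relative to the stars
SOLAR_DAY_S = 86400.0     # one earth rotation relative to the sun

deg_per_hour_stars = 360.0 / SIDEREAL_DAY_S * 3600  # ~15.04 deg/hour
deg_per_hour_sun = 360.0 / SOLAR_DAY_S * 3600       # exactly 15 deg/hour

# The bend angle that points the axis at the celestial pole
# is simply the local latitude (about 51 degrees for London).
LONDON_LATITUDE_DEG = 51.5
```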
The idea behind the project was to bring a non-geocentric perspective to an everyday context. It’s also about closing the gap between “knowing” the fact “the earth rotates” and actually building it into one’s understanding.
There’s this story about Ludwig Wittgenstein, originally told by Elizabeth Anscombe:
He once greeted me with the question ‘Why do people say that it was natural to think that the Sun went around the Earth rather than that the Earth turned on its axis?’ I replied ‘I suppose because it looked like the Sun went around the Earth.’ ‘Well’ he asked ‘what would it have looked like if it had looked as if the Earth turned on its axis?’
This project takes that question at face value and offers an answer. Anscombe explains that Wittgenstein was actually making a point about language and the unsupported assumptions behind a phrase like “looks as if”. There are a lot of things that might be going on there, including the physical sensations that an individual has previously experienced while moving. The Copernican Camera takes just one aspect, the fixed visual frame of reference, and fixes it to the sun or the stars rather than the earth.
I’m interested in whether it’s possible, in reality, to see the earth “as” rotating, in the same way that we talk about seeing an ambiguous drawing like the duck-rabbit “as” one thing or another, and what else might follow from that. Stewart Brand, with his campaign for a photo of the whole earth in the 1960s, and Carl Sagan, with his “Pale Blue Dot” in the 1990s, both had idealistic visions of what an image of the earth from an astronomical perspective could mean. Both offered more of a top-down god-like view than the image sequences that the Copernican Camera captures.
I do think that images of the earth from space probably have contributed to a shift in attitudes, particularly when it comes to environmentalism, but I don’t think it’s easy to integrate scientific knowledge into everyday experience. I might say that I “know” that the sun is 93 million miles away but I’m just reciting a number. I don’t know it in the sense that I know my house is about halfway down my street, and I’m not sure what it would mean if I did.
The first video produced with the Copernican Camera is here.
I made this mount for a camera to compensate for the rotation of the earth, in an attempt to provide a non-geocentric perspective. The 4 second video loop below was produced from one day’s images.
The stars remain the fixed point of reference while the earth moves, as seen in the detail image from one frame. These motion-blurred frames were condensed, using slowmoVideo, from the original day’s sequence of still images, which contained eight times as many frames.
This rotation was shot near Dumfries in South West Scotland.