I know I’ve been away for a while, but you’ll no doubt be glad to know I have a selection of interesting stuff lined up (which I’ve been meaning to write up for ages) as I put together my new portfolio site.
First up, everyone loves a bit of glitch art. I’m a particular fan of GlitchBot myself. These pictures came about as a result of me mucking about with masking a photo using Gaussian distributions. This is broadly the result I was going for:
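For the curious, the core of the masking trick amounts to something like the following. This is a plain-Java sketch of the idea rather than my original Processing code (in a sketch you’d use randomGaussian() instead of java.util.Random, and the names and numbers here are just illustrative): sample points from a Gaussian centred on the image, and build up an alpha mask from wherever they land.

```java
import java.util.Random;

// Rough sketch of the masking idea (not the original sketch's code):
// sample (x, y) positions from a Gaussian centred on the image and
// build an alpha mask that is dense near the centre and fades out.
public class GaussianMask {
    public static int[][] buildMask(int w, int h, int samples, double spread, long seed) {
        Random rng = new Random(seed);
        int[][] alpha = new int[h][w];
        for (int i = 0; i < samples; i++) {
            // Gaussian-distributed point around the image centre
            int x = (int) Math.round(w / 2.0 + rng.nextGaussian() * spread);
            int y = (int) Math.round(h / 2.0 + rng.nextGaussian() * spread);
            if (x < 0 || x >= w || y < 0 || y >= h) continue;
            // each hit makes the mask a little more opaque, capped at 255
            alpha[y][x] = Math.min(255, alpha[y][x] + 32);
        }
        return alpha;
    }

    public static void main(String[] args) {
        int[][] mask = buildMask(640, 480, 100000, 60.0, 42L);
        System.out.println("centre alpha: " + mask[240][320]);
    }
}
```

In the actual sketch the mask values end up as per-pixel alpha on the photo, so the image dissolves away from a Gaussian “hot spot”.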
Thanks to a strange edge case, in which an alpha version of Processing 2.0, the crappy Intel integrated graphics on my laptop, and my not calling background() in the draw() loop all collided, I got stuff like this:
Well, the end of year show has come and gone, and all that remains is the write-up. Here’s a quick rundown of the work that I showed and some of the development that went into it. I’ll also show the code I cobbled together from other people’s code (ahem, “wrote”) to do it. If you’ve not seen it already, you might want to take a look at the first and second posts that show the earlier stages. Done? Onwards!
I shot some updated footage at the right resolution for my St Enoch project from two different points of view. In retrospect, shooting at 1920×1080 was probably excessive for my needs, and can cause extra problems (e.g. I don’t have a big enough monitor, resizing stuff on the fly in Processing is non-trivial, and it takes longer to process), so the results here are 1280×720. The ultimate goal is to make some large (A1-ish) prints which will probably be from PDFs anyway.
Went out to grab some better footage for my St Enoch Square project but, thanks to a hilarious(!) mix-up with camera resolutions, didn’t get quite what I was after. Tried some more experiments with it anyway, since it has a more static background (and it’s therefore easier to pick out movement against).
Sunflow is an open source ray tracing renderer which can produce some astonishing results in the right hands. Someone far cleverer than me wrote a Java wrapper for it (the catchily titled SunflowAPIAPI), and another did a tutorial about getting it talking nicely to Processing, which I relied on heavily in getting this working. There is also a Processing library by the same author (the even catchier P5SunflowAPIAPI) but thus far I’ve not been able to get it to do what I want.
Amnon’s post goes into a bit of detail about getting SunflowAPIAPI to read complex geometry from Processing using ToxicLibs; this was my first time using ToxicLibs, but it was relatively straightforward. I wrote a simple class to generate some semi-random geometry using ToxicLibs’ TriangleMesh, plus a couple of lines in that class to prepare it to be passed to Sunflow. In the main sketch I put all the Sunflow calls (setting up the lights, shaders, camera, etc.) in one function which can be triggered by a keypress. This means the sketch is mostly the same as it would be without Sunflow, and can use the OpenGL renderer to view the scene before raytracing; the sketch and the rendering are almost totally separated. I’m not sure whether that’s possible with the P5SunflowAPIAPI library, or with more complex geometry.
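The structure I mean looks roughly like this. It’s a stripped-down plain-Java sketch of the separation, with both renderers stubbed out as strings; none of the names here are the real SunflowAPIAPI or ToxicLibs API, they just stand in for it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the structure: the scene data lives independently of any
// renderer, and the expensive Sunflow setup only runs when explicitly
// requested (in the real sketch, via a keypress). Names are illustrative.
public class SceneSketch {
    // a triangle is just nine floats here; in the real sketch this is
    // a ToxicLibs TriangleMesh
    final List<float[]> triangles = new ArrayList<>();
    boolean raytraceRequested = false;

    void addSemiRandomTriangle(Random rng) {
        float[] t = new float[9];
        for (int i = 0; i < 9; i++) t[i] = rng.nextFloat() * 100f;
        triangles.add(t);
    }

    // the everyday OpenGL preview path; no Sunflow involved
    String drawPreview() {
        return "preview of " + triangles.size() + " triangles";
    }

    // all the Sunflow calls (lights, shaders, camera) live in one place,
    // run only on demand
    String renderWithSunflow() {
        return "raytraced " + triangles.size() + " triangles";
    }

    String frame() {
        if (raytraceRequested) {
            raytraceRequested = false;
            return renderWithSunflow();
        }
        return drawPreview();
    }
}
```

The nice property is that you can delete the raytracing function entirely and the sketch still runs and previews exactly as before.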
So, to my results…
Hello! Just a quick update: following on from my last post I’ve refined the code a bit, letting me run multiple searches from one sketch. Here are the same three searches from last time, compiled into one image. In this case, Tony is green, Gordon is red and Dave is blue.
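The compositing itself is nothing fancy: assign each search its own colour and draw all the results into a single shared image. A rough plain-Java version of the idea (with made-up point data standing in for the actual search results; this isn’t the sketch’s code):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.util.LinkedHashMap;
import java.util.Map;

// Draw each search's result points in its own colour into one image.
public class SearchOverlay {
    public static BufferedImage composite(int w, int h,
                                          Map<String, int[][]> searches,
                                          Map<String, Color> colours) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (Map.Entry<String, int[][]> e : searches.entrySet()) {
            int rgb = colours.get(e.getKey()).getRGB();
            for (int[] p : e.getValue()) {
                // skip anything outside the canvas
                if (p[0] >= 0 && p[0] < w && p[1] >= 0 && p[1] < h)
                    img.setRGB(p[0], p[1], rgb);
            }
        }
        return img;
    }

    public static void main(String[] args) {
        Map<String, int[][]> searches = new LinkedHashMap<>();
        searches.put("tony", new int[][] {{10, 10}});
        searches.put("gordon", new int[][] {{20, 20}});
        Map<String, Color> colours = new LinkedHashMap<>();
        colours.put("tony", Color.GREEN);
        colours.put("gordon", Color.RED);
        BufferedImage img = composite(100, 100, searches, colours);
        System.out.println(Integer.toHexString(img.getRGB(10, 10)));
    }
}
```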
Code will be forthcoming once I’ve refined it a bit more. Adiós!
Following on from my nine images and one video, the next part of the brief was to use Processing to create a response to the same nine words. I’ve included the code for each, as per the brief, although WordPress unfortunately mangles Processing’s nice auto formatting. It also makes this post very long, but you can skip through all the code sections if you’re so inclined. Without further ado…
Following on from my previous posts about the nine words we’re using as inspiration, I thought I’d show some of the ideas I’ve been playing with in the third phase of the project: using Processing to create images. The images here demonstrate some of the things I really like about Processing, like the way an idea can be reworked quickly and easily to produce images that are interesting both visually and conceptually.
All the images in this post are linked with the single word ‘serendipity’. Inspired by the works of people like Ben Fry and Jer Thorp, who work on large scale data visualisations, I decided to plug in some results from the UK’s National Lottery and see what I could come up with. All the results were taken from this archive.
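The first step with any of these pieces is just getting the draw data into a usable shape. As a rough illustration of that step in plain Java (the CSV layout here is an assumption for the example, a date followed by six drawn numbers, not necessarily the archive’s actual format), this counts how often each ball has come up:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the data step: parse draw results (format assumed: a date
// field, then six comma-separated numbers) and count each ball's
// appearances across all draws.
public class LotteryCounts {
    public static Map<Integer, Integer> countBalls(String[] csvLines) {
        Map<Integer, Integer> counts = new TreeMap<>();
        for (String line : csvLines) {
            String[] fields = line.split(",");
            // skip the date in the first field, tally the drawn numbers
            for (int i = 1; i < fields.length; i++) {
                int ball = Integer.parseInt(fields[i].trim());
                counts.merge(ball, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] sample = {
            "2011-05-07, 3, 11, 19, 27, 38, 44",
            "2011-05-14, 3, 8, 19, 25, 31, 49"
        };
        // balls 3 and 19 each appear twice in this made-up sample
        System.out.println(countBalls(sample));
    }
}
```

Once you have counts (or dates, or gaps between appearances) per ball, mapping them onto colour, position or size is where the visual play starts.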
Here are some images I created for a college assignment based on nine words: ambiguity, diaphanous, condition, crash, ephemeral, loop, sequential, serendipity and utopia. All were inspired by examples from an archived Department of Transport book called “Know Your Traffic Signs”, and use a version of the Transport font from CBRD.co.uk. No, I didn’t know there was a website dedicated to cataloguing roads either.
I hit upon the idea of road signs while I was looking for a consistent thread to tie these ideas together; the meanings of all nine words can have so many interpretations that I had to find some way to link each one without being completely literal about it. The brief suggested that the words were related to the artistic process; I guess you could see that as a journey of sorts, and these are markers to guide your path. Maybe it just appealed to my sense of humour. In any case, it is a strong and recognisable visual identity; its ubiquity (to people in the UK at least) makes it ideal for a spot of pastiche.