Sunflow is an open-source ray-tracing renderer which can produce some astonishing results in the right hands. Someone far cleverer than me wrote a Java wrapper for it (the catchily titled SunflowAPIAPI), and someone else wrote a tutorial on getting it talking nicely to Processing, which I relied on heavily to get this working. There is also a Processing library by the same author (the even catchier P5SunflowAPIAPI), but thus far I’ve not been able to get it to do what I want.
Amnon’s post goes into a bit of detail about getting SunflowAPIAPI to read complex geometry from Processing using ToxicLibs. This was my first time using ToxicLibs, but it was relatively straightforward: I wrote a simple class to generate some semi-random geometry using ToxicLibs’ TriangleMesh, plus a couple of lines of code that prepare it to be passed to Sunflow. In the main sketch I put all the Sunflow calls (setting up the lights, shaders, camera, etc.) in one function which can be triggered by a keypress. This means the sketch is mostly the same as it would be without Sunflow, and can use the OpenGL renderer to view the scene before raytracing; the sketch and the rendering are almost totally separated. I’m not sure whether that is possible with the P5SunflowAPIAPI library, or with more complex geometry.
Hello, and a somewhat belated happy new year! I hope 2011 has been good to you so far. I’ve been pretty busy with both official college work and personal projects, and it’s the latter I want to show today. I put together a wee compilation of some of the sketches I’ve made recently as a “showreel” of sorts (with one eye on interviewing for university in the immediate future). Some of these aren’t really suitable for web deployment, and doing it as video lets me crank up the detail and quality. It also gives me the opportunity to make some metal to go behind it.
Here’s a quick snapshot of how this is developing. This searches the Guardian’s Open Platform API for mentions of everyone’s favourite whistleblowing website. The bars map the number of articles per month, with 12 o’clock as January. You can see a small peak in April, when the Collateral Murder video was released, bigger peaks in July and October as the Afghanistan and Iraq logs were published, and a massive spike in December as “Cablegate” (oh, how I loathe the use of ‘gate’ as a suffix for anything mildly controversial!) gets going. The article headlines are arranged in date order, but on a uniform scale. This is still a work in progress, but I’m quite pleased with how it’s shaping up so far.
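For anyone curious, a clock-style layout like this boils down to mapping each month index to an angle, with January rotated back a quarter turn so it sits at 12 o’clock (screen coordinates put 0 radians at 3 o’clock). A minimal sketch of that mapping in plain Java — the class and method names are mine, not from the actual sketch:

```java
// Map months to angles for a radial bar chart, January at 12 o'clock.
// A hypothetical sketch of the layout maths, not the sketch's own code.
public class RadialMonths {
    // Zero-based month index (0 = January) -> angle in radians.
    // Screen coordinates place 0 radians at 3 o'clock, so subtract
    // a quarter turn to rotate January up to 12 o'clock.
    static double monthToAngle(int month) {
        return -Math.PI / 2 + month * (2 * Math.PI / 12);
    }

    // Endpoint of a bar of length r for a given month, from centre (cx, cy).
    static double[] barEnd(double cx, double cy, int month, double r) {
        double a = monthToAngle(month);
        return new double[] { cx + r * Math.cos(a), cy + r * Math.sin(a) };
    }

    public static void main(String[] args) {
        System.out.println(monthToAngle(0)); // January: straight up (-PI/2)
        System.out.println(monthToAngle(3)); // April: 3 o'clock (0.0)
    }
}
```

In the real sketch the bar length would come from the per-month article count; here only the angular placement is shown.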
Get in touch with any comments, criticisms or questions!
Hello! Just a quick update: following on from my last post, I’ve refined the code a bit, letting me run multiple searches from one sketch. Here are the same three searches from last time, compiled into one image. In this case, Tony is green, Gordon is red and Dave is blue.
Code will be forthcoming once I’ve refined it a bit more. Adiós!
Inspired by this article from the awesome Jer “Blprnt” Thorp, I’ve been experimenting with the Guardian’s Open Platform API, which gives access to ten years’ worth of articles in XML or JSON format. You have to sign up for an API key, but it’s free and easy. I thought I’d put up some of the early tests I’ve been doing with it. I’ve never worked with XML before, so it’s been something of a learning experience!
OK, now it’s time for the final instalment of the Nine Words saga that has been ongoing for a while. This time, the brief was to create three interactive pieces using Flash, triggered by words chosen from the nine. My AS3 programming is not very advanced so I’ve not been able to get as conceptual as I did with the Processing pieces, but so it goes. All three rely to varying degrees on the rather nice Hype Framework, which simplifies some aspects of AS3 to let you get going a bit more easily. Click on the pictures to play with the pieces.
I wanted to do something with proper found sound, so I got all the whirring bits of tech I could find and pointed a mic at them. As far as I recall, there are three cameras, a printer and my laptop’s DVD drive whirring away. I also wanted to try something with some dynamics to it, which was moderately successful. I quite like the driving rhythm, anyway.
Following on from my nine images and one video, the next part of the brief was to use Processing to create a response to the same nine words. I’ve included the code for each, as per the brief, although WordPress unfortunately mangles Processing’s nice auto-formatting. It also makes this post very long, but you can skip through the code sections if you’re so inclined. Without further ado…
Following on from my previous posts about the nine words we’re using as inspiration, I thought I’d show some of the ideas I’ve been playing with in the third phase of the project, which uses Processing to create images. The images here demonstrate some of the things I really like about Processing, such as the way an idea can be reworked quickly and easily to create images that are interesting both visually and conceptually.
All the images in this post are linked with the single word ‘serendipity’. Inspired by the work of people like Ben Fry and Jer Thorp, who build large-scale data visualisations, I decided to plug in some results from the UK’s National Lottery and see what I could come up with. All the results were taken from this archive.
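An obvious first step with data like this is a frequency count — how often each ball has come up across all the draws — which then gives you something to map onto size, colour or position. A minimal Java sketch of that tally; the draws below are made up for illustration, since the archive’s exact format isn’t shown here:

```java
// Tally how often each ball appears across a set of lottery draws.
// The draw data is invented for illustration; the real numbers came
// from the archive linked in the post.
import java.util.Map;
import java.util.TreeMap;

public class LotteryTally {
    static Map<Integer, Integer> frequencies(int[][] draws) {
        // TreeMap keeps the balls in ascending order for display.
        Map<Integer, Integer> counts = new TreeMap<>();
        for (int[] draw : draws)
            for (int ball : draw)
                counts.merge(ball, 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        int[][] draws = { {3, 7, 21, 34, 40, 44}, {7, 12, 21, 29, 33, 48} };
        System.out.println(frequencies(draws)); // 7 and 21 appear twice
    }
}
```

From there, each count can drive a visual parameter in the sketch — the point being that once the tally exists, swapping what it maps to is a one-line change.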
Following on from my recent Nine Images post, here’s a video equivalent. I spent a lot of time thinking up obscure ways to link the nine words (ambiguity, ephemeral, loop, serendipity, utopia, crash, condition, diaphanous, and sequential) with short videos. In the end I decided to be pretty literal with my interpretation of the words and instead to make the presentation of the videos more interesting by combining them into one visual assault.