You can see the effect in action around the 2:35 mark.
Adam contacted me a few months ago about using the program, and I made some updates to it, adding things like a functioning interface(!) and support for different image sizes(!!). I also refactored the Voronoi/Delaunay code to use ToxicLibs, which made it much more stable, and updating it for Processing 2.0 brought a big increase in speed. I’m hoping to have the opportunity to develop it further at some point: using shaders for some of the image processing should make it faster, and a proper file loader and preset system would make it much more usable.
It’s really great to see what someone with a bit more artistic vision can do with tools that you’ve made, so thanks to Adam for the opportunity and the great work.
I know I’ve been away for a while, but you’ll no doubt be glad to know I have a selection of interesting stuff lined up (which I’ve been meaning to write up for ages) as I put together my new portfolio site.
First up, everyone loves a bit of glitch art. I’m a particular fan of GlitchBot myself. These pictures came about as a result of me mucking about with masking a photo using Gaussian distributions. This is broadly the result I was going for:
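For the curious, the masking idea can be sketched in plain Java (no Processing; class and method names here are illustrative, not from my actual sketch). Alpha is full at a centre point and falls off with a Gaussian of the distance, which in the real sketch would be written into a PImage’s alpha channel before compositing:

```java
// Build a Gaussian alpha mask: fully opaque at (cx, cy), fading out
// smoothly with distance. A plain-Java stand-in for the Processing
// version; sigma controls how quickly the mask falls off.
public class GaussianMask {
    public static int alphaAt(int x, int y, double cx, double cy, double sigma) {
        double dx = x - cx, dy = y - cy;
        double g = Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
        return (int) Math.round(255 * g); // 255 = opaque, 0 = transparent
    }

    public static int[] buildMask(int w, int h, double sigma) {
        int[] mask = new int[w * h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y * w + x] = alphaAt(x, y, w / 2.0, h / 2.0, sigma);
        return mask;
    }
}
```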
Thanks to a strange edge case, in which an alpha version of Processing 2.0, the crappy Intel integrated graphics on my laptop and a failure to call background() during the draw() loop all collided, I got stuff like this: Continue reading →
Well, the end of year show has come and gone, and all that remains is the write-up. Here’s a quick run-down of the work that I showed and some of the development that went into it. I’ll also show the code I cobbled together from other people’s code to do it. If you’ve not seen it already, you might want to take a look at the first and second posts that show the earlier stages. Done? Onwards! Continue reading →
I shot some updated footage at the right resolution for my St Enoch project from two different points of view. In retrospect, shooting at 1920×1080 was probably excessive for my needs, and can cause extra problems (e.g. I don’t have a big enough monitor, resizing stuff on the fly in Processing is non-trivial, and it takes longer to process), so the results here are 1280×720. The ultimate goal is to make some large (A1-ish) prints which will probably be from PDFs anyway. Continue reading →
Went out to grab some better footage for my St Enoch Square project, but thanks to a hilarious(!) mix-up with camera resolutions I didn’t get quite what I was after. I tried some more experiments with it anyway, since it has a more static background (and is therefore easier to pick out movement against). Continue reading →
Here’s another approach to isolating movement in video: slit-scanning. The code for this was a quick adaptation of the Processing slit-scan example, with a couple of alterations and a little variation. Without further ado… Continue reading →
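The core of the slit-scan technique is simple enough to show in plain Java, stripped of the camera and drawing code (names are mine, not from the Processing example): each incoming frame contributes one pixel column, in this case its centre column, to successive columns of the output image, so movement smears across the result over time.

```java
// Minimal slit-scan logic: for each incoming frame, copy its centre
// column into the next column of the output image. In a real Processing
// sketch this happens once per draw() call with live video frames; here
// frames are just int[height][width] pixel arrays so the mechanism is
// easy to see and test.
public class SlitScan {
    public static int[][] scan(int[][][] frames, int h, int w) {
        int[][] out = new int[h][frames.length];
        int centre = w / 2;
        for (int f = 0; f < frames.length; f++)
            for (int y = 0; y < h; y++)
                out[y][f] = frames[f][y][centre]; // output column f <- frame f's centre column
        return out;
    }
}
```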
As part of the final unit on my course, we’ve been given a general brief to create a piece based on or in St Enoch Square, one of the larger public spaces in the centre of Glasgow. I have decided to focus on the movement of people through the square, and see if I can create some sort of “data-driven” piece using Processing.
Here is a video showing some of the development work I’ve been doing, using some footage from a previous project. Continue reading →
All that’s really going on here is that the RGB/HSB values of each pixel of an image are mapped to XYZ coordinates while the camera rotates round the centre point. Changing the mode from RGB to HSB creates a different shape from the same collection of pixels, while the low opacity and OpenGL blending create a nice glowing effect. It’s interesting to see the connections between shades in an image: almost always a continuous spectrum without large gaps. Continue reading →
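The mapping itself is just a per-pixel unpack; here is a plain-Java sketch of the RGB case (the real sketch also orbits the camera and draws each point with a low-opacity stroke, and the HSB variant simply feeds different channel values into the coordinates):

```java
// Map a packed ARGB pixel to a 3D point: red -> x, green -> y,
// blue -> z, each normalised to the 0..1 range. Plotting every pixel
// of an image this way produces the point cloud described above.
public class PixelCloud {
    public static double[] rgbToXYZ(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return new double[] { r / 255.0, g / 255.0, b / 255.0 };
    }
}
```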
Sunflow is an open source ray tracing renderer which can produce some astonishing results in the right hands. Someone far cleverer than me wrote a Java wrapper for it (the catchily titled SunflowAPIAPI), and another did a tutorial about getting it talking nicely to Processing, which I relied on heavily in getting this working. There is also a Processing library by the same author (the even catchier P5SunflowAPIAPI) but thus far I’ve not been able to get it to do what I want.
Amnon’s post goes into a bit of detail about getting SunflowAPIAPI to read complex geometry from Processing using ToxicLibs; this was my first time using ToxicLibs, but it was relatively straightforward. I wrote a simple class to generate some semi-random geometry using ToxicLibs’ TriangleMesh, plus a couple of lines of code that prepare it to be passed to Sunflow. In the main sketch I put all the Sunflow calls (setting up the lights, shaders, camera, etc.) in one function which can be triggered by a keypress. This means the sketch is mostly the same as it would be without Sunflow, and can use the OpenGL renderer to view the scene before raytracing; the sketch and the rendering are almost totally separated. I’m not sure if that is possible with the P5SunflowAPIAPI library, or with more complex geometry.
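To give a flavour of the “semi-random geometry” idea, here is a plain-Java stand-in that doesn’t use ToxicLibs at all (everything here is hypothetical, not my actual class, which builds a toxiclibs TriangleMesh and hands it to Sunflow): it jitters the vertices of a base triangle and stacks copies along z, so the output is structured but never the same twice unless the seed is fixed.

```java
import java.util.Random;

// A stand-in for the semi-random geometry class: jitter the vertices
// of a unit triangle and stack copies along the z axis. Each row of
// the result holds one triangle as 3 vertices x (x, y, z).
public class RandomGeometry {
    public static float[][] triangles(int count, float jitter, long seed) {
        Random rnd = new Random(seed); // fixed seed -> repeatable geometry
        float[][] tris = new float[count][9];
        float[] base = { 0, 0, 0, 1, 0, 0, 0, 1, 0 }; // unit triangle at z = 0
        for (int i = 0; i < count; i++) {
            for (int j = 0; j < 9; j++) {
                float offset = (j % 3 == 2) ? i * 0.1f : 0; // z coords stack upwards
                tris[i][j] = base[j] + offset + (rnd.nextFloat() - 0.5f) * jitter;
            }
        }
        return tris;
    }
}
```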
Hello, and a somewhat belated happy new year! I hope 2011 has been good to you so far. I’ve been pretty busy with both official college work and personal projects, and it’s the latter I want to show today. I put together a wee compilation of some of the sketches I’ve made recently as a “showreel” of sorts (with one eye on interviewing for university in the immediate future). Some of these aren’t really suitable for web deployment, and doing it as video lets me crank up the detail and quality. It also gives me the opportunity to make some metal to go behind it.