
Archive for the ‘computer art’ Category

Image Processing and Telling RGB/HSB Color Lies

Thursday, May 8th, 2014

Squashed version of the Blue Man Statue digital painting, used for testing

As a practitioner of digital art and image processing, with a background in both math and computer programming, I regularly create my own graphics programs using the Processing programming language. Pictured at the top of this post is a squashed version of a digital painting I did using Adobe Photoshop and some custom brushes I had created. Pretty straightforward stuff.

Recently I’ve been exploring the world of generative art by writing my own generative art programs. For some of these programs, rather than starting with a blank canvas, I provide an initial image from which to work. The image may be a photograph or a work of digital art. For example, in one instance I took a selfie (a self-portrait photograph), created a painted version of that photograph, and fed that into one of my generative art programs. (The resulting artwork is titled Generative Selfie.)

Unfortunately, complex generative programs can take quite a while to run on large images, so I use whatever tricks I know to speed up program execution. One common trick is to avoid Processing’s canned color routines and use bit-shifting instead. Bit-shifting allows very fast access to an image’s color information, which is encoded in RGB (red, green, blue) format, meaning that a color is defined by the three values of red, green, and blue. Bit-shifting works because the four individual values for red, green, blue, and alpha (transparency) are all stored in a single 32-bit integer field.
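In plain Java, the trick looks like the sketch below. (Processing stores its pixels[] array in this same packed ARGB layout; the ArgbShift class and its method names are mine, for illustration.)

```java
// A 32-bit ARGB pixel packs alpha in the top byte, then red, green, blue.
// Shifting and masking reads each channel with no library-call overhead.
public class ArgbShift {
    static int alpha(int argb) { return (argb >> 24) & 0xFF; }
    static int red(int argb)   { return (argb >> 16) & 0xFF; }
    static int green(int argb) { return (argb >> 8)  & 0xFF; }
    static int blue(int argb)  { return argb & 0xFF; }

    // Repack four channel values (each 0-255) into one 32-bit pixel.
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int pixel = pack(255, 200, 100, 50);
        System.out.println(red(pixel));   // prints 200
    }
}
```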

The other night I thought of a cool modification to one particular generative art program I’ve been working on. However, that change would require working in HSB (aka HSV) mode. In HSB/HSV, a color is defined by the three values of hue, saturation, and brightness (sometimes referred to as value). Working programmatically in RGB has several drawbacks compared to the HSB color model: HSB provides much more flexibility when it comes to creatively manipulating color.

There is just one problem with the HSB approach: the color information in images is stored in RGB format, so the bit-shifting method that works so nicely is not an option. There are standard routines that extract HSB information from the RGB color format, but you pay a penalty in processing time. And if you are working with an image that has tens of millions of pixels and you are performing a lot of color sampling, let’s just say that your computer is going to be tied up for a while. My back-of-the-envelope calculation leads me to believe that working with HSB would add 50 million-plus program statement executions to my code, plus an unknown number of additional statement executions in the underlying Processing and Java code.
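For reference, the standard conversion path looks like this in plain Java (Processing’s hue(), saturation(), and brightness() functions do comparable work); it is this per-pixel cost, multiplied by millions of samples, that adds up:

```java
import java.awt.Color;

public class HsbCost {
    public static void main(String[] args) {
        // Unpack the channels, then call RGBtoHSB, which fills a float[3]
        // of hue, saturation, and brightness, each in the range 0..1.
        int argb = 0xFF3366CC;
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        System.out.printf("h=%.3f s=%.3f b=%.3f%n", hsb[0], hsb[1], hsb[2]);

        // And back again: HSBtoRGB returns a packed int with alpha forced to 0xFF.
        int rgb = Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]);
        System.out.println(Integer.toHexString(rgb)); // prints ff3366cc
    }
}
```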

By nature I’m an impatient person, so for me all this additional program overhead was unacceptable. And then it dawned on me – I could LIE! You see, computers are stupid and will believe whatever you tell them. As supporting evidence I offer up the views of science fiction author Arthur C. Clarke:

…the fact is that all present computers are mechanical morons. They can not really think. They can only do things to which they are programmed.

The LIE that came to me was to write a Processing program that would take all the RGB color information from an image file and replace it with HSB information. I could then use that modified version of the image file as input to my HSB generative art program and it would run just as fast as the original RGB version because I would be able to use those very efficient bit-shifting operations. While I was at it I also wrote a utility that converted the file from HSB back to RGB. This allowed me to visually compare the original image with an image after it had undergone the RGB to HSB and back to RGB conversions.
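A minimal sketch of such a pair of utilities is below (my illustration of the idea, with hypothetical method names; note that scaling each HSB component into 8 bits makes the round trip slightly lossy for some colors):

```java
import java.awt.Color;

public class HsbLie {
    // The "lie": scale hue, saturation, and brightness to 0-255 and store
    // them where red, green, and blue normally live. Any program that knows
    // the trick can then read HSB values with plain bit shifts.
    static int rgbToHsbPixel(int argb) {
        float[] hsb = Color.RGBtoHSB((argb >> 16) & 0xFF, (argb >> 8) & 0xFF,
                                     argb & 0xFF, null);
        int h = Math.round(hsb[0] * 255);
        int s = Math.round(hsb[1] * 255);
        int b = Math.round(hsb[2] * 255);
        return (argb & 0xFF000000) | (h << 16) | (s << 8) | b;  // alpha kept
    }

    // The companion utility: reinterpret the channels as HSB, convert back.
    static int hsbPixelToRgb(int fake) {
        float h = ((fake >> 16) & 0xFF) / 255f;
        float s = ((fake >> 8) & 0xFF) / 255f;
        float b = (fake & 0xFF) / 255f;
        return (fake & 0xFF000000) | (Color.HSBtoRGB(h, s, b) & 0x00FFFFFF);
    }

    public static void main(String[] args) {
        // Pure red (h=0, s=1, b=1) becomes 0xFF00FFFF -- which an RGB-reading
        // program would happily display as cyan.
        System.out.printf("%08X%n", rgbToHsbPixel(0xFFFF0000)); // prints FF00FFFF
    }
}
```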

Of course the downside of stuffing HSB data into the RGB field is that every other program on my or anyone else’s computer is going to read that image file and expect that the color information is in RGB format. Take a look at Image 2 below. It’s a copy of the file shown above except I’ve put HSB information into the RGB field. Kind of cool.

Image 2. How the image looks to RGB-reading software when the file actually contains HSB information.

Taking this whole lying idea a step further, what if I lie to my color converting utility? What if I do the same RGB-to-HSB conversion multiple times while throwing in a few HSB-to-RGB conversions as well? What you can wind up with is one confused picture. Image 3 is an example of the kind of image you can get. In fact you could argue that Image 3 is more artistic than the original painting.

Image 3. Running multiple, random RGB-to-HSB and HSB-to-RGB conversions.

Pablo Picasso once observed that art is a lie that makes us realize truth. That may be but in this case a lie was simply the most expedient way to achieve an artistic objective. Having spent all this time coming up with a nice RGB-to-HSB color conversion utility, it’s now time to get to work on the HSB version of that generative art program.




Generative Art and the Mona Lisa Meme

Friday, March 28th, 2014

Generative Art from the Mona Lisa
left to right: Generation 3922, 5826, and 8187

I want to share with you the results of a recent experiment of mine using a creative process known as generative art. Personally I find that the most interesting aspect of generative art is in being surprised by the artwork that a system produces. Generative systems can produce artistic outcomes not anticipated by the programmer/artist and the image above is one such example. On the left is an image of the Mona Lisa as it appears after 3,922 generations of the generative art program I wrote. On the far right is the same image after 8,187 generations.

What is Generative Art?

For the purposes of my discussion here I’ll rely on an excerpt from the Wikipedia definition of generative art:

Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist… Generative Art is often used to refer to computer generated artwork that is algorithmically determined.

Source: Wikipedia definition of Generative Art

Why are you picking on the Mona Lisa?

When testing out various programs that rely on the use of a source image for input it is quite useful to have a single standard image to use. That makes it much easier to compare the workings of different programs. An analogy is that of Playboy centerfold and Swedish model Lena Söderberg. Lena was the centerfold for the November 1972 issue of Playboy magazine. Her centerfold photograph was first used as a test image for image processing experiments in the summer of 1973 at the USC Signal and Image Processing Institute (SIPI). Subsequently this photograph became a standard source image for the testing of image processing algorithms. In explaining the decision for the use of this image, David C. Munson, editor-in-chief of IEEE Transactions on Image Processing, had this to say:

"First, the image contains a nice mixture of detail, flat regions, shading, and texture that do a good job of testing various image processing algorithms. It is a good test image! Second, the Lena image is a picture of an attractive woman. It is not surprising that the (mostly male) image processing research community gravitated toward an image that they found attractive."

My test image of choice is Leonardo da Vinci’s painting Mona Lisa (La Gioconda in Italian). Because this painting is so well known and accessible, it makes it easier for people to "see" the results of a manipulation, distortion, or derivation of the original.

My Oscillating Generators

The generative art program that I wrote, which produced the illustrations at the head of this post, relies on a system of generators. You can think of each generator as simply being an independently functioning paintbrush.

In this particular run, I used 9200 generators (paintbrushes). Each generator (brush) has several characteristics: size, location, color, opacity, movement. In creating this system my idea was that each paintbrush would hover in the vicinity of its original location without straying too far. However, I did not provide a rule to enforce this behavior. Rather I left each brush free to go its own way.

To govern direction and speed I used a Perlin noise function that on the face of it was balanced. By balanced I mean that the system should have had no preferential direction. I was very much surprised at the results (shown above) from one of the several rule sets I had created.

For simplicity, each generator is unaware of the other generators in the system. For the next generation of this system, I plan on creating interacting generators. In such a system, when two generators encounter one another, they will react and/or interact. For example each could share with the other some or all of its characteristics. Each of these characteristics can be thought of as genetic material that can be shared.

So that you can better see the detailed progression of the system, I’m providing a large (1600 x 1600) image that shows the same subsection of the artwork as it progresses through generations. The leftmost section is from generation 3922, the middle section is from generation 5826, and the rightmost is from generation 8187.

Generative Art Mona Lisa Triptych

For other examples of how the image of the Mona Lisa has been used, check out the Processing community’s artwork tagged with Mona Lisa.


Swimming Eye Video

Monday, December 16th, 2013

Swimming Eye Video

I’ve just completed a video project titled Swimming Eye. This was yet another accidental project on my part as I was not planning on creating a video. Rather I was experimenting with using Processing to create an algorithmic painting program.

In experimenting with applying Perlin noise to a gridded particle field to create a large algorithmic paintbrush, I was struck by the nature of the ensuing motion. It was similar to that of a liquid surface in motion. The impression it made on me was that of a living painting: it wasn’t a static image but an image that had a life of its own.

My original idea of creating some rather unusual digital paintings using this methodology was replaced with the idea of creating a video. The image used as illustration above is representative of my original idea. It was created by stacking several individual movie frames together in Photoshop and using different layer blend modes to merge the individual images together.

Previously I wrote about using Windows Live Movie Maker to create a YouTube video (see Portrait Art Video Project). However, I found that Movie Maker was not capable of turning JPEG images into a real movie: an image must remain on display for at least one second. That is fine if you want to use images to create a video slide show, but it does not work for animation. To turn my 1400 images into a movie, I wanted each image (frame) to display for 1/30th of a second (think 30 frames per second).

I tried using Avidemux but it crashed repeatedly. In searching I came across FFmpeg – a cross-platform command-line utility. It worked. With the basic video created, my next step was to come up with a soundtrack, because I really didn’t want to create a silent movie.

Searching, I located a public domain song that met my needs (thanks to Free Sound Collective for making their music available for use). I used Audacity to create a sound clip of the necessary time length. I used Movie Maker to add the mp3 to the video created by FFMPEG.

I hope you enjoy the show.

Don’t forget – art is in the eye of the beholder.

See the Swimming Eye Video on YouTube


Mobile Processing Conference at UIC Innovation Center

Thursday, October 31st, 2013

Mobile Processing Conference, UIC Innovation Center

Tuesday night I returned from a 9-day trip to Arizona, and tomorrow morning I’ll be heading out to attend the Mobile Processing Conference being held at the University of Illinois at Chicago Innovation Center. The conference runs November 1–3 from 10:30am to 5:30pm. It’s rare for events like this to be held here in Chicago, so I’m indeed fortunate to be in a position to attend. From the web site:

The 2013 conference features artists, digital humanities scholars, and software developers in a series of presentations, panels, and workshops… The event is free and open to the public.

Note that there is a single track of programming. With one program per time period, I’ll be able to attend every program offered. Following is the scheduled programming.

Title: Seeing Sound
Description: Workshop on how to build a sound visualization system and "discuss why it’s completely useless."
Presenter: Lucas Kuzma
Comment: Having written my own sound visualization programs and as a presenter on the subject (see Live Art – Interactive Audio Visualizations), I am very anxious to hear Mr. Kuzma’s take on the subject.
Title: Do You Need A CS Degree To Build Apps?
Description: A presentation dealing with whether or not a college degree in computer science is really necessary to be a successful software designer and programmer.
Presenter: Brandon Passley
Comment: I will say this: in terms of the knowledge of specific programming languages I obtained in college, I’ve used none of it since graduating. The skills I learned in college that have served me well are those associated with how to go about writing a program and how, in general, programming languages work. Mindset and experience are really what I gained from my college computer science classes. FYI – I received my master’s in computer science and was just one class shy of also qualifying for a bachelor’s degree in computer science.
Title: Breaking Barriers With Sound
Description: A presentation by Stanford Professor and Smule Co-founder Ge Wang about computer music, mobile music, laptop orchestras, and apps.
Presenter: Ge Wang
Comment: Another must-see for me – especially since I am currently enrolled in the Coursera class Introduction to Programming for Musicians and Digital Artists, in which I am learning to use the ChucK programming language to create electronic music. In fact, one of the instructors for the class is Ge Wang, the creator of ChucK.
Title: Off-The-Grid: Create Peer-To-Peer Collaborative Networks
Description: A discussion on collaboration using peer-to-peer wireless networks (WiFi, Bluetooth, and NFC technologies) with the Ketai library for Processing
Presenter: Daniel Sauter/Jesus Duran
Comment: This is a new topic area for me.
Title: Drawing Machines
Description: A workshop on Processing coding techniques for creating customized "drawing machines".
Presenter: JD Pirtle
Comment: Another one I must attend, as I have used Processing extensively, creating quite a few of my own drawing machines.
Title: The Technology Landscape For Women And Issues Of Gender
Description: A panel about women in computing and why there is a smaller proportion of women in the field today than there was in the 80s.
Presenter: Amanda Cox, Marie Hicks, Lisa Yun Lee
Comment: I must say I’m curious as to where these ladies are coming from and what they’ll have to say on the subject. According to a Wikipedia article on women in computing, "In the United States, the number of women represented in undergraduate computer science education and the white-collar information technology workforce peaked in the mid-1980s, and has declined ever since. In 1984, 37.1% of Computer Science degrees were awarded to women; the percentage dropped to 29.9% in 1989-1990, and 26.7% in 1997-1998." Of course percentages can be deceiving. Left unanswered is the percentage of the female population so engaged in the 80s vs. today. Also from the same article: "A study of over 7000 high school students in Vancouver, Canada showed that the degree of interest in the field of computer science for teenage girls is comparably lower than that of teenage boys. The same effect is seen in higher education; for instance, only 4% of female college freshmen expressed intention to major in computer science in the US." I am curious to hear how this issue is addressed.
Title: Seeing Sound
Description: A workshop for developing sonic visualizations including various methods for converting audio into images using openFrameworks.
Presenter: Lucas Kuzma
Comment: From the description: Participants are expected to have a working copy of Xcode, as well as a working knowledge of C++. Oops – Xcode is the IDE (Integrated Development Environment) for Apple’s OS. I’ve played with openFrameworks before but found it to be more code-heavy than Processing due to OpenGL issues. Unfortunately I do not currently have an appropriate IDE installed for openFrameworks on my Windows laptop.
Title: Fast And Slow: Mobile Aesthetics And Civil Liberties
Description: Described as a discussion on how to empower a new generation of makers to participate in shaping the technological artifacts that shape us socially and culturally.
Presenter: Daniel Sauter
Comment: This could go either way – we’ll see what happens.
Title: Sketching The News
Description: A look at some data visualization projects at the New York Times.
Presenter: Amanda Cox
Comment: Another subject area in which I have interest and have done some work.
Title: Processing Shaders, The Sunday Sessions
Description: A workshop about GLSL shaders in Processing 2.0 with the main objectives being to present advanced applications of the shader API, specifically post-processing image filters and blending, procedural generation of 3D objects using fragment shaders, iterative effects with the pframe buffer, and shading of large-scale geometries.
Presenter: Andres Colubri
Comment: Major changes were made between the Processing 1.x and 2.0 versions. Most significant was the move toward OpenGL integration. This caused me some real headaches, as Processing 2 just wouldn’t work properly on my computer. However, upgrading my graphics card drivers solved the problem (though it took some doing and hurdle-jumping to accomplish). For more, see Shaders in Processing 2.0.
Title: Creative Coding on the Raspberry Pi with openFrameworks
Description: Like the title says, Creative Coding on the Raspberry Pi with openFrameworks. Raspberry Pi Hardware will be provided for use during the workshop. Participants are encouraged to bring a laptop.
Presenter: Christopher Baker
Comment: If you have never heard of it, you can find out all about the Raspberry Pi here and read the Raspberry Pi FAQ.

The Mobile Processing Conference is being held at the UIC Innovation Center located at 1240 W Harrison St in Chicago, IL. For information about the conference, visit the Mobile Processing Conference web site


OpenSCAD, 3D Objects, and 3D Printing

Tuesday, August 20th, 2013

OpenSCAD 3D Object as Art

I recently joined the Workshop 88 Google group (see Note #1 below) after attending one of their meetings. In going through some of the group discussions, I came across one regarding 3D printing and the choice of software people made to create 3D objects for printing. One of the software selections mentioned was OpenSCAD, an open source product that I have been aware of but never used.

The software is described on the OpenSCAD web site as follows:

OpenSCAD is software for creating solid 3D CAD models. It is free software and available for Linux/UNIX, Windows and Mac OS X. Unlike most free software for creating 3D models (such as Blender) it does not focus on the artistic aspects of 3D modelling but instead on the CAD aspects.

OpenSCAD is not an interactive modeller. Instead it is something like a 3D-compiler that reads in a script file that describes the object and renders the 3D model from this script file.

The power, and weakness, of OpenSCAD is its use of a programming language (script file) to build models. This is in contrast to a traditional 3D modeling program digital artists use, like Lightwave, that supports an interactive mouse-driven style of object creation.

The OpenSCAD user interface is pretty straightforward. Of course, that is because the work of creating 3D objects is done via coding. One of the most common complaints about traditional 3D programs is the complexity of the user interface – which makes sense when you consider the variety and complexity of the operations users perform interactively.

With OpenSCAD, I was able to create simple 3D objects fairly quickly by getting a handle on the scripting language’s syntax. In fact I have illustrated this article using a 3D model I created. The object is composed exclusively of cylinders on which I executed a series of translations and rotations. I should point out that for the illustration of the model I used a Photoshop adjustment layer to alter the hue of the image as rendered in OpenSCAD and used a Photoshop layer style to add a drop shadow to the image.
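For flavor, a tiny OpenSCAD script in the same translate-and-rotate spirit (the dimensions here are illustrative only, not those of the pictured object):

```openscad
// Three cylinders, each rotated about the origin and pushed outward --
// the same style of construction described above.
for (a = [0, 60, 120])
    rotate([0, 0, a])
        translate([10, 0, 0])
            cylinder(h = 30, r = 4, $fn = 48);
```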

I do own Adobe Photoshop Extended, the version of Photoshop that supports working with 3D objects. OpenSCAD saves 3D objects in the STL (Standard Tessellation Language – for more, see the Wikipedia STL entry) format. Unfortunately, STL is not a 3D format that Photoshop Extended CS4 recognizes; in fact, the selection of 3D file formats that CS4 supports is extremely limited. Surprisingly, neither the CS5 nor CS6 upgrades added support for any additional 3D file types. That means that if I want to work with 3D objects created by OpenSCAD, I will either have to use software other than Photoshop or use an intermediary program to convert the STL file into one of the very few formats Photoshop recognizes. My preference is to not use Photoshop.

Will I Use OpenSCAD?

There is a plethora of 3D programs available today. Some, like OpenSCAD, are designed for the CAD market; most aren’t. However, the explosion of 3D printing has generated new interest in CAD programs – especially within the hacker and maker community. For my part, I expect that I will continue to explore OpenSCAD and will look for opportunities to make use of it. I must confess that I find the programmatic nature of the 3D object creation process appealing.

Note #1: Workshop 88

Workshop 88 hackerspace in Glen Ellyn

Located in Glen Ellyn, IL, Workshop 88 is a hackerspace – also referred to as a makerspace. On their web site, Workshop 88 is described as being focused on science, technology, mechanics, culture, and the digital arts, and as offering a space where people with diverse backgrounds can socialize, collaborate, and learn. For more, see the Workshop 88 web site. While my principal interest in investigating the group is to potentially teach a Processing class for them, my secondary interest is in learning more about 3D printing. Given that they have a 3D printer and I don’t, this provides an excellent opportunity to learn more about that aspect of digital creativity.



On the subject of models, I’ll leave you with a quote from John von Neumann: The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work.

I’ll close by recommending that if you are interested in creating 3D models then give OpenSCAD a try – it’s free so you’ve got nothing to lose.


Paper + Salt Solution + Electricity = Art

Tuesday, August 6th, 2013

Paper + Salt Solution + Electricity = Art

I was at Musecon this last weekend – both as an attendee and as a presenter. For more, see Musecon 2013 Creatives Convention. One of the programs I attended was Stand Back, I’m Going To Try Science! taught by Todd Johnson. One of the demonstrations Todd gave was of brushing a salt water solution on matboard in a path that connected two electrodes. He then turned on the electricity and the audience watched as a fractal-like path was burned into the paper by the electric current. Todd was able to exercise some control over the process by adjusting the amount of current being delivered and by using water to cool down hot spots. I found the pattern and coloring this process created to be fascinating. The photograph used to illustrate this article (above) is of one of Todd’s creations. Todd is perhaps best known for using a particle accelerator to zap acrylic blocks with millions of volts of electricity and then freeing the trapped electrons thus creating Lichtenberg figures.

As I watched Todd creating these fascinating figures, it occurred to me to try to create an algorithmic art version of what I was seeing. The approach that immediately came to mind was diffusion limited aggregation (DLA), an algorithm for simulating the formation of structures by particles diffusing (moving) through some medium – in this case, the surface of a virtual canvas. Such algorithms can be simple or complex. For example, in a more sophisticated DLA algorithm the particles can be made to interact with one another as they move. With respect to the medium, gravity and/or currents can be introduced to further influence the particles’ behavior. The rules by which particles create structure can also be defined in various ways, with the precise nature of the resulting structure being governed by the full interplay of the system’s rules and parameters.

I used the Processing programming language to implement my idea. The basic operation of the program allowed for:

  • the creation and destruction of particles;
  • variable opacity for the structure with opacity determined by the number of times a point was "hit" by a particle;
  • the ability for the user to create particle emitters to inject new particles into the system and to control where those particles appear;
  • the ability to globally modify particle velocity (note that velocity is a vector having both speed and direction);
  • the ability for the user to create seeding points for the structure by drawing with the mouse.
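The core of the technique behind the features above can be sketched in plain Java (a bare-bones illustration of DLA in general, not my actual Processing program – it omits emitters, velocity control, and mouse seeding, and all names and sizes are illustrative):

```java
import java.util.Random;

public class DlaSketch {
    // Particles wander randomly on a small grid and stick when they touch
    // the growing structure. hits[][] counts how often each cell is struck,
    // which could drive the variable opacity described above.
    static final int SIZE = 64;
    static int[][] hits = new int[SIZE][SIZE];
    static Random rng = new Random(42);

    static boolean nearStructure(int x, int y) {
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++) {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && nx < SIZE && ny >= 0 && ny < SIZE && hits[nx][ny] > 0)
                    return true;
            }
        return false;
    }

    static void releaseParticle() {
        // Inject a particle at a random location, like a simple emitter.
        int x = rng.nextInt(SIZE), y = rng.nextInt(SIZE);
        for (int step = 0; step < 10000; step++) {
            if (nearStructure(x, y)) { hits[x][y]++; return; }
            x = Math.floorMod(x + rng.nextInt(3) - 1, SIZE);  // wrap at edges
            y = Math.floorMod(y + rng.nextInt(3) - 1, SIZE);
        }
        // Particle wandered too long without sticking: destroy it.
    }

    public static void main(String[] args) {
        hits[SIZE / 2][SIZE / 2] = 1;          // a single seed point
        for (int i = 0; i < 500; i++) releaseParticle();
        int stuck = 0;
        for (int[] row : hits) for (int v : row) if (v > 0) stuck++;
        System.out.println("cells in structure: " + stuck);
    }
}
```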

The picture below is the first image created after completing program debugging – which, surprisingly, was less of a hair-pulling experience than I expected. Note that I used an Adobe Photoshop layer style to add a border stroke, a drop shadow, and a color overlay to enhance the appearance of the structure.

Diffusion Limited Aggregation algorithmic art example

As you can see, the structure created using my DLA algorithm is much more bushy than the structure created by Todd. One key difference between our two methods is that Todd’s creation process builds structure from the inside-out while the DLA process I used builds structure from the outside-in.

To see other examples of images created using diffusion limited aggregation, I suggest doing an image search using the term "diffusion limited aggregation". You will see a high degree of sameness to images created using this technique.

And the algorithm’s future…

I’m uncertain at this point as to what further development work I will do on this program. There are algorithmic alternatives to the diffusion limited aggregation approach I used. There are also many modifications I could make to my DLA implementation that would alter its behavior.

In closing I’ll leave you with this thought. Speaking as a programmer, it is said that "ideas are cheap; code isn’t." Speaking as a digital artist, I can say "code is cheap; ideas aren’t." The truth of either statement really depends on your perspective.
