
Archive for the ‘computer art’ Category

The Pursuit of Creative Coding Failures

Sunday, October 15th, 2017

Linear Moon Lunar Representation artwork on Redbubble

I write this as a creative coder dismayed by my own lack of foresight in keeping a record of some recent coding failures. It was only a week ago that I wrote an article about glitch art – Glitch Art or Not Glitch Art. You would think that, having just written about deliberately capitalizing on failure, I would be more attentive to my own coding failures. But alas no.

I’ve used the artwork titled Linear Moon shown above to illustrate this story. I created this art using a brand new program I had just finished writing. I knew that I’d written a similar program in the past but did not have the patience to go looking for it (yes, my hard drives are just that cluttered – even with files being organized by directory). Instead I decided that starting fresh would be the best way to go.

My early versions of this new program featured some mathematical and logic mistakes relative to what I wanted to accomplish. If I had been wiser I would have kept these mistakes for later evaluation of their artistic merit. But no, I was in hot pursuit of the right program – the program that would generate a picture that matched the one in my head. It was only when my internal visualization of what I wanted to achieve matched what I saw on the screen that I ceased twiddling with my code and began experimenting with different parameter values to create Linear Moon.

Abstract From Line Segments Algorithmic Art Fail

The good and the bad about every run of the program was that the final step always wrote its results to a file, so I had a visual record of every failed image. The good was in being able to go back and look over these image fails. The bad was in seeing that a number of them had artistic value and knowing that I had failed to keep a copy of the version of the program that produced them. One example of an early failure is Abstract From Line Segments, shown above, created from a painted version of The Beatles Abbey Road album cover art.

In contrast, the correct version of that same input image is shown below and accurately reflects the look I was going for. Between the two images were a number of program variations where I experimented with my program’s math and logic. These variations produced a range of visual results.

A successful interpretation of a painting of The Beatles Abbey Road album cover art

After the challenge of successfully creating the linear/line segment effect that I wanted, adding a coloring option was fairly straightforward. The only challenges associated with adding color were those of sampling and manipulation. An example of an initial color experiment is shown below using a portrait of SpaceX CEO Elon Musk.

Elon Musk Algorithmic Portrait, color version

There is one big difference between a program that works correctly and a program that leads to erroneous results: it is quite easy to recreate a program that works correctly but exceedingly difficult to recreate a specific set of errors.

My advice to all creative coders out there is this: slow down a little bit, take a look at your failures, and ask yourself “is this an error worth keeping?”

About Linear Moon Algorithmic Art

Linear Moon is the first work of art I’ve formally created using my new program. The original is 30 by 30 inches printed at 300 ppi (pixels per inch). To provide a better idea of what the image looks like at actual size, below is an excerpt that features Tycho Crater. Note that its size on your device screen will vary due to the different pixel densities of different screens.

Tycho Crater actual size detail from Linear Moon Algorithmic Art

While I have not yet added Linear Moon to my web site, I have made it available as merchandise on Redbubble.

Linear Moon artwork on Redbubble


Creative Coding Software Tools: Processing, openFrameworks, Cinder

Thursday, April 14th, 2016

Creative Coding Software Tools: Processing, openFrameworks, Cinder

In my previous blog post, Fresh Brewed Coffee Digital Art, I mentioned that I create my digital art using software of my own design and that, for digital artists interested in pursuing this aspect of digital art creation, there are some alternative tools available. In that post I mentioned Processing, openFrameworks, and Cinder. I would like to take this opportunity to say a little more about each of these three options.


Starting with Processing: this is a framework and programming language built on top of Java, an object-oriented programming language. Like Java, Processing is free and available on a variety of platforms; personally I use it on both Windows 7 and Ubuntu Linux. Because the Processing language was created for artists and musicians with little or no programming background, beginners can quickly be up, running, and creating with this wonderfully flexible software tool. Its flexibility as an environment for creative coding is expanded by the abundance of third-party libraries that have been made available, and it is also the most flexible of the three in terms of the variety of platforms it works with. I have taken advantage of the ability to write Processing sketches for the web using the JavaScript version of Processing (Processing.js), as well as for creating Android apps and for interacting with the Arduino (see The Arduino Starter Kit – Official Kit from Arduino with 170-page Arduino Projects Book). For those new to programming and creative coding, Processing is my number one recommendation.

Processing Resources

The main Processing web sites are:

Following are three books on Processing that I recommend and own. There are a number of other books on Processing that are also quite good. Please be aware that Processing is now on version 3 and version 2 is still widely used, but do avoid any book written for version 1 of Processing.


Like Processing, openFrameworks is free and available on multiple platforms. In fact I even had the opportunity to write some openFrameworks programs on a Raspberry Pi (see CanaKit Raspberry Pi 3 Ultimate Starter Kit – 32 GB Edition) running the Raspbian operating system. The primary difference between the two is that whereas Processing sits on top of the Java programming language, openFrameworks sits on top of the C++ programming language. Personally I find openFrameworks somewhat more challenging than Processing, particularly with respect to the use of off-screen frame buffers in conjunction with OpenGL. And by challenging, I am speaking in terms of the number of lines of code I must write in order to achieve some objective.

openFrameworks Resources

The main web sites for openFrameworks are:

There are not nearly as many books about openFrameworks as there are about Processing but the two that are most worthwhile are:

If you are searching on Amazon for books about Processing and/or openFrameworks, you may come across the book Programming Interactivity: A Designer’s Guide to Processing, Arduino, and openFrameworks by Joshua Noble. My advice is: do not buy this book. It is quite out of date and the source code for the examples was never made available.


Cinder is a third creative coding platform and, like openFrameworks, relies on the C++ programming language. I have no personal experience with Cinder but I will say that when I was investigating openFrameworks vs Cinder as a creative coding toolset for the C++ environment, openFrameworks won out.

Cinder Resources

The main Cinder web sites are:

There are even fewer books about Cinder than there are about openFrameworks. Two books you will find on Amazon are:

I hope you’ve found this information useful. I also hope that, even if you are not a digital artist or musician or programmer, you check out one or more of these creative coding toolsets because you never know – you just might have a knack for creative coding.


Algorithmic Abstract Art Orientation

Tuesday, October 20th, 2015

Tunnel Vision Algorithmic Abstract Art

Over the last several days I’ve created a number of new works of algorithmic art. One of these pieces is Tunnel Vision – shown above. After creating this particular artwork I began to wonder if the orientation I had used in its creation would actually be the orientation that other people would find to be the most aesthetically appealing. To get an idea of what that answer might be I posted the image below to several art groups and asked people to identify which of the four orientations they found to be the most aesthetically pleasing.

The Four Artwork Orientation Choices

While early voting had A as the overwhelming preference, by the time voting was effectively over, D had emerged as a close runner-up. With respect to the two portrait-oriented choices, I find it easy to see why D was clearly preferred to C, as that is the choice I also find more aesthetically pleasing. With respect to the two landscape-oriented choices, option A was clearly preferred over option B. Again I agree.

Abstract Art Orientation Survey Results

Taking a step back, you can see in the survey results that there is almost a 50-50 split between people selecting a landscape orientation versus a portrait orientation. So the real challenge is choosing between options A and D, with the core question being: does this artwork work better in portrait or in landscape? Given the symmetry of this piece, I think the answer is really one of personal taste.

Creating Tunnel Vision

In creating Tunnel Vision, I was working with a program that is a descendant of a very simple spirograph program I had written for a class I taught on using Processing to create digital spirographs and harmonographs. The image below is an example of the type of output that original spirograph program created.

Original spirograph program output

Over a period of time I gradually enhanced and expanded that program along several separate aesthetic lines of evolution. Tunnel Vision is the result of one of those evolutionary lines.

And My Aesthetic Vote Is…

When I created Tunnel Vision, I did so with the orientation of the canvas corresponding to option A. And it was with that landscape orientation in mind that I modified various parameters to create a work that satisfied my personal aesthetic. Fortunately for me the survey results served as a confirmation of the creative choices I had made.

Open Edition Prints

Open edition prints of Tunnel Vision are available from the following art print sites:


Musecon Review

Monday, August 10th, 2015

Sample output from the modified Spirograph program used in my Musecon class

I spent this last weekend attending Musecon which was held at the Westin Chicago Northwest in Itasca, IL. MuseCon is a three day convention for makers, artists, musicians, and other creatives that provides a wide range of creative programming. For my part, Musecon began Friday afternoon with the class I was teaching on how to use the Processing programming language to create a digital spirograph and a digital harmonograph (for more, see Creating Digital Spirographs and Harmonographs with Processing).

The class went quite well and I was surprised by the number of students I had since my class was in the first block of programming – which was Friday at 1:30pm. I can’t complain about the scheduling of the class since I was the one who selected that time slot. Getting my programming done at the very start of the convention meant that I had a worry-free weekend to attend the other programs that interested me without having to carry around the electronic baggage needed for the class. This is the third year that I’ve had the opportunity to participate as a presenter in Musecon’s programming lineup and it was nice having completed my part within the first hours of the convention. If you want to read about what I did last year, check out Generative Art plus Instagram and Pinterest at Musecon.

I spent the rest of the weekend attending programming and chatting with folks I only see maybe once or twice a year. With respect to the programming I attended, my top three favorite programs were:

  • God’s Mechanics: The Religious Life of Techies
  • Physical Properties of Meteorites
  • Photography: Champagne lighting on a grape juice budget

This year the convention had as Guest of Honor Brother Guy Consolmagno. In addition to having his PhD in Planetary Science and having authored a number of excellent books, Brother Guy recently won the Carl Sagan Medal and is now President of the Vatican Observatory Foundation.

The program God’s Mechanics: The Religious Life of Techies was a presentation by Brother Guy about the subject of his book God’s Mechanics: How Scientists and Engineers Make Sense of Religion – which is a fascinating look at how "techies" look at and think about religion and deal with the question of God’s existence.

Musecon Guest of Honor Brother Guy Consolmagno talking about Meteorites

My second favorite program was also a presentation by Brother Guy. Physical Properties of Meteorites was an interesting look at the history of meteorites in terms of human understanding of how the solar system works. Brother Guy also discussed some of his own research and its relevance to the larger field of study. Once upon a time my interest in meteorites was keener than it is today – particularly since I served as an officer and director of the Planetary Studies Foundation, which at the time had one of the top meteorite collections in the world. The overwhelming bulk of that collection had been received as a donation from the DuPont family. It was in those years that I once had the opportunity to be on a panel about meteorites with Brother Guy at a science fiction convention – though I no longer recall which one it was.

Lastly, my third favorite program of the weekend was Photography: Champagne lighting on a grape juice budget, led by Richard France, Ken Beach, and Bruce Medic – all really excellent photographers whose work I admire. The theme of their program was taking a DIY (do it yourself) approach to coming up with alternative lighting and equipment solutions. Think in terms of retasking old items, or substituting items that can be purchased from your local hardware store.

In closing, Musecon 2015 was a totally enjoyable weekend and one I look forward to repeating in 2016.



Image Processing and Telling RGB/HSB Color Lies

Thursday, May 8th, 2014

Squashed version of Blue Man Statue digital painting for testing

As a practitioner of digital art and image processing, and with a background in both math and computer programming, I regularly create my own graphics programs using the Processing programming language. Pictured at the top of this post is a squashed version of a digital painting I did using Adobe Photoshop and some custom brushes I had created. Pretty straightforward stuff.

Recently I’ve been exploring the world of generative art creation by writing my own generative art programs. For some of these programs, rather than starting with a blank canvas I provide an initial image from which to work. The image may be a photograph or a work of digital art. For example, in one instance I took a selfie (a self-portrait photograph), created a painted version of that photograph, and fed that into one of my generative art programs. (Note: the resulting artwork is titled Generative Selfie.)

Unfortunately, with large images complex generative programs can take quite a while to run. Consequently I use whatever tricks I know to speed up program execution. One common ‘trick’ is to avoid Processing’s canned color routines and use bit-shifting instead. Bit-shifting allows for very speedy access to an image’s color information, which is encoded in RGB (red, green, blue) format. Bit-shifting works because the four individual values for red, green, blue, and alpha (transparency) are all stored in a single 32-bit integer field.
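For anyone curious what that bit-shifting looks like: each pixel is a packed 32-bit ARGB integer, and the individual channels can be pulled out with shifts and masks. Here is a minimal sketch in plain Java (no Processing library needed):

```java
public class ArgbBitShift {
    // Extract the four channels from a packed 32-bit ARGB pixel.
    static int alpha(int argb) { return (argb >> 24) & 0xFF; }
    static int red(int argb)   { return (argb >> 16) & 0xFF; }
    static int green(int argb) { return (argb >> 8) & 0xFF; }
    static int blue(int argb)  { return argb & 0xFF; }

    // Pack four channel values back into a single int.
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int pixel = pack(255, 200, 100, 50);
        System.out.println(red(pixel) + "," + green(pixel) + "," + blue(pixel)); // prints 200,100,50
    }
}
```

These operations compile down to a handful of machine instructions per pixel, which is why this approach is so much faster than calling accessor functions in a tight loop over millions of pixels.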

The other night I thought of a cool modification that I could make to one particular generative art program I’ve been working on. However that change would require that I work in HSB (aka HSV) mode. With HSB/HSV, a color is defined by the three values of hue, saturation, and brightness (sometimes referred to as value). Working programmatically in RGB has several drawbacks when compared to the competing HSB color model. HSB provides much more flexibility when it comes to creatively manipulating color.

There is just one problem with the HSB approach. The color information in images is stored in RGB format, so the bit-shifting method that works so nicely is not an option for working with HSB. There are standard routines that allow you to extract HSB information from the RGB color format, but you pay a penalty in the amount of processing time it takes to do that. And if you are working with an image that has tens of millions of pixels and you are performing a lot of color sampling, let’s just say that your computer is going to be tied up for a while. My back-of-the-envelope calculation leads me to believe that working in HSB would add 50 million-plus program statement executions in my code, plus an unknown number in the underlying Processing and Java code.
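To make the overhead concrete, here is what a single HSB sample looks like using Java's standard java.awt.Color.RGBtoHSB routine. (I am assuming Processing's hue(), saturation(), and brightness() functions perform a comparable conversion internally, but either way the floating-point arithmetic shown here is unavoidable.)

```java
import java.awt.Color;

public class HsbSampling {
    // Convert one packed RGB pixel to HSB: unpack the channels with
    // bit-shifts, then hand them to Java's conversion routine, which
    // does floating-point division and comparison work on every call.
    static float[] toHsb(int rgb, float[] reuse) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        return Color.RGBtoHSB(r, g, b, reuse);
    }

    public static void main(String[] args) {
        float[] hsb = new float[3]; // reuse one array to avoid a per-pixel allocation
        toHsb(0xFF0000, hsb);       // pure red
        System.out.println(hsb[0] + " " + hsb[1] + " " + hsb[2]); // prints 0.0 1.0 1.0
    }
}
```

Multiply that per-call cost by tens of millions of pixels, times however many samples per pixel, and the slowdown described above follows.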

By nature I’m an impatient person, so for me all this additional program overhead was unacceptable. And then it dawned on me – I could LIE! You see, computers are stupid and will believe whatever you tell them. As supporting evidence I offer up the view of science fiction author Arthur C. Clarke:

…the fact is that all present computers are mechanical morons. They can not really think. They can only do things to which they are programmed.

The LIE that came to me was to write a Processing program that would take all the RGB color information from an image file and replace it with HSB information. I could then use that modified version of the image file as input to my HSB generative art program and it would run just as fast as the original RGB version because I would be able to use those very efficient bit-shifting operations. While I was at it I also wrote a utility that converted the file from HSB back to RGB. This allowed me to visually compare the original image with an image after it had undergone the RGB to HSB and back to RGB conversions.
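Here is a minimal sketch of how such a conversion utility might work. This is my own reconstruction, not the actual code: it quantizes each HSB component to the 0-255 range of a color channel, so the round trip back to RGB is only approximate for most colors (though exact for some, such as pure red).

```java
import java.awt.Color;

public class HsbLie {
    // The LIE: quantize hue, saturation, and brightness to 0-255 and
    // store them where the R, G, and B channels normally live.
    static int rgbToPackedHsb(int argb) {
        float[] hsb = Color.RGBtoHSB(
            (argb >> 16) & 0xFF, (argb >> 8) & 0xFF, argb & 0xFF, null);
        int h = Math.round(hsb[0] * 255);
        int s = Math.round(hsb[1] * 255);
        int b = Math.round(hsb[2] * 255);
        return (argb & 0xFF000000) | (h << 16) | (s << 8) | b; // alpha untouched
    }

    // The reverse utility: read H, S, B back out of the channels and
    // rebuild a true RGB pixel for visual comparison.
    static int packedHsbToRgb(int packed) {
        float h = ((packed >> 16) & 0xFF) / 255f;
        float s = ((packed >> 8) & 0xFF) / 255f;
        float b = (packed & 0xFF) / 255f;
        return (packed & 0xFF000000) | (Color.HSBtoRGB(h, s, b) & 0x00FFFFFF);
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(rgbToPackedHsb(0xFFFF0000))); // prints ff00ffff
    }
}
```

Once the file is rewritten this way, the fast bit-shift extraction of the previous section reads H, S, and B directly, and the generative program never needs to call a conversion routine at all.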

Of course the downside of stuffing HSB data into the RGB field is that every other program on my or anyone else’s computer is going to read that image file and expect that the color information is in RGB format. Take a look at Image 2 below. It’s a copy of the file shown above except I’ve put HSB information into the RGB field. Kind of cool.

Image 2. How the image looks to RGB-reading software when the file actually contains HSB information.

Taking this whole lying idea a step further, what if I lie to my color converting utility? What if I do the same RGB-to-HSB conversion multiple times while throwing in a few HSB-to-RGB conversions as well? What you can wind up with is one confused picture. Image 3 is an example of the kind of image you can get. In fact you could argue that Image 3 is more artistic than the original painting.

Image 3. Running multiple, random RGB-to-HSB and HSB-to-RGB conversions.

Pablo Picasso once observed that art is a lie that makes us realize truth. That may be but in this case a lie was simply the most expedient way to achieve an artistic objective. Having spent all this time coming up with a nice RGB-to-HSB color conversion utility, it’s now time to get to work on the HSB version of that generative art program.


For those of you who would like to know more about RGB, HSB, and Processing, you can check out the following references.


Generative Art and the Mona Lisa Meme

Friday, March 28th, 2014

Generative Art from the Mona Lisa
left to right: Generation 3922, 5826, and 8187

I want to share with you the results of a recent experiment of mine using a creative process known as generative art. Personally I find that the most interesting aspect of generative art is in being surprised by the artwork that a system produces. Generative systems can produce artistic outcomes not anticipated by the programmer/artist and the image above is one such example. On the left is an image of the Mona Lisa as it appears after 3,922 generations of the generative art program I wrote. On the far right is the same image after 8,187 generations.

What is Generative Art?

For the purposes of my discussion here I’ll rely on an excerpt from the Wikipedia definition of generative art:

Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist… Generative Art is often used to refer to computer generated artwork that is algorithmically determined.

Source: Wikipedia definition of Generative Art

Why are you picking on the Mona Lisa?

When testing out various programs that rely on the use of a source image for input it is quite useful to have a single standard image to use. That makes it much easier to compare the workings of different programs. An analogy is that of Playboy centerfold and Swedish model Lena Söderberg. Lena was the centerfold for the November 1972 issue of Playboy magazine. Her centerfold photograph was first used as a test image for image processing experiments in the summer of 1973 at the USC Signal and Image Processing Institute (SIPI). Subsequently this photograph became a standard source image for the testing of image processing algorithms. In explaining the decision for the use of this image, David C. Munson, editor-in-chief of IEEE Transactions on Image Processing, had this to say:

"First, the image contains a nice mixture of detail, flat regions, shading, and texture that do a good job of testing various image processing algorithms. It is a good test image! Second, the Lena image is a picture of an attractive woman. It is not surprising that the (mostly male) image processing research community gravitated toward an image that they found attractive."

My test image of choice is Leonardo da Vinci’s painting Mona Lisa (La Gioconda in Italian). Because this painting is so well known and accessible, it makes it easier for people to "see" the results of any manipulation, distortion, or derivation of the original.

My Oscillating Generators

The generative art program that I wrote, which produced the illustrations at the head of this post, relies on a system of generators. You can think of each generator as simply being an independently functioning paintbrush.

In this particular run, I used 9200 generators (paintbrushes). Each generator (brush) has several characteristics: size, location, color, opacity, and movement. In creating this system my idea was that each paintbrush would hover in the vicinity of its original location without straying too far. However, I did not provide a rule to enforce this behavior. Rather, I left each brush free to go its own way.
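To make the idea concrete, here is a rough sketch of what one such generator might look like. The class and field names are my own invention, and for simplicity I have substituted a plain seeded random walk for the Perlin-noise-driven movement the actual program used:

```java
import java.util.Random;

public class Generator {
    // One autonomous "paintbrush". The characteristics mirror the list
    // above: size, location, color, opacity, movement.
    double x, y;               // current location
    final double homeX, homeY; // original location
    double size, opacity;
    int color;                 // packed ARGB
    final Random rng;

    Generator(double x, double y, double size, int color, double opacity, long seed) {
        this.x = x; this.y = y;
        this.homeX = x; this.homeY = y;
        this.size = size; this.color = color; this.opacity = opacity;
        this.rng = new Random(seed);
    }

    // One generation: take a small random step in each axis. Note that
    // nothing pulls the brush back toward home -- each brush is free to
    // drift away over many generations.
    void step(double stepSize) {
        x += (rng.nextDouble() * 2 - 1) * stepSize;
        y += (rng.nextDouble() * 2 - 1) * stepSize;
    }

    double distanceFromHome() {
        return Math.hypot(x - homeX, y - homeY);
    }

    public static void main(String[] args) {
        Generator g = new Generator(100, 100, 8, 0xFF3366CC, 0.5, 42L);
        for (int i = 0; i < 100; i++) g.step(1.0);
        System.out.println(g.distanceFromHome());
    }
}
```

Because there is no homing rule, distanceFromHome() is free to grow without bound as generations accumulate, and the aggregate drift of thousands of such brushes is where the surprises come from.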

To govern direction and speed I used a Perlin noise function that on the face of it was balanced. By balanced I mean that the system should have had no preferential direction. I was very much surprised at the results (shown above) from one of the several rule sets I had created.

For simplicity, each generator is unaware of the other generators in the system. For the next generation of this system, I plan on creating interacting generators. In such a system, when two generators encounter one another, they will react and/or interact. For example each could share with the other some or all of its characteristics. Each of these characteristics can be thought of as genetic material that can be shared.

So that you can better see the detailed progression of the system, I’m providing a large (1600 x 1600) image that shows the same subsection of the artwork as it progresses through generations. The leftmost section is from generation 3922, the middle section is from generation 5826, and the rightmost is from generation 8187.

Open image in new window – Generative Art Mona Lisa Triptych

For other examples of how the image of the Mona Lisa has been used, check out the Processing artwork tagged with Mona Lisa.
