
Archive for the ‘New Media Art’ Category

Fresh Brewed Coffee Digital Art

Wednesday, April 13th, 2016

Fresh Brewed Coffee Digital Art

Fresh Brewed Coffee is a digital painting I completed a few days ago. Just as macro photography provides us with extreme close-up views of things, Fresh Brewed Coffee is a work of macro art in that it represents a close-up view of the bubbles on the surface of a freshly brewed cup of coffee. What I particularly like about this macro perspective is that it lends the artwork an abstract appearance. You can click the image above to see an enlarged wallpaper of this art.

Now I’ve been making coffee using a coffee press (aka French press) for years, but I had never really "looked" at those bubbles floating around on the surface. Perhaps it was the lighting, but this one particular brew was the one that inspired me to create this artwork.

To create the artistic effect I wanted, I did some rewriting of one of my generative art programs. This involved modifying both the basic functionality and the variety and scope of the parameters associated with the paint brush engine. FYI, what initially inspired me to write my own painting programs was the limitations of the Adobe Photoshop paint brush engine combined with a desire to create art that was unique to me – since I do not make my programs commercially available. For those digital artists who are also software savvy, I suggest checking out Processing (Java), openFrameworks (C++), or Cinder (C++).

The version of Fresh Brewed Coffee shown here is the open edition version and is available for purchase online at the following print-on-demand (POD) sites:

Fresh Brewed Coffee artwork on Redbubble

Fresh Brewed Coffee artwork on CRATED

If you are interested in a limited edition framed canvas print, which is 29 by 19 inches when printed at 300ppi, please contact me.

Here’s to starting the day with a good cup of coffee.



Seaside Generative Art Rip-off

Friday, August 14th, 2015

Monet Seaside Rip-off generative art
Monet Seaside Rip-off on Redbubble

Dare I say that I’ve gotten tired of looking at the Mona Lisa, at least the digitized version. Just as the Playboy centerfold photograph of Lena Söderberg became a standard image used by researchers in the field of image processing, and just as the Utah Teapot became something of a standard test object for 3D graphics developers (modeling, lighting, texturing, rendering), images of the Mona Lisa have frequently been used to test generative art programs. I myself have created quite a few variations of the Mona Lisa – some of which may eventually see the light of day if I ever decide to make them public. But I thought it high time to find a new work of art against which to test the generative painting programs I so greatly enjoy creating.

For this latest program I’ve been working on, I decided to make use of a painting by Claude Monet titled Morning by the Sea. This is not the first time I’ve used Monet’s work. Some time ago I created a generative art video composed of paintings by Claude Monet (read about it at The Liquified Paintings of Claude Monet).

From my perspective, the designer of any generative art program faces the challenge of balancing artist control against the program’s freedom and flexibility. In other words, how tightly or loosely do you want to hold the reins on the program? I view the question of control versus freedom as having two components.

First there is the ability of the artist to interact with the process. An example of a large degree of artist control would be a digital artist using an advanced brush in Adobe Photoshop. The Photoshop brush engine has a number of parameters that make it possible for the artist to design a brush that varies the way digital paint is applied to the canvas depending upon brush speed, pressure, and direction. At the other extreme is what I’ll call push-button painting. Again using Photoshop as an example, a photograph can be transformed into a non-photorealistic "painting" simply by applying one or more global filters. A favorite exercise of mine is reverse engineering digital art that I see – not only figuring out what commercial software was used, but also determining what process was used.

The second aspect of freedom versus control is that of how the program itself is structured. Think determinism versus chaos. For example, let’s say you have the following set of statements in a program:

int red = 256/2;
int green = 256/3;
int blue = 256/4;
color theColor = color(red,green,blue);

No matter how many times the above statements are executed, the color being created will always be the same color. In other words the system is deterministic. Now consider the following statements:

int red = (x % 255);
int green = (y % 255);
int blue = ( (x+y) % 255);
color theColor = color(red,green,blue);

Even though this code will result in a multitude of colors, it is still deterministic in that for any pair of x,y values, the color generated will always be the same. Lastly, there is this:

int red = (int) random(256);
int green = (int) random(256);
int blue = (int) random(256);
color theColor = color(red,green,blue);

This represents a chaotic alternative where the color created could be anything. There is no control here. Any legal color is just as likely as any other legal color to be created.

These code examples are a gross oversimplification, but they serve to illustrate the challenge the developer faces. At one extreme everything is a foregone conclusion, while at the other extreme it’s anything goes. It is the designer’s challenge to figure out where to put the fulcrum of their generative art system.

The generative painting program used to produce the artwork for this post has not one but two hearts (just like a Time Lord). The first heart is a flowfield object consisting of two separate, internal subsidiary flowfields. You can think of these flowfields as the physics engines that drive the bristles of the paintbrush; they serve as forces of control in the system. The second heart is the particle system – which I define as a system of brush bristles, each bristle having its own characteristics, within limits (again, that freedom vs. control issue).
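A flowfield-plus-bristles engine of this kind can be sketched in a few lines of plain Java (Processing’s host language). Everything below is illustrative rather than my actual code: the sketch uses a single flowfield instead of two, a smooth trig function stands in for the real field values, and all names and constants are made up.

```java
// Sketch: a grid of angles (the flowfield) steering particle "bristles".
// Simplified to one field; all names and constants are illustrative.
public class FlowFieldSketch {
    static final int COLS = 8, ROWS = 8;
    static final double CELL = 10.0;            // grid cell size in pixels
    static double[][] field = new double[COLS][ROWS];

    // Deterministic stand-in for a noise function: the angle stored in each
    // cell varies smoothly with grid position.
    static void buildField() {
        for (int i = 0; i < COLS; i++)
            for (int j = 0; j < ROWS; j++)
                field[i][j] = Math.sin(i * 0.3) + Math.cos(j * 0.3);
    }

    // One bristle with its own position and speed.
    static class Particle {
        double x, y, speed;
        Particle(double x, double y, double speed) { this.x = x; this.y = y; this.speed = speed; }
        void step() {
            // Look up the angle in the cell under the bristle (clamped to the grid);
            // the field acts as the "physics engine" controlling direction.
            int i = Math.min(COLS - 1, Math.max(0, (int) (x / CELL)));
            int j = Math.min(ROWS - 1, Math.max(0, (int) (y / CELL)));
            double angle = field[i][j];
            x += speed * Math.cos(angle);
            y += speed * Math.sin(angle);
        }
    }

    public static void main(String[] args) {
        buildField();
        Particle p = new Particle(40, 40, 2.0);
        for (int t = 0; t < 10; t++) p.step();  // the bristle follows the field
        System.out.printf("final position: %.2f, %.2f%n", p.x, p.y);
    }
}
```

The field supplies the control; the bristles supply the variety.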

For testing the program, I began by creating my own version of Monet’s painting Morning By The Sea using Photoshop. The process was fairly straightforward: the lines of the painting are clear and the visual elements relatively simple. By digitally creating my own version of Monet’s seaside landscape, I am now one step removed from the original. I can also go back and modify the art to see how those modifications affect the generative process.

The next step was to use my version of the painting as the color source for my generative painting program. I must confess that the first several "paintings" I created with the program weren’t satisfactory, but with each painting I would go back and modify the system.

left: Original Monet Morning By The Sea, right: generative version created from my modified version of Morning By The Sea

The version shown above is the first painting produced that I am sufficiently happy with to share. On the left is the actual painting; on the right is the version created using my generative painting program, which used as color input my own recreation of Monet’s Morning By The Sea – so you could call this a painting of a painting of a painting. I was sufficiently pleased with the results that I decided to make it available on Redbubble.

I plan to continue working on my program, as I’m still not really happy with the brushwork my brush bristles are producing. A thought came to me last night in bed: broaden the variety of bristles I’m currently using, with a focus on the beginning and ending of each individual brush stroke. We’ll see what happens.

About the Source Painting


The French painter Oscar-Claude Monet (1840-1926) was one of the founders of Impressionism and created a very large body of work over the course of his life. Monet completed Morning By The Sea in 1881. The image that I used as my reference source is from WikiArt.org and can be found on the WikiArt page for Claude Monet’s painting Morning By The Sea.



Video: The Liquified Paintings of Claude Monet

Monday, July 7th, 2014

The Liquified Paintings of Claude Monet Video

Since setting up an account on YouTube towards the end of last year, I confess to not having been active on that platform. I created the account for the purpose of publishing several video portfolios to promote my art. The plan was to create a video for each area of artistic creation I am working in. I created exactly one portfolio video and that was for my portrait art. That was my first attempt at making a video and you can see it here: Portrait Art Video. If you’re interested in the story of how I went about making that video, read Portrait Art Video Project.

The experience of creating that video got me interested in creating some original animations of my own. Since that time I’ve only posted two videos exploring animation. One I dubbed the Swimming Eye Art Video. The other was a crude quickie experiment in animating an image – Sailing A Stormy Sea Video.

For this new video I wanted to create something that would feature the art of the great impressionist painter Claude Monet. I have recently been experimenting with vector fields and their utility as an algorithmic means of creating flowing brush strokes. It occurred to me that I could use this technique to create a series of liquified paintings that would evolve. And that’s how The Liquified Paintings of Claude Monet video was born. And here it is.

The video captures the evolution of six separate paintings and the transition from one to the next. For me, the transitions between paintings are visually the most interesting. One thing you may have noticed is the very slow evolution of the first painting. It is no coincidence that this first painting is the darkest of the six. You see, I tied the speed of evolution to the overall brightness of the image.
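Tying evolution speed to brightness can be sketched roughly as follows, in plain Java. The luma weights are the standard Rec. 601 ones; the speed range and method names are made-up placeholders, not my actual parameters.

```java
// Sketch: map an image's average brightness to an evolution speed,
// so darker source paintings evolve more slowly.
public class BrightnessSpeed {
    // Average luminance (0..255) over an array of packed 0xRRGGBB pixels.
    static double averageBrightness(int[] pixels) {
        double sum = 0;
        for (int p : pixels) {
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            sum += 0.299 * r + 0.587 * g + 0.114 * b;  // Rec. 601 luma weights
        }
        return sum / pixels.length;
    }

    // Linear map from average brightness to a speed factor (range is illustrative).
    static double evolutionSpeed(double brightness) {
        double minSpeed = 0.2, maxSpeed = 2.0;
        return minSpeed + (maxSpeed - minSpeed) * (brightness / 255.0);
    }

    public static void main(String[] args) {
        int dark = 0x202020, bright = 0xE0E0E0;
        double slow = evolutionSpeed(averageBrightness(new int[]{dark}));
        double fast = evolutionSpeed(averageBrightness(new int[]{bright}));
        System.out.println(slow < fast);  // darker image evolves more slowly
    }
}
```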

Given that these paintings have been "liquified", I deliberately chose artworks by Monet that featured water, be it a pond, a stream, a river, or the ocean. Following are image stills from the video and the name of the Monet painting that was used as the color source at that point during the video.

Claude Monet – Impression, Soleil Levant

Claude Monet – The Argenteuil Bridge

Claude Monet – Morning By The Sea

Claude Monet – Autumn On The Seine At Argenteuil

Claude Monet – Poplars At The Epte

Claude Monet – Water Lilies

The most time-consuming aspect of this project was writing the program that produced the video stills. In all, I used 3,272 image stills (not counting the title and trailer images) to create this video.

Graphics Software Used

I created this video using several different software packages. The liquified/animated images used to construct the video were created with a program I wrote using the Processing creative coding platform, a framework built on Java. As a programming language, Processing is easily the best choice for non-programmers interested in creative coding projects. To stitch the individual images together into a video, I used the command-line utility FFmpeg. To create my title and trailer images I used Adobe Photoshop CS4. Note that my workflow would have consisted entirely of "free" software if I had used GIMP (GNU Image Manipulation Program) to create those two images. For the soundtrack, I used Audacity to edit the mp3 sound file. The soundtrack music is Laideronnette Imperatrice Des Pagodes by Maurice Ravel. Finally, to assemble everything I used Microsoft’s Windows Live Movie Maker, which came bundled with Windows 7.

In Conclusion

If you would like to know more about Claude Monet, you may want to read this biography of Claude Monet. If you are of a technical bent, there is this Wikipedia entry for vector fields which served as the painting foundation upon which my Processing program was built. And while I don’t often add new videos, you may want to follow me on YouTube.

I’ll close with a couple of noteworthy quotes.

When you go out to paint, try to forget what objects you have before you – a tree, house, a field….Merely think, here is a little square of blue, here an oblong of pink, here a streak of yellow, and paint it just as it looks to you, the exact color and shape, until it gives your own naive impression of the scene before you. – Claude Monet

A preliminary drawing for a wallpaper pattern is more highly finished than this seascape. – French art critic Louis Leroy in 1874 commenting on Monet’s Impression, Sunrise



Generative Art and the Mona Lisa Meme

Friday, March 28th, 2014

Generative Art from the Mona Lisa
left to right: Generation 3922, 5826, and 8187

I want to share with you the results of a recent experiment of mine using a creative process known as generative art. Personally I find that the most interesting aspect of generative art is in being surprised by the artwork that a system produces. Generative systems can produce artistic outcomes not anticipated by the programmer/artist and the image above is one such example. On the left is an image of the Mona Lisa as it appears after 3,922 generations of the generative art program I wrote. On the far right is the same image after 8,187 generations.

What is Generative Art?

For the purposes of my discussion here I’ll rely on an excerpt from the Wikipedia definition of generative art:

Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist… Generative Art is often used to refer to computer generated artwork that is algorithmically determined.

Source: Wikipedia definition of Generative Art

Why are you picking on the Mona Lisa?

When testing various programs that rely on a source image for input, it is quite useful to have a single standard image. That makes it much easier to compare the workings of different programs. An analogy is that of the Playboy centerfold and Swedish model Lena Söderberg. Lena was the centerfold for the November 1972 issue of Playboy magazine. Her centerfold photograph was first used as a test image for image processing experiments in the summer of 1973 at the USC Signal and Image Processing Institute (SIPI). Subsequently this photograph became a standard source image for testing image processing algorithms. In explaining the decision to use this image, David C. Munson, editor-in-chief of IEEE Transactions on Image Processing, had this to say:

"First, the image contains a nice mixture of detail, flat regions, shading, and texture that do a good job of testing various image processing algorithms. It is a good test image! Second, the Lena image is a picture of an attractive woman. It is not surprising that the (mostly male) image processing research community gravitated toward an image that they found attractive."

My test image of choice is Leonardo da Vinci’s painting Mona Lisa (La Gioconda in Italian). Because this painting is so well known and accessible, it makes it easier for people to "see" the results of a manipulation, distortion, or derivation of the original.

My Oscillating Generators

The generative art program that I wrote, which produced the illustrations at the head of this post, relies on a system of generators. You can think of each generator as simply being an independently functioning paintbrush.

In this particular run, I used 9,200 generators (paintbrushes). Each generator (brush) has several characteristics: size, location, color, opacity, and movement. In creating this system my idea was that each paintbrush would hover in the vicinity of its original location without straying too far. However, I did not provide a rule to enforce this behavior. Rather, I left each brush free to go its own way.

To govern direction and speed I used a Perlin noise function that on the face of it was balanced. By balanced I mean that the system should have had no preferential direction. I was very much surprised at the results (shown above) from one of the several rule sets I had created.

For simplicity, each generator is unaware of the other generators in the system. For the next generation of this system, I plan on creating interacting generators. In such a system, when two generators encounter one another, they will react and/or interact. For example each could share with the other some or all of its characteristics. Each of these characteristics can be thought of as genetic material that can be shared.
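The generator idea – independent brushes with their own traits, plus the planned trait-sharing on encounter – can be sketched in plain Java. This is just an illustration, not my actual program: the field names, the Gaussian wander, and the simple trait-averaging rule are all hypothetical stand-ins.

```java
import java.util.Random;

// Sketch of an independent "generator" (paintbrush) plus one possible
// genetic-exchange rule for a future interacting version.
public class Generator {
    double x, y;                  // current location
    final double homeX, homeY;    // original location (remembered, but as in the
                                  // post, nothing pulls the brush back to it)
    double size, opacity;         // shareable "genetic material"
    int rgb;
    static final Random RNG = new Random(42);  // seeded for repeatability

    Generator(double x, double y, double size, double opacity, int rgb) {
        this.x = x; this.y = y; this.homeX = x; this.homeY = y;
        this.size = size; this.opacity = opacity; this.rgb = rgb;
    }

    // Each brush wanders freely; no rule confines it near home.
    void step() {
        x += RNG.nextGaussian();
        y += RNG.nextGaussian();
    }

    // One conceivable interaction when two generators meet: average their traits.
    void interact(Generator other) {
        double s = (size + other.size) / 2;
        double o = (opacity + other.opacity) / 2;
        size = s; other.size = s;
        opacity = o; other.opacity = o;
    }

    public static void main(String[] args) {
        Generator a = new Generator(0, 0, 4.0, 0.5, 0xFF0000);
        Generator b = new Generator(1, 1, 8.0, 0.9, 0x0000FF);
        a.interact(b);
        System.out.println(a.size + " " + b.size);  // prints 6.0 6.0
    }
}
```

Averaging is only one choice; traits could just as easily be swapped, copied one way, or mutated during the exchange.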

So that you can better see the detailed progression of the system, I’m providing a large (1600 x 1600) image that shows the same subsection of the artwork as it progresses through generations. The leftmost section is from generation 3922, the middle section is from generation 5826, and the rightmost is from generation 8187.

Open image in new window – Generative Art Mona Lisa Triptych

For other examples of how the image of the Mona Lisa has been used, check out the Processing artwork tagged with Mona Lisa at OpenProcessing.org.



Swimming Eye Video

Monday, December 16th, 2013

Swimming Eye Video

I’ve just completed a video project titled Swimming Eye. This was yet another accidental project on my part as I was not planning on creating a video. Rather I was experimenting with using Processing to create an algorithmic painting program.

In experimenting with applying Perlin noise to a gridded particle field to create a large algorithmic paintbrush, I was struck by the nature of the ensuing motion. It was similar to that of a liquid surface in motion. The impression it made on me was that of a living painting: it wasn’t a static image but an image that had a life of its own.

My original idea of creating some rather unusual digital paintings using this methodology was replaced with the idea of creating a video. The image used as illustration above is representative of my original idea. It was created by stacking several individual movie frames together in Photoshop and using different layer blend modes to merge the individual images together.

Previously I wrote about using Windows Live Movie Maker to create a YouTube video (see Portrait Art Video Project). However, I found that Movie Maker was not capable of turning JPEG images into a true movie. With Movie Maker, an image must remain on display for at least one second. This is fine if you want to use images to create a video slide show, but it does not work for animation. To translate my 1400 images into a movie, I wanted each image (frame) to display for 1/30th of a second (think 30 frames per second).

I tried using Avidemux, but it crashed repeatedly. In searching I came across FFmpeg, a command-line utility. It worked. With the basic video created, my next step was to come up with a soundtrack, because I really didn’t want to create a silent movie.
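I didn’t record the exact invocation, but a typical FFmpeg command line for turning a numbered image sequence into a 30-frames-per-second video looks something like this; the frame filenames, codec, and output name here are assumptions, not my actual settings.

```shell
# Assumes frames are named frame0001.jpg, frame0002.jpg, ... in the current folder.
# -framerate 30 shows each image for 1/30th of a second, as described above.
ffmpeg -framerate 30 -i frame%04d.jpg -c:v libx264 -pix_fmt yuv420p out.mp4
```

At 30 frames per second, 1400 frames yields a bit under 47 seconds of video.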

Searching opsound.org, I located a public domain song that met my needs (thanks to Free Sound Collective for making their music available for use). I used Audacity to create a sound clip of the necessary length, and then used Movie Maker to add the mp3 to the video created by FFmpeg.

I hope you enjoy the show.

Don’t forget – art is in the eye of the beholder.

See the Swimming Eye Video on YouTube



Mobile Processing Conference at UIC Innovation Center

Thursday, October 31st, 2013

Mobile Processing Conference, UIC Innovation Center

Tuesday night I returned from a 9-day trip to Arizona, and tomorrow morning I’ll be heading out to attend the Mobile Processing Conference being held at the University of Illinois at Chicago Innovation Center. The Mobile Processing Conference runs November 1–3 from 10:30am to 5:30pm. It’s rare for events like this to be held here in Chicago, so I’m indeed fortunate to be in a position to attend. From the web site:

The 2013 conference features artists, digital humanities scholars, and software developers in a series of presentations, panels, and workshops… The event is free and open to the public.

Note that there is a single track of programming. With one session per time period, I’ll be able to attend every one offered. Following is the schedule.

Title: Seeing Sound
Description: Workshop on how to build a sound visualization system and "discuss why it’s completely useless."
Presenter: Lucas Kuzma
Comment: Having written my own sound visualization programs and having presented on the subject (see Live Art – Interactive Audio Visualizations), I am very anxious to hear Mr. Kuzma’s take on the subject.
Title: Do You Need A CS Degree To Build Apps?
Description: A presentation dealing with whether or not a college degree in computer science is really necessary to be a successful software designer and programmer.
Presenter: Brandon Passley
Comment: I will say this: in terms of the knowledge of specific programming languages I obtained in college, I’ve used none of it since graduating. The skills I learned in college that have served me well are those associated with how to go about writing a program and how, in general, programming languages work. Mindset and experience are really what I gained from my college computer science classes. FYI – I received my master’s in computer science and was just one class shy of also qualifying for a bachelor’s degree in computer science.
Title: Breaking Barriers With Sound
Description: A presentation by Stanford Professor and Smule Co-founder Ge Wang about computer music, mobile music, laptop orchestras, and apps.
Presenter: Ge Wang
Comment: Another must-see for me – especially since I am currently enrolled in the Coursera class Introduction to Programming for Musicians and Digital Artists, in which I am learning to use the ChucK programming language to create electronic music. In fact, one of the instructors for the class is Ge Wang, the creator of ChucK.
Title: Off-The-Grid: Create Peer-To-Peer Collaborative Networks
Description: A discussion on collaboration using peer-to-peer wireless networks (WiFi, Bluetooth, and NFC technologies) with the Ketai library for Processing
Presenter: Daniel Sauter/Jesus Duran
Comment: This is a new topic area for me.
Title: Drawing Machines
Description: A workshop on Processing coding techniques for creating customized "drawing machines".
Presenter: JD Pirtle
Comment: Another one I must attend, as I have used Processing extensively, creating quite a few of my own drawing machines.
Title: The Technology Landscape For Women And Issues Of Gender
Description: A panel about women in computing and why there are a smaller proportion of women in the field today than there were in the 80s.
Presenter: Amanda Cox, Marie Hicks, Lisa Yun Lee
Comment: I must say I’m curious as to where these ladies are coming from and what they’ll have to say on the subject. According to a Wikipedia article on women in computing, in the United States the number of women represented in undergraduate computer science education and the white-collar information technology workforce peaked in the mid-1980s and has declined ever since. In 1984, 37.1% of Computer Science degrees were awarded to women; the percentage dropped to 29.9% in 1989–1990, and 26.7% in 1997–1998. Of course, percentages can be deceiving. Left unanswered is the percentage of the female population so engaged in the 80s vs. today. Also from the same article: a study of over 7000 high school students in Vancouver, Canada showed that the degree of interest in the field of computer science for teenage girls is comparably lower than that of teenage boys. The same effect is seen in higher education; for instance, only 4% of female college freshmen expressed an intention to major in computer science in the US. I am curious to hear how this issue is addressed.
Title: Seeing Sound
Description: A workshop for developing sonic visualizations including various methods for converting audio into images using openFrameworks.
Presenter: Lucas Kuzma
Comment: From the description: "Participants are expected to have a working copy of Xcode, as well as a working knowledge of C++." Oops – Xcode is the IDE (Integrated Development Environment) for Apple’s OS X. I’ve played with openFrameworks before but found it to be more code-heavy than Processing due to OpenGL issues. Unfortunately, I do not currently have an appropriate IDE installed for openFrameworks on my Windows laptop.
Title: Fast And Slow: Mobile Aesthetics And Civil Liberties
Description: Described as a discussion on how to empower a new generation of makers to participate in shaping the technological artifacts that shape us socially and culturally.
Presenter: Daniel Sauter
Comment: This could go either way – we’ll see what happens.
Title: Sketching The News
Description: A look at some data visualization projects at the New York Times.
Presenter: Amanda Cox
Comment: Another subject area in which I have an interest and have done some work.
Title: Processing Shaders, The Sunday Sessions
Description: A workshop about GLSL shaders in Processing 2.0 with the main objectives being to present advanced applications of the shader API, specifically post-processing image filters and blending, procedural generation of 3D objects using fragment shaders, iterative effects with the pframe buffer, and shading of large-scale geometries.
Presenter: Andres Colubri
Comment: Major changes were made between Processing 1.x and 2.x. Most significant was the move toward OpenGL integration. This caused me some real headaches, as Processing 2 just wouldn’t work properly on my computer. However, upgrading my graphics card drivers solved the problem (though it took some doing and hurdle jumping). For more, see Shaders in Processing 2.0.
Title: Creative Coding on the Raspberry Pi with openFrameworks
Description: Like the title says, Creative Coding on the Raspberry Pi with openFrameworks. Raspberry Pi Hardware will be provided for use during the workshop. Participants are encouraged to bring a laptop.
Presenter: Christopher Baker
Comment: If you have never heard of it, you can find out all about the Raspberry Pi here and read the Raspberry Pi FAQ.

The Mobile Processing Conference is being held at the UIC Innovation Center, located at 1240 W Harrison St in Chicago, IL. For information about the conference, visit the Mobile Processing Conference web site.
