
Archive for the ‘Algorithmic Art’ Category

Blurb vs Lulu to Publish an Art Book

Tuesday, June 2nd, 2015

Illustrations for my algorithmic art book

Earlier this year I made the decision to publish a book of algorithmic art. Algorithmic art was my introduction to the world of computer art, which is also known as digital art or new media art, depending on who you ask.

My first challenge was to decide how the book would be organized. My second was to identify what and how much algorithmic art to include. My third was to write the supporting text for the book.

The answer to the question of how to organize the book came quite quickly: a general introduction followed by sections, each featuring a particular style of my algorithmic art.

Choosing the art for the book was also fairly straightforward. Since I rarely take the time to add newly created art to my web site, I chose 76 algorithmic artworks that I had never published there. In all honesty, for me the satisfaction is in the process of creating art, not in marketing it.

The difficult part of selecting art was deciding how many pages of illustrations I wanted the book to contain. Why? Because the page count directly impacts the book's manufacturing cost and, consequently, its list price.

To answer the page-count question, it is necessary to take a step back. While I was assembling the book, I was also looking into how I would go about getting it published. The three options available to me were traditional publishing, traditional self-publishing, and the newer print-on-demand (POD) electronic publishing route.

For purposes of this discussion, let’s just say that I’ve chosen the electronic POD publishing route. My three finalists in this category were Amazon’s CreateSpace, Blurb, and Lulu. I quickly ruled out CreateSpace as it is not well suited for the publishing of art and/or photography books. That left me with Blurb and Lulu.

Ignoring all other considerations for the moment, I’m going to look only at book production costs. It is the production costs that are going to have the main impact on my book’s affordability. Given that I’ve narrowed my choice of publishers down to Blurb or Lulu, it’s time to look at the base cost of a photo book on each platform.

Impacting cost are size, cover, and paper options. With respect to size and format, my options are:

BLURB Photo Book Sizes

  Format                   Size in Inches
  Small Square             7 x 7
  Standard Portrait        8 x 10
  Standard Landscape       10 x 8
  Large Format Landscape   13 x 11
  Large Square             12 x 12

LULU Photo Book Sizes

  Format                   Size in Inches
  Square                   8.5 x 8.5
  Landscape                9 x 7
  Large Landscape          12.75 x 10.75
  Portrait                 8.5 x 11

Question: was it a conscious or unconscious decision that led Blurb and Lulu to ensure that neither offers photo books of the same physical dimensions?

This brings me to the question of cost. I'm going to accept the default options for each publisher without really knowing whether the quality of the two publishers' defaults is truly comparable.

From Blurb, I like the large landscape hardback format with a page size of 13 x 11 and 100# paper quality. But look at the book’s base cost by page count!

  • the cost for a 20-page photo book is $69.86
  • the cost for a 60-page photo book is $93.86
  • the cost for an 80-page photo book is $105.86

And since I would like to be able to make some money on the sale of each book, my markup will be an add-on to the base cost. Ouch.

These high manufacturing costs present artists and photographers with a real dilemma. We would like to pack more art and photography into our books in order to provide the buyer with depth and diversity. However, pushing against that is the cost of publication. I would really like to be able to offer a book that is at least 80 pages long but with the cost of such a book being over $100, how many people could afford to buy it?

Taking a look at Lulu, their equivalent book is the 12.75 x 10.75 casewrap hardcover also with 100# paper quality. Pricing for this book would be:

  • the cost for a 20-page photo book is $44.39
  • the cost for a 60-page photo book is $68.39
  • the cost for an 80-page photo book is $80.39

While cheaper than Blurb, it is still not what I deem affordable. If I were willing to downgrade on quality, I could go with Lulu's 9 x 7 landscape paperback, which also uses a lower quality paper (80#). If I do that, the costs become:

  • the cost for a 20-page photo book is $12.59
  • the cost for a 60-page photo book is $28.59
  • the cost for an 80-page photo book is $36.59

This is affordable, but I've also gone with a lower quality book. If I want quality, the only way to get the total cost down is to cut the number of pages in the book. Of course there's a hidden cost there. Using the Blurb large landscape hardcover as an example, the 20-page version has a total cost of $69.86 but a per-page cost of $3.49, whereas the 80-page version, whose total cost is $105.86, costs only $1.32 per page. So while cutting pages lowers the book's total cost, it also increases the cost per page, which makes perfect sense if you stop to think about it. And just for perspective, if I show up at the local copy shop and want to make a two-sided color photocopy on generic copier paper, it will cost me over a dollar per page.
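
As a quick sanity check, the three Blurb quotes above are consistent with a simple linear pricing model: a fixed cost per copy plus a flat rate per added page. The following back-of-envelope calculation, written as a static Processing sketch, shows the breakdown. The fixed/per-page split is my own inference from the quoted prices, not something Blurb publishes.

    // Back-of-envelope check of the Blurb large landscape prices quoted above.
    // Assumes cost is linear in page count, which the three quotes suggest.
    float cost20 = 69.86;
    float cost80 = 105.86;
    float perPage = (cost80 - cost20) / (80 - 20);   // incremental cost per page
    float fixed = cost20 - 20 * perPage;             // implied fixed cost per copy
    println("per added page: $" + nf(perPage, 0, 2));              // $0.60
    println("fixed cost per copy: $" + nf(fixed, 0, 2));           // $57.86
    println("cost per page, 20 pages: $" + nf(cost20 / 20, 0, 2)); // $3.49
    println("cost per page, 80 pages: $" + nf(cost80 / 80, 0, 2)); // $1.32

Running the same numbers against the Lulu hardcover quotes gives the same $0.60 per added page over a lower fixed cost of about $32.39, while the paperback works out to roughly $0.40 per page over a $4.59 base.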

These costs demonstrate the basic problem of publishing with electronic POD services: they just don't have the economies of scale that you get with traditional publishing and printing. If I went the traditional self-publishing route, I could purchase an inventory of books of equivalent or better quality than my Blurb preference for perhaps one-tenth the cost per book. However, that would require buying an entire run of books, which means a very large up-front expense on my part. I would also have to take on the added responsibility of distribution, a task I have no expertise in.

So what to do? I'll provide updates here as I make progress towards reaching a final decision.



What’s New? Latoocarfians and Talks

Saturday, April 11th, 2015

A Latoocarfian chaotic function

After a long hiatus, I've finally published new content to my Artsnova Digital Art Gallery web site. Frankly, I've been busy with other projects and updating my Artsnova web site got pushed down the queue. (Note that this blog is actually separate from the web site.) In addition to agreeing to manage the Enterprise in Space Orbiter Design Contest last fall, I undertook an even larger project: designing and launching my own photography web site at Jim Plaxco Photography. Still in my queue of to-do items is converting this blog to a mobile-device-friendly design, a task that is moving up the queue as a consequence of Google's soon-to-be-updated search ranking algorithm, which will push non-mobile-friendly sites further down the search engine results page.

I’ve also been busy working on an algorithmic art book project. To date I have created roughly 40 illustrations for the book. My target for the total number of illustrations is in the 60 to 80 range, with the final count depending on how the extra illustrations will impact the book’s final cost and price. This will be, I hope, the first in a series of books on different forms of computer art.

I also added two new art presentations: The Beauty of Algorithmic Art and Designing Algorithmic Art: From Concept to Realization. Both of these talks draw heavily on the work I am doing on my algorithmic art book. I will be giving one, possibly both, of these talks at a regional MENSA convention this fall.

I’ve also added a new computer art tutorial – Latoocarfian Chaotic Function Tutorial. This tutorial explains some of the function’s math while providing the source code for a Processing programming language implementation. This is a beginner’s level tutorial and will hopefully encourage someone to more seriously consider this avenue of artistic creation.
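
For readers who don't want to click through to the tutorial, the iteration at the heart of it is compact. The following is a minimal Processing sketch of one common formulation of the Latoocarfian function; the parameter values are example values only, not necessarily those used for the artwork shown here.

    // A minimal Processing sketch of the Latoocarfian iteration:
    //   x' = sin(y*b) + c*sin(x*b)
    //   y' = sin(x*a) + d*sin(y*a)
    // The parameter values below are example values only.
    float a = -0.966918, b = 2.879879, c = 0.765145, d = 0.744728;
    float x = 0.1, y = 0.1;

    void setup() {
      size(600, 600);
      background(0);
      stroke(255, 20);  // low-opacity points so density builds up over time
    }

    void draw() {
      for (int i = 0; i < 5000; i++) {
        float xNew = sin(y * b) + c * sin(x * b);
        float yNew = sin(x * a) + d * sin(y * a);
        x = xNew;
        y = yNew;
        // the attractor lives roughly within [-2.5, 2.5] in both dimensions
        point(map(x, -2.5, 2.5, 0, width), map(y, -2.5, 2.5, 0, height));
      }
    }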

In exploring these Latoocarfian chaotic functions, I decided to create an expanded variation to generate additional art for my book project. One example is A Day In The Life Of A Latoocarfian, so titled because the program that created this particular artwork ran for one full day on a separate, dedicated computer.

This week I also worked on an interactive generative painting program, which is now essentially complete and which I will write about in my next blog post.

In closing, I am still collecting input for my digital art and photography newsletter. If you would like to provide input, please take part in the short survey available at Artsnova Digital Art and Photography Newsletter Survey.



Video: The Liquified Paintings of Claude Monet

Monday, July 7th, 2014

The Liquified Paintings of Claude Monet Video

Since setting up an account on YouTube towards the end of last year, I confess to not having been active on that platform. I created the account for the purpose of publishing several video portfolios to promote my art. The plan was to create a video for each area of artistic creation I am working in. I created exactly one portfolio video and that was for my portrait art. That was my first attempt at making a video and you can see it here: Portrait Art Video. If you’re interested in the story of how I went about making that video, read Portrait Art Video Project.

The experience of creating that video got me interested in creating some original animations of my own. Since that time I’ve only posted two videos exploring animation. One I dubbed the Swimming Eye Art Video. The other was a crude quickie experiment in animating an image – Sailing A Stormy Sea Video.

For this new video I wanted to create something that would feature the art of the great impressionist painter Claude Monet. I have recently been experimenting with vector fields and their utility as an algorithmic means of creating flowing brush strokes. It occurred to me that I could use this technique to create a series of liquified paintings that would evolve. And that’s how The Liquified Paintings of Claude Monet video was born. And here it is.

The video captures the evolution of six separate paintings and the transition from one to the next. For me, it is the transition between paintings that is visually the most interesting. One thing you may have noticed is the very slow evolution of the first painting. It is no coincidence that this first painting is the darkest of the six paintings. You see, I tied the speed of evolution to the overall brightness of the image.
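
To make the technique a little more concrete, the following is a stripped-down sketch of the general idea, not the program that produced the video: a Perlin-noise vector field moves a set of short strokes across the canvas, each stroke pulling its color from the source painting. The per-stroke speed/brightness tie is a crude stand-in for the image-wide behavior just described, and the image filename is hypothetical.

    // A stripped-down Processing sketch of the "liquified painting" idea:
    // a Perlin-noise vector field moves short strokes across the canvas,
    // each stroke colored by the pixel of the source painting beneath it.
    PImage src;
    int nStrokes = 4000;
    PVector[] p = new PVector[nStrokes];

    void setup() {
      size(800, 600);
      src = loadImage("monet.jpg");      // source painting (hypothetical file)
      src.resize(width, height);
      background(0);
      for (int i = 0; i < nStrokes; i++) {
        p[i] = new PVector(random(width), random(height));
      }
    }

    void draw() {
      for (int i = 0; i < nStrokes; i++) {
        // noise() defines a smooth vector field: position -> flow direction
        float angle = noise(p[i].x * 0.005, p[i].y * 0.005) * TWO_PI * 2;
        color c = src.get(int(p[i].x), int(p[i].y));
        // crude stand-in for tying evolution speed to brightness
        float speed = map(brightness(c), 0, 255, 0.2, 2.0);
        stroke(c, 60);
        float nx = p[i].x + cos(angle) * speed;
        float ny = p[i].y + sin(angle) * speed;
        line(p[i].x, p[i].y, nx, ny);
        p[i].set(nx, ny);
        // respawn strokes that wander off the canvas
        if (nx < 0 || nx >= width || ny < 0 || ny >= height) {
          p[i].set(random(width), random(height));
        }
      }
    }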

Given that these paintings have been "liquified", I deliberately chose artworks by Monet that featured water, be it a pond, a stream, a river, or the ocean. Following are image stills from the video and the name of the Monet painting that was used as the color source at that point during the video.

Claude Monet – Impression, Soleil Levant
Claude Monet – The Argenteuil Bridge
Claude Monet – Morning By The Sea
Claude Monet – Autumn On The Seine At Argenteuil
Claude Monet – Poplars At The Epte
Claude Monet – Water Lilies

The most time consuming aspect of this project was writing the program that produced the video stills. In all I used 3272 image stills (not counting the title and trailer images) to create this video.

Graphics Software Used

I created this video using several different software packages. The liquified/animated images used to construct the video were created with a program I wrote using the Processing creative coding platform, which is a framework built on Java. As a programming language, Processing is easily the best choice for non-programmers interested in creative coding projects. To stitch the individual images together into a video, I used the command line utility FFMPEG. To create my title and trailer images I used Adobe Photoshop CS4. Note that my workflow would have consisted entirely of "free" software if I had used GIMP (GNU Image Manipulation Program) to create these two images. For the soundtrack, I used Audacity to edit the mp3 sound file. The soundtrack music is Laideronnette Imperatrice Des Pagodes by Maurice Ravel. Finally, to assemble everything I used Microsoft's Windows Live Movie Maker, a free download for Windows 7.

In Conclusion

If you would like to know more about Claude Monet, you may want to read this biography of Claude Monet. If you are of a technical bent, there is this Wikipedia entry for vector fields, which covers the mathematical foundation upon which my Processing program was built. And while I don't often add new videos, you may want to follow me on YouTube.

I’ll close with a couple of noteworthy quotes.

When you go out to paint, try to forget what objects you have before you – a tree, house, a field….Merely think, here is a little square of blue, here an oblong of pink, here a streak of yellow, and paint it just as it looks to you, the exact color and shape, until it gives your own naive impression of the scene before you. – Claude Monet

A preliminary drawing for a wallpaper pattern is more highly finished than this seascape. – French art critic Louis Leroy in 1874 commenting on Monet’s Impression, Sunrise



Generative Art and the Mona Lisa Meme

Friday, March 28th, 2014

Generative Art from the Mona Lisa
left to right: Generation 3922, 5826, and 8187

I want to share with you the results of a recent experiment of mine using a creative process known as generative art. Personally I find that the most interesting aspect of generative art is in being surprised by the artwork that a system produces. Generative systems can produce artistic outcomes not anticipated by the programmer/artist and the image above is one such example. On the left is an image of the Mona Lisa as it appears after 3,922 generations of the generative art program I wrote. On the far right is the same image after 8,187 generations.

What is Generative Art?

For the purposes of my discussion here I’ll rely on an excerpt from the Wikipedia definition of generative art:

Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist… Generative Art is often used to refer to computer generated artwork that is algorithmically determined.

Source: Wikipedia definition of Generative Art

Why are you picking on the Mona Lisa?

When testing out various programs that rely on the use of a source image for input it is quite useful to have a single standard image to use. That makes it much easier to compare the workings of different programs. An analogy is that of Playboy centerfold and Swedish model Lena Söderberg. Lena was the centerfold for the November 1972 issue of Playboy magazine. Her centerfold photograph was first used as a test image for image processing experiments in the summer of 1973 at the USC Signal and Image Processing Institute (SIPI). Subsequently this photograph became a standard source image for the testing of image processing algorithms. In explaining the decision for the use of this image, David C. Munson, editor-in-chief of IEEE Transactions on Image Processing, had this to say:

"First, the image contains a nice mixture of detail, flat regions, shading, and texture that do a good job of testing various image processing algorithms. It is a good test image! Second, the Lena image is a picture of an attractive woman. It is not surprising that the (mostly male) image processing research community gravitated toward an image that they found attractive."

My test image of choice is Leonardo da Vinci’s painting Mona Lisa (La Gioconda in Italian). Because this painting is so well known and accessible, it is easier for people to "see" the results of a manipulation, distortion, or derivation of the original.

My Oscillating Generators

The generative art program that I wrote, which produced the illustrations at the head of this post, relies on a system of generators. You can think of each generator as simply being an independently functioning paintbrush.

In this particular run, I used 9200 generators (paintbrushes). Each generator (brush) has several characteristics: size, location, color, opacity, movement. In creating this system my idea was that each paintbrush would hover in the vicinity of its original location without straying too far. However, I did not provide a rule to enforce this behavior. Rather I left each brush free to go its own way.

To govern direction and speed I used a Perlin noise function that on the face of it was balanced. By balanced I mean that the system should have had no preferential direction. I was very much surprised at the results (shown above) from one of the several rule sets I had created.
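
To give a flavor of what such a system of generators might look like in code, the following is a hypothetical reconstruction in Processing, not the program that produced the images above. Each generator carries its own size, location, color (sampled from the source image), and opacity, and Perlin noise supplies its direction and speed; the filename and all parameter values are invented for illustration.

    // A hypothetical reconstruction of the generator idea -- not the actual
    // program. Each generator is an independent brush with its own size,
    // location, color, and opacity, with Perlin noise driving its movement.
    PImage src;
    Generator[] gens;

    class Generator {
      PVector pos;
      float sz, opacity, seed;
      color c;

      Generator(float x, float y) {
        pos = new PVector(x, y);
        sz = random(1, 4);
        opacity = random(10, 60);
        seed = random(1000);            // gives each brush its own noise track
        c = src.get(int(x), int(y));    // color sampled from the source image
      }

      void step() {
        // noise() supplies a smoothly varying heading and speed for this brush
        float angle = noise(seed, frameCount * 0.01) * TWO_PI * 4;
        float speed = noise(seed + 500, frameCount * 0.01) * 2;
        pos.add(cos(angle) * speed, sin(angle) * speed);
        noStroke();
        fill(c, opacity);
        ellipse(pos.x, pos.y, sz, sz);
      }
    }

    void setup() {
      size(800, 800);
      src = loadImage("monalisa.jpg");  // hypothetical filename
      src.resize(width, height);
      background(0);
      gens = new Generator[9200];       // the run described above used 9200
      for (int i = 0; i < gens.length; i++) {
        gens[i] = new Generator(random(width), random(height));
      }
    }

    void draw() {
      for (Generator g : gens) {
        g.step();
      }
    }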

For simplicity, each generator is unaware of the other generators in the system. For the next generation of this system, I plan on creating interacting generators. In such a system, when two generators encounter one another, they will react and/or interact. For example each could share with the other some or all of its characteristics. Each of these characteristics can be thought of as genetic material that can be shared.

So that you can better see the detailed progression of the system, I’m providing a large (1600 x 1600) image that shows the same subsection of the artwork as it progresses through generations. The leftmost section is from generation 3922, the middle section is from generation 5826, and the rightmost is from generation 8187.

Open image in new window – Generative Art Mona Lisa Triptych

For other examples of how the image of Mona Lisa has been used, check out the Processing artwork tagged with Mona Lisa at OpenProcessing.org



Swimming Eye Video

Monday, December 16th, 2013

Swimming Eye Video

I’ve just completed a video project titled Swimming Eye. This was yet another accidental project on my part as I was not planning on creating a video. Rather I was experimenting with using Processing to create an algorithmic painting program.

In experimenting with applying Perlin noise to a gridded particle field to create a large algorithmic paintbrush, I was struck by the nature of the ensuing motion. It was similar to that of a liquid surface in motion. The impression it made on me was that of a living painting: it wasn’t a static image but an image that had a life of its own.

My original idea of creating some rather unusual digital paintings using this methodology was replaced with the idea of creating a video. The image used as illustration above is representative of my original idea. It was created by stacking several individual movie frames together in Photoshop and using different layer blend modes to merge the individual images together.

Previously I wrote about using Windows Live Movie Maker to create a YouTube video (see Portrait Art Video Project). However I found that Movie Maker was not capable of turning jpeg images into a real movie. With Movie Maker, an image must remain on display for at least one second. This is fine if you want to use images to create a video slide show. However, it does not work when it comes to creating an animation. To translate my 1400 images into a movie, I wanted each image (frame) to display for 1/30th of a second (think 30 frames per second).

I tried using Avidemux, but it crashed repeatedly. In searching for an alternative I came across FFMPEG, a command line utility. It worked. With the basic video created, my next step was to come up with a soundtrack because I really didn’t want to create a silent movie.
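
As an aside for anyone who wants to reproduce the frame-stitching step, an FFMPEG invocation for turning a numbered image sequence into a 30 frames per second video looks something like the line below. The frame filename pattern and output name are hypothetical; this is not necessarily the exact command used here.

    ffmpeg -framerate 30 -i frame%04d.jpg -c:v libx264 -pix_fmt yuv420p swimming-eye.mp4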

Searching opsound.org, I located a public domain song that met my needs (thanks to Free Sound Collective for making their music available for use). I used Audacity to create a sound clip of the necessary time length. I used Movie Maker to add the mp3 to the video created by FFMPEG.

I hope you enjoy the show.

Don’t forget – art is in the eye of the beholder.

See the Swimming Eye Video on YouTube



Paper + Salt Solution + Electricity = Art

Tuesday, August 6th, 2013

Paper + Salt Solution + Electricity = Art

I was at Musecon this last weekend, both as an attendee and as a presenter. For more, see Musecon 2013 Creatives Convention. One of the programs I attended was Stand Back, I’m Going To Try Science! taught by Todd Johnson. One of the demonstrations Todd gave involved brushing a salt water solution onto matboard in a path that connected two electrodes. He then turned on the electricity, and the audience watched as a fractal-like path was burned into the paper by the electric current. Todd was able to exercise some control over the process by adjusting the amount of current being delivered and by using water to cool down hot spots. I found the pattern and coloring this process created fascinating. The photograph used to illustrate this article (above) is of one of Todd’s creations. Todd is perhaps best known for using a particle accelerator to zap acrylic blocks with millions of volts of electricity and then freeing the trapped electrons, thus creating Lichtenberg figures.

As I watched Todd creating these fascinating figures, it occurred to me to try to create an algorithmic art version of what I was seeing. The solution that immediately came to mind was diffusion limited aggregation (DLA), an algorithm for simulating the formation of structures by particles that are diffusing (moving) through some medium, in this case the surface of a virtual canvas. Such algorithms can be simple or complex. For example, in a more sophisticated DLA algorithm the particles can be made to interact with one another as they move. With respect to the medium, gravity and/or currents can be introduced to further influence the particles’ behavior. The rules by which particles create structure can also be defined in various ways, with the precise nature of the resulting structure being governed by the full interplay of the system’s rules and parameters.

I used the Processing programming language to implement my idea. The basic operation of the program allowed for:

  • the creation and destruction of particles;
  • variable opacity for the structure with opacity determined by the number of times a point was "hit" by a particle;
  • the ability for the user to create particle emitters to inject new particles into the system and to control where those particles appear;
  • the ability to globally modify particle velocity (note that velocity is a vector having both speed and direction);
  • the ability for the user to create seeding points for the structure by drawing with the mouse.
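
To illustrate the core of the approach, the following is a bare-bones DLA sketch in Processing: random walkers that stick when they touch the growing structure. It is a simplified stand-in, not the program described above, and it omits the interactive features in the list (emitters, velocity control, mouse seeding).

    // A bare-bones diffusion limited aggregation sketch: random walkers wander
    // until they touch the growing structure, at which point they stick.
    // Walkers are (re)spawned at random canvas positions -- a simplification of
    // classic DLA, where new particles would start far from the structure.
    boolean[][] stuck;
    int nWalkers = 500;
    PVector[] walkers = new PVector[nWalkers];

    void setup() {
      size(400, 400);
      background(255);
      stuck = new boolean[width][height];
      stuck[width/2][height/2] = true;          // a single seed point at the center
      for (int i = 0; i < nWalkers; i++) {
        walkers[i] = newWalker();
      }
    }

    PVector newWalker() {
      return new PVector(random(1, width - 2), random(1, height - 2));
    }

    void draw() {
      for (int i = 0; i < nWalkers; i++) {
        PVector w = walkers[i];
        // the "diffusion" step: a small random move, kept inside the canvas
        w.x = constrain(w.x + random(-1, 1), 1, width - 2);
        w.y = constrain(w.y + random(-1, 1), 1, height - 2);
        int x = int(w.x);
        int y = int(w.y);
        // the "aggregation" step: stick if any neighbor is part of the structure
        if (stuck[x+1][y] || stuck[x-1][y] || stuck[x][y+1] || stuck[x][y-1]) {
          stuck[x][y] = true;
          point(w.x, w.y);                      // default black stroke on white
          walkers[i] = newWalker();             // replace with a fresh walker
        }
      }
    }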

The picture below is the first image created after completing program debugging – which surprisingly was less of a hair-pulling experience than I expected it to be. Note that I used Adobe Photoshop to add a layer style that added a border stroke, a drop shadow, and a color overlay to enhance the appearance of the structure.

Diffusion Limited Aggregation algorithmic art example

As you can see, the structure created using my DLA algorithm is much bushier than the structure created by Todd. One key difference between our two methods is that Todd’s creation process builds structure from the inside out, while the DLA process I used builds structure from the outside in.

To see other examples of images created using diffusion limited aggregation, I suggest doing an image search using the term "diffusion limited aggregation". You will see a high degree of sameness to images created using this technique.

And the algorithm’s future…

I’m uncertain at this point as to what further development work I will do on this program. There are algorithmic alternatives to the diffusion limited aggregation approach I used. There are also many modifications I could make to my DLA implementation that would alter its behavior.

In closing I’ll leave you with this thought. Speaking as a programmer, it is said that ideas are cheap and code isn’t. Speaking as a digital artist, I can say that code is cheap and ideas aren’t. The truth of either statement really depends on your perspective.
