Category Archives: Processing

Before Us is the Salesman’s House

When the dust settles on the 21st century, and all of the GIFs have finished animating, the most important cultural artifacts left from the digital age may very well be databases.

How will the societies of the future read these colossal stores of information?

Consider the eBay databases, which contain information for every transaction that happens and has happened on the world’s biggest marketplace. $2,094 worth of goods are sold on eBay every second. The records kept about this buying and selling go far beyond dollars and cents. Time, location and identity come together with text and images to leave a record that documents both individual events, as well as collective trends across history and geography.

This summer, Mark Hansen and I created an artwork for the eBay headquarters in San Jose, which investigates this idea of the eBay database as a cultural artifact. Working in cooperation with eBay, Inc., and the ZERO1 Biennial, the piece was installed outside of the headquarters and ran from dusk to midnight, September 11th to October 12th.

As a conceptual foundation for the piece, we chose a much more traditional creative form than the database: the novel. The piece unfolds in a series of movements, and each movement begins with a selection of text. The first one every day was a stage direction from the beginning of Death of a Salesman which reads:

A melody is heard, played upon a flute. It is small and fine, telling of grass and trees and the horizon. The curtain rises.
Before us is the Salesman’s house. We are aware of towering, angular shapes behind it, surrounding it on all sides. Only the blue light of the sky falls upon the house and forestage; the surrounding area shows an angry glow of orange. As more light appears, we see a solid vault of apartment houses around the small, fragile-seeming home. An air of the dream clings to the place, a dream rising out of reality. The kitchen at center seems actual enough, for there is a kitchen table with three chairs, and a refrigerator. But no other fixtures are seen. At the back of the kitchen there is a draped entrance, which leads to the living room. To the right of the kitchen, on a level raised two feet, is a bedroom furnished only with a brass bedstead and a straight chair. On a shelf over the bed a silver athletic trophy stands. A window opens onto the apartment house at the side.

From this text, we begin by extracting items [1] that might be bought on eBay:

Flute, grass, trees, curtain, table, chairs, refrigerator. This list now serves as a kind of inventory, each item explored in a small set of data sketches which examine distribution: Where are these objects being sold right now? How much are they being sold for? What does the aggregate of all of the refrigerators sold in the USA look like?

From this map of objects for sale, the program selects one at random to act as a seed. For example, a refrigerator being sold for $695 in Milford, New Hampshire, will switch the focus of the piece to this town of fifteen thousand on the Souhegan river. The residents of Milford have sold many things on eBay over the years – but what about books? Using historical data, we investigate the flow of books into the town, both sold and bought by residents.

Finally, the program selects a book from this list [2] and re-starts the cycle, this time with a new extracted passage, new objects, new locations, and new stories. Over the course of an evening, about a hundred cycles are completed, visualizing thousands of current and historic exchanges of objects.

Ultimately, the size of a database like eBay’s makes a complete, close reading impossible – at least for humans. Rather than an exhaustive tour of the data, then, our piece can be thought of as a distant reading [3], a kind of fly-over of this rich data landscape. It is an aerial view of the cultural artifact that is eBay.

A motion sample of three movements from the piece can be seen in this video.

Before Us is the Salesman’s House was projected on a 30′ x 20′ semi-transparent screen, suspended in the entryway to the main building (I’m afraid lighting conditions were far from ideal for photography). It was built using Processing 2.0, MongoDB & Python. Special thanks to Jaime Austin, Phoram Meta, Jagdish Rishayur, David Szlasa and Sean Riley.

  1. Items are extracted through a combination of a text-analysis algorithm and, where needed, processing by helpful folks on Mechanical Turk.
  2. All text used comes from Project Gutenberg, a database of more than 40,000 free eBooks
  3. For more about distant reading, read this essay by Franco Moretti, or, for a summary, this article from the NYTimes

Infinite Weft @ Bridge Gallery until October 18th

Since early in the year, I have been working with my mother Diane Thorp to produce hand-woven textiles that contain non-repeating patterns. Weaving Information Files (WIFs) are produced via a custom-written software tool, and are then woven on a 16-harness floor loom equipped with an AVL Compu-Dobby interface.

Here’s a zoomable view of six metres (almost 20 feet) of the handwoven result:

You really (really) want to hit the fullscreen button and zoom in – it’s a 95 megapixel (25,283 x 3,738) image. Alternatively, you can go here to see it in a bigger window.

You can read more about the project here, and you can see a set of images documenting the project here. The next step is to weave a much longer section – we are aiming for something above 100′.

If you’re in New York, you can see the 6 metre long section from Infinite Weft, on exhibit at Bridge Gallery in NYC, until October 10th.

Bridge Gallery
98 Orchard Street
New York, NY 10002
Subway F, J, M, Z Delancey/Essex

Infinite Weft (Exploring the Old Aesthetic)

How can a textile function as a digital object? This is a central question of Infinite Weft, a project that I’ve been working on for the last few months. The project is a collaboration with my mother, Diane Thorp, who has been weaving for almost 40 years – it’s a chance for me to combine my usually screen-based digital practice with her extraordinary hand-woven work. It’s also an exploration of mathematics, computational history, and the concept of pattern.

Most of us probably know that the loom played a part in the early days of computing – the Jacquard loom was the first machine to use punch cards, and its workings were very influential in the early design of programmable machines (In my 1980s basement this history was actually physically embodied; sitting about 10 feet away from my mother’s two floor looms, on an Ikea bookshelf, sat a box of IBM punch cards that we mostly used to make paper airplanes out of). But how many of us know how a loom actually works? Though I have watched my mother weave many times, it didn’t take long at the start of this project to realize that I had no real idea how the binary weaving patterns called ‘drawdowns’ ended up making a pattern in a textile.
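For reference, the drawdown arithmetic itself turns out to be compact once untangled. Here’s a Javascript sketch (the variable names are my own invention, not taken from any weaving software) of how a threading, tie-up, and treadling combine into the grid of a drawdown:

```javascript
// Compute a drawdown: for each weft pick, which warp threads sit on top.
// threading[j] = the shaft that warp thread j is threaded on
// tieup[t]     = the list of shafts lifted by treadle t
// treadling[i] = the treadle pressed for weft pick i
function drawdown(threading, tieup, treadling) {
  return treadling.map(function (treadle) {
    var lifted = tieup[treadle]; // shafts raised on this pick
    return threading.map(function (shaft) {
      // 1 = warp on top (its shaft is lifted), 0 = weft on top
      return lifted.indexOf(shaft) !== -1 ? 1 : 0;
    });
  });
}

// A classic 2/2 twill on four shafts:
var threading = [0, 1, 2, 3, 0, 1, 2, 3];
var tieup     = [[0, 1], [1, 2], [2, 3], [3, 0]];
var treadling = [0, 1, 2, 3];
drawdown(threading, tieup, treadling).forEach(function (row) {
  console.log(row.join("")); // prints the twill's diagonal pattern
});
```

Every weaving pattern, however complex, reduces to this lookup – the hard part is choosing the three inputs.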

To teach myself how this process actually happened, I built a functional software loom, where I could see the pattern manifest itself in the warp and weft (if you have Chrome you can see it in action here – better documentation is coming soon). This gave me a kind of sandbox which let me see how typical weaving patterns were constructed, and what kind of problems I could expect when I started to write my own. And run into problems, I did. My first attempts at generating patterns were sloppy and boring (at best) and the generative methods I was applying weren’t very successful. Enter Ralph E. Griswold.

Ralph Griswold was a pioneering computer scientist, best known for developing the string programming language SNOBOL. He spent a decade at Bell Labs, studying non-numerical computation, and went on to become Regents’ Professor at the University of Arizona. After this illustrious career in computing, he shifted his attention to the mathematics of weaving. Mr. Griswold died in 2006, but he left behind a huge archive of resources for weavers and curious learners, including academic papers on pattern generation using cellular automata.

The first successful pattern possibilities for Infinite Weft came from applying and modifying the techniques in one of these papers. I built a simple interface in which I could advance the automata generation by generation, and switch between a set of very simple rules (courtesy of John von Neumann). Here’s what a sample pattern generated from these von Neumann automata looks like on the software loom:

von Neumann automata patterns on a software loom

And here’s a sample, woven on a table loom with black & white yarn to make the pattern clear:

Infinite Weft - Samples

While these techniques produced fairly satisfactory results, the automata themselves tended to repeat after not too many generations. You can alternate between rules, start with different ‘seed’ patterns, and adjust the threading of the loom to get a variety of finished patterns, but the underlying systems would inevitably cycle. What about a truly infinite weft?

As it turns out, there are some cellular automata that appear never to repeat. With a rule like Wolfram’s Rule 30, no one has found a period in the pattern it produces, nor a shortcut for predicting generation N+1 without computing every generation before it. Could I apply such an automaton to generate an infinite ‘pattern’? Well, I gave it a try, and the results look promising. Here is a look at a pattern generated using Wolfram’s Rule 30, a (quite possibly) universal cellular automaton:
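For the curious, one step of Rule 30 takes only a few lines. This is a bare Javascript sketch with wrap-around edges (the actual pattern generator used for the weaving is more involved): Rule 30 says a new cell is its left neighbour XORed with the OR of the centre and right cells.

```javascript
// One generation of Wolfram's Rule 30 on a fixed-width row of cells,
// using wrap-around edges. Rule 30: new cell = left XOR (center OR right).
function rule30Step(row) {
  return row.map(function (_, i) {
    var left   = row[(i - 1 + row.length) % row.length];
    var center = row[i];
    var right  = row[(i + 1) % row.length];
    return left ^ (center | right);
  });
}

// Start from a single live cell and print a few generations.
var row = [0, 0, 0, 1, 0, 0, 0];
for (var g = 0; g < 4; g++) {
  console.log(row.join(""));
  row = rule30Step(row);
}
```

Run generation after generation and the familiar chaotic Rule 30 triangle emerges – each row becoming one pick of the weave.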

IW - Process - Jan. 8 2012

And a similar pattern, hand-woven by my mother:

Infinite Weft - Samples

Now we get into some pretty interesting conceptual territory. In theory – if Rule 30 does turn out to be computationally universal – a long enough stretch of this woven textile would be Turing-complete: a computable fabric. Embedded somewhere in the pattern would be instructions to solve any conceivable problem. Beyond the math, this system also lets us challenge what we think of as a pattern, in a fabric context (after all, this pattern has really no pattern at all).

This project is still very much a work in progress – this blog post is a peek into the process and a chance to get some of my thoughts into writing. The next obvious step is to finalize work on the pattern generation, and get some large-scale textile woven on my mother’s ‘real loom’, which is a 16-harness floor loom (for this we’re going to need a computerized dobby head, which is a bit of an investment). I would also love to see other weavers outputting sections of this ‘infinite’ weft – please get in touch if you have a loom and would like to try weaving a section.

Source code for Infinite Weft is available in a public GitHub repository here.

And, as always, please don’t hesitate to leave a comment if you have any questions or suggestions.

Tutorial: Processing, Javascript, and Data Visualization

[The above graphic is an interactive 3D bar graph. If you can’t see it, it’s probably because your browser can’t render WebGL content. Maybe try Chrome or Firefox?]

Ben Fry and Casey Reas, the initiators of the Processing project, announced at the Eyeo Festival that a new version of Processing, Processing 2.0, will be released in the fall (VIDEO). This 2.0 release brings a lot of really exciting changes, including vastly improved 3D performance, and a brand-new high-performance video library. Perhaps most interesting is the addition of publishing modes – Processing sketches can now be published to formats other than the standard Java applet. The mode that is going to cause the most buzz around the interweb is Javascript mode: yes, Processing sketches can now be published, with one click, to render in pretty much any web browser. This is very exciting news for those of us who use Processing for data visualization – it means that our sketches can reach a much broader audience, and that the visual power of Processing can be combined with the interactive power of Javascript.

While preview releases of Processing 2.0 will be available for download very soon, you don’t have to wait to experiment with Processing and Javascript. The upcoming JS mode comes thanks to the team behind Processing.js, and we can use it to build sketches for the browser right now.

To get started, you can download this sample file. When you unzip the file, you’ll see a directory with three files – the index.html page, processing.js, and our sketch file, called tutorial.pde. If you open the sketch file in a text editor, you’ll see something like this:

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
}

void draw() {
  //instructions in here happen over and over again
}

If you’re a Processing user, you’ll be familiar with the setup and draw enclosures. If you’re not, the notes inside of each one explain what’s going on: when we run the sketch, the size is set to 500px by 500px, and the background is set to black. Nothing is happening (yet) in the draw enclosure.

You can see this incredibly boring sketch run if you open the index.html file in your browser (this will work in Firefox and Safari, but not in Chrome due to security settings).

Processing.js is a fully-featured port of Processing – all of the things that you’d expect to be able to do in Processing without importing extra libraries can be done in Processing.js. Which, quite frankly, is amazing! This means that many of your existing sketches will run, with no changes, in the browser.

Since this is a data visualization tutorial, let’s look at a simple representation of numbers. To keep it simple, we’ll start with a set of numbers: 13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3. (These numbers are obesity percentages in the U.S. population, between the 1960s and the present – but we’re really going to use them without context).

Typically, if we were to put these numbers into a Processing sketch, we’d construct an array of floating point numbers (numbers with a decimal place), which would look like this:

float[] myNumbers = {13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3};

However, we’re not in Java-land anymore, so we can make use of Javascript’s easier syntax (one of the truly awesome things about Processing.js is that we can include JS in our sketches, inline):

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

Our .pde file, then, looks like this:


var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
}

void draw() {
  //instructions in here happen over and over again
}

Now that we have our helpful list of numbers (called, cleverly, ‘numbers’), we can access any of the numbers using its index. Indexes in Processing (and Javascript) are zero-based, so to get the first number out of the list, we’d use numbers[0]. To get the fourth one, we’d use numbers[3]. Let’s use those two numbers to start doing some very (very) simple visualization. Indeed, the first thing we’ll do is to draw a couple of lines:

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
  
  //Draw two lines
  stroke(255);
  line(0, 50, numbers[0], 50);
  line(0, 100, numbers[4], 100);
}

void draw() {
  //instructions in here happen over and over again
}

OK. I’ll admit. That’s a pretty crappy sketch. But, we can build on it. Let’s start by making those lines into rectangles:

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
  
  //Draw two rectangles
  stroke(255);
  rect(0, 50, numbers[0], 20);
  rect(0, 100, numbers[4], 20);
}

void draw() {
  //instructions in here happen over and over again
}

At this point, I’m going to introduce my very best Processing friend, the map() function. What the map() function allows us to do is to adjust a number that exists in a certain range so that it fits into a new range. A good real-world example of this would be converting a number (0.5) that exists in a range between 0 and 1 into a percentage:

50 = map(0.5, 0, 1, 0, 100);

We can take any number inside any range, and map it to a new range. Here are a few more easy examples:

500 = map(5, 0, 10, 0, 1000);
0.25 = map(4, 0, 16, 0, 1);
PI = map(180, 0, 360, 0, 2 * PI);
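If you’re curious what map() is doing internally, it’s plain linear interpolation – here’s an equivalent in a few lines of Javascript (I’ve called it remap here to avoid colliding with Javascript’s built-in Array map):

```javascript
// Equivalent of Processing's map(): rescale value from the range
// [inLow, inHigh] to the range [outLow, outHigh] by linear interpolation.
function remap(value, inLow, inHigh, outLow, outHigh) {
  return outLow + (outHigh - outLow) * (value - inLow) / (inHigh - inLow);
}

console.log(remap(0.5, 0, 1, 0, 100)); // 50
console.log(remap(4, 0, 16, 0, 1));    // 0.25
```

Note that, like Processing’s map(), this doesn’t clamp – a value outside the input range will happily map outside the output range.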

In our sketch, the list of numbers that we’re using ranges from 13.4 to 34.3. This means that our rectangles are pretty narrow, sitting in the 500 pixel-wide sketch. So, let’s map those numbers to fit the width of our screen (minus 50 pixels as a buffer):

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
  
  //Draw two rectangles
  rect(0, 50, map(numbers[0], 0, 34.3, 0, width - 50), 20);
  rect(0, 100, map(numbers[4], 0, 34.3, 0, width - 50),  20);
}

void draw() {
  //instructions in here happen over and over again
}

Two bars! Wee-hoo! Now, let’s use a simple loop to get all of our numbers on the screen (this time we’ll use the max() function to find the biggest number in our list):

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
  
  for (var i = 0; i < numbers.length; i++) {
    //calculate the width of the bar by mapping the number to the width of the screen minus 50 pixels
    var w = map(numbers[i], 0, max(numbers), 0, width - 50);
    rect(0, i * 25, w, 20);
  }
}

void draw() {
  //instructions in here happen over and over again
}

We’re still firmly in Microsoft Excel territory, but we can use some of Processing’s excellent visual programming features to make this graph more interesting. For starters, let’s change the colours of the bars. Using the map() function again, we’ll colour the bars from red (for the smallest number) to yellow (for the biggest number):

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
  
  for (var i = 0; i < numbers.length; i++) {
    //calculate the amount of green in the colour by mapping the number to 255 (255 red & 255 green = yellow)
    var c = map(numbers[i], min(numbers), max(numbers), 0, 255);
    fill(255, c, 0);
    //calculate the width of the bar by mapping the number to the width of the screen minus 50 pixels
    var w = map(numbers[i], 0, max(numbers), 0, width - 50);
    rect(0, i * 25, w, 20);
  }
}

void draw() {
  //instructions in here happen over and over again
}

This process of mapping a number to be represented by a certain dimension (in this case, the width of the bars along with the colour) is the basic premise behind data visualization in Processing. Using this simple framework, we can map to a whole variety of dimensions, including size, position, rotation, and transparency.

As an example, here is the same system we saw above, rendered as a radial graph (note here that I’ve added a line before the loop that moves the drawing point to the centre of the screen; I’ve also changed the width of the bars to fit in the screen):

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500);
  background(0);
  
  //move to the center of the sketch before we draw our graph
  translate(width/2, height/2);
  for (var i = 0; i < numbers.length; i++) {
    //calculate the amount of green in the colour by mapping the number to 255 (255 red & 255 green = yellow)
    var c = map(numbers[i], min(numbers), max(numbers), 0, 255);
    fill(255, c, 0);
    //calculate the width of the bar by mapping the number to half the width of the screen minus 50 pixels
    var w = map(numbers[i], 0, max(numbers), 0, width/2 - 50);
    rect(0, 0, w, 20);
    //after we draw each bar, turn the sketch a bit
    rotate(TWO_PI/numbers.length);
  }
}

void draw() {
  //instructions in here happen over and over again
}

So far we’ve been keeping things very simple. But the possibilities using Processing and Javascript are really exciting. It’s very easy to add interactivity, and it’s even possible to take advantage of Processing’s 3D functionality in the browser.

Here’s our numbers again, rendered in 3D (note the change to the size() function), with the rotation controlled by the mouse (via our new friend the map() function):

var numbers = [13.4, 14.5, 15.0, 23.2, 30.9, 31.3, 32.9, 35.1, 34.3];

void setup() {
  //instructions in here happen once when the sketch is run
  size(500,500,P3D);
  background(0);
  
}

void draw() {
  //instructions in here happen over and over again
  background(0);
  //turn on the lights so that we see shading on the 3D objects
  lights();
  //move to the center of the sketch before we draw our graph
  translate(width/2, height/2);
  //Tilt about 70 degrees on the X axis - like tilting a frame on the wall so that it's on a table
  rotateX(1.2);
  //Now, spin around the Z axis as the mouse moves. Like spinning that frame on the table around its center
  rotateZ(map(mouseX, 0, width, 0, TWO_PI));
  for (var i = 0; i < numbers.length; i++) {
    //calculate the amount of green in the colour by mapping the number to 255 (255 red & 255 green = yellow)
    var c = map(numbers[i], min(numbers), max(numbers), 0, 255);
    fill(255, c, 0);
    //calculate the length of the bar by mapping the number to half the width of the screen minus 50 pixels
    var w = map(numbers[i], 0, max(numbers), 0, width/2 - 50);
    //move out 200 pixels from the center
    pushMatrix();
      translate(200, 0);
      box(20, 20, w);
    popMatrix();
    //after we draw each bar, turn the sketch a bit
    rotate(TWO_PI/numbers.length);
  }
}

Thirty lines of code, and we have an interactive, 3D data visualization (you can see this sketch running inline in the header). Now, I understand that this isn’t the best use of 3D, and that we’re only visualizing a basic dataset, but the point here is to start to imagine the possibilities of using Processing with Javascript in the browser.

For starters, Javascript lets us very easily import data from any JSON source with no manual parsing whatsoever – this means we can create interactive graphics that load dynamic data on the fly (more on this in the next tutorial). On top of that, because we can write Javascript inline with our Processing code, and we can integrate external Javascript functions and libraries (like jQuery) into our sketches, we can incorporate all kinds of advanced interactivity.
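As a small taste of the JSON side, here’s a hedged sketch – the field name obesityRates and the payload are made up for illustration, but the pattern is the same for any endpoint that returns an array of numbers:

```javascript
// A hypothetical JSON payload, as it might arrive from a data API.
var response = '{"obesityRates": [13.4, 14.5, 15.0, 23.2, 30.9]}';

// One call turns the raw text into a ready-to-use Javascript object -
// no hand-written parsing required.
var numbers = JSON.parse(response).obesityRates;

// The tutorial's drawing loop can then consume `numbers` directly:
var biggest = Math.max.apply(null, numbers);
numbers.forEach(function (n, i) {
  console.log("bar " + i + " width: " + (n / biggest) * 450);
});
```

Swap the string literal for the body of an XMLHttpRequest (or a jQuery getJSON call) and the same graph becomes live.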

I suspect that the release of Processing 2.0 will lead to a burst of Javascript-based visualization projects being released on the web. In the meantime, we can use the basic template from this tutorial to work with Processing.js – have fun!

All The Names: Algorithmic Design and the 9/11 Memorial

In late October, 2009, I received an e-mail from Jake Barton from Local Projects, titled plainly ‘Potential Freelance Job’. I read the e-mail, and responded with a reply in two parts: First, I would love to work on the project. Second, I wasn’t sure that it could be done.

The project was to design an algorithm for placement of names on the 9/11 memorial in New York City. In architect Michael Arad’s vision for the memorial, the names were to be laid out according to where people were and who they were with when they died – not alphabetically, nor placed in a grid. Inscribed in bronze parapets, almost three thousand names would stream seamlessly around the memorial pools. Underneath this river of names, though, an arrangement would provide a meaningful framework, one which allows the names of family and friends to exist together. Victims would be linked through what Arad terms ‘meaningful adjacencies’ – connections that would reflect friendships, family bonds, and acts of heroism. Through these connections, the memorial becomes a permanent embodiment of not only the many individual victims, but also of the relationships that were part of their lives before those tragic events.

Over several years, staff at the 9/11 Memorial Foundation undertook the painstaking process of collecting adjacency requests from the next of kin of the victims, creating a massive database of requested linkages. Often, several requests were made for each victim. There were more than one thousand adjacency requests in total, a complicated system of connections that all had to be addressed in the final arrangement. In mathematical terms, finding a layout that satisfied as many of these adjacency requests as possible is an optimization problem – a problem of finding the best solution among a myriad of possible ones. To solve this problem and to produce a layout that would give the Memorial Designers a structure to base their final arrangement of the names upon, we built a software tool in two parts: First, an arrangement algorithm that optimized this adjacency problem to find the best possible solution. And second, an interactive tool that allowed for human adjustment of the computer-generated layout.

The Algorithm

The solution for producing a solved layout for the names arrangement sat at the bottom of a precariously balanced stack of complex requirements.

First, there was the basic spatial problem – the names for each pool had to fit, evenly, into a set of 76 panels (18 panels per side plus one corner). 12 of these panels were irregularly shaped (the corners and the panels adjacent to the corners, as seen in the image at the top of this post). Because the names were to appear in one continuous flowing group around each pool, some names had to overlap between panels, crossing a thin, invisible expansion joint between the metal plates. This expansion joint was small enough that it could fit in the space between many of the first and last names (or middle initials), but with certain combinations of letterforms (for example, a last name starting with a J, or a first name ending with a y), names were unable to cross this gap. As a result, the algorithm had to consider the typography of each name as it was placed into the layout.
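To make the letterform constraint concrete, here is a toy Javascript sketch of that kind of check. The letter sets are illustrative guesses of mine, not the memorial’s actual typographic rules:

```javascript
// Hypothetical sets of letterforms assumed to overhang the expansion
// joint - illustrative guesses, not the memorial's real rules.
var badLeading  = ["J"];      // overhang when starting the next name
var badTrailing = ["y", "j"]; // overhang when ending the previous name

// Can the boundary between two adjacent names fall on a panel joint?
function canCrossJoint(leftName, rightName) {
  var lastChar  = leftName.charAt(leftName.length - 1);
  var firstChar = rightName.charAt(0);
  return badTrailing.indexOf(lastChar) === -1 &&
         badLeading.indexOf(firstChar) === -1;
}

console.log(canCrossJoint("Mary", "Trentini"));  // false - "Mary" ends in y
console.log(canCrossJoint("Hernando", "Salas")); // true
```

The real system had to run a check like this for every candidate placement, which is part of why typography and optimization were so entangled.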

Of course, the most challenging problem was to place the names within the panels while satisfying as many of the requested adjacencies as possible. There were more than 1200 adjacency requests that needed to be addressed. One of the first things that I did was to get an idea of how complex this network of relations was by building some radial maps:

Clearly, there was a dense set of relations that needed to be considered. On top of that, the algorithm needed to be aware of the physical form of each of the names, since longer names could offer more space for adjacency than smaller ones. Because each person could have more than one adjacency request, there were groups of victims who were linked together into large clusters of relationship – the largest one of these contained more than 70 names. Here is a map of the adjacency clusters within one of the major groupings:

On top of these crucial links between victim names, there were a set of larger groupings in which the names were required to be placed: affiliations (usually companies), and sub-affiliations (usually departments within companies). To complicate matters, many of the adjacency requests linked people in different affiliations and subaffiliations, so the ordering of these groupings had to be calculated in order to satisfy both the adjacency requests and the higher-level relationships.

At some point I plan on detailing the inner workings of the algorithm (built in Processing). For the time being, I’ll say that the total process is in fact a combination of several smaller routines: first, a clustering routine to make discrete sets of names in which the adjacency requests are satisfied. Second, a space filling process which places the clusters into the panels and fills available space with names from the appropriate groupings. Finally, there is a placement routine which manages the cross-panel names, and adjusts spacing within and between panels.
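Until then, a toy Javascript version of the first routine gives the flavour: union adjacency-linked names into connected clusters. (The production algorithm also weighed typography, affiliations, and panel capacity, none of which appear here.)

```javascript
// Toy clustering: group names linked by adjacency requests into
// connected clusters, using a simple union-find over name strings.
function clusterNames(names, requests) {
  var parent = {};
  names.forEach(function (n) { parent[n] = n; });

  // Follow parent pointers up to each name's cluster root.
  function find(n) {
    while (parent[n] !== n) n = parent[n];
    return n;
  }

  // Each request merges the two names' clusters.
  requests.forEach(function (pair) {
    parent[find(pair[0])] = find(pair[1]);
  });

  // Collect names by root into cluster arrays.
  var clusters = {};
  names.forEach(function (n) {
    var root = find(n);
    (clusters[root] = clusters[root] || []).push(n);
  });
  return Object.keys(clusters).map(function (k) { return clusters[k]; });
}

var names = ["A", "B", "C", "D", "E"];
var requests = [["A", "B"], ["B", "C"], ["D", "E"]];
console.log(clusterNames(names, requests)); // two clusters: A,B,C and D,E
```

In the real data, chains of requests like these are what produced clusters of 70-plus names that then had to be fitted into the panels as units.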

The end result from the algorithm is a layout which satisfies as many of the adjacency requests as completely as possible. With this system, we were able to produce layouts which satisfied more than 98% of the requested adjacencies.

The Tool

Early on in the project, it became clear that the final layout for the names arrangement would not come directly from the algorithm. While the computerized system was able to solve the logistical problems underlying the arrangement, it was not as good at addressing the myriad of aesthetic concerns. The final layout had to be reviewed by hand – the architects needed to be able to meticulously adjust spacing and placement so that the final layout would be precisely as they wanted it. With this in mind, we built a custom software tool which allowed the memorial team to make custom changes to the layout, while still keeping track of all of the adjacencies.

The tool, again built in Processing, allowed users to view the layouts in different modes, easily move names within and between panels, get overall statistics about adjacency fulfillment, and export SVG versions of the entire system for micro-level adjustments in Adobe Illustrator. In the image above, we see an early layout for the South Pool. The colouring here represents two levels of grouping within the names. Main colour blocks represent affiliations, while shading within those colour blocks represents sub-affiliations. Below, we see a close-up view of a single panel, with several names placed in the ‘drawer’ below. Names can be placed in that space when they need to be moved from one panel to another. Note that adjacency indications remain active while the names are in the drawer.

Other features were built in to make the process of finalizing the layout as easy as possible: users could search for individual names, as well as affiliations and sub-affiliations; a change-tracking system allowed users to see how a layout had changed over multiple saved versions, and a variety of interface options allowed for precise placement of names within panels. Here is a video of the tool in use, again using a preliminary layout for the South Pool:

Computer Assisted Design

It would be misleading to say that the layout for the final memorial was produced by an algorithm. Rather, the underlying framework of the arrangement was solved by the algorithm, and humans used that framework to design the final result. This is, I think, a perfect example of something that I’ve believed in for a long time: we should let computers do what computers do best, and let humans do what humans do best. In this case, the computer was able to evaluate millions of possible solutions for the layout, manage a complex relational system, and track a large set of important variables and measurements. Humans, on the other hand, could focus on aesthetic and compositional choices. It would have been very hard (or impossible) for humans to do what the computer did. At the same time, it would have been very difficult to program the computer to handle the tasks that were completed with such dedication and precision by the architects and the memorial team.

The Weight of Data

This post has been a technical documentation of a very challenging project. Of course, on an emotional level, the project was also incredibly demanding. I was very aware throughout the process that the names that I was working with were the names of fathers and sons, mothers, daughters, best friends, and lovers.

Lawrence Davidson. Theresa Munson. Muhammadou Jawara. Nobuhiro Hayatsu. Hernando Salas. Mary Trentini.

In the days and months that I worked on the arrangement algorithm and the placement tool, I found myself frequently speaking these names out loud. Though I didn’t personally know anyone who was killed that day, I would come to feel a connection to each of them.

This project was a very real reminder that information carries weight. While the names of the dead may be the heaviest data of all, almost every number or word we work with bears some link to a significant piece of the real world. It’s easy to download a data set – census information, earthquake records, homelessness figures – and forget that the numbers represent real lives. As designers, artists, and researchers, we always need to consider the true source of our data, and the moral responsibility that it carries.

Ultimately, my role in the construction of the memorial was a small and largely invisible one. Rightly, visitors to the site will never know that an algorithm assisted in its design. Still, it is rewarding to think that my work with Local Projects has helped to add meaning to such an important monument.

The 9/11 Memorial will be dedicated on September 11, 2011 and opens to the public with the reservation of a visitor pass on September 12th.