138 Years of Popular Science (2011)
In the Fall of 2011, I was asked by the publishers of Popular Science magazine to produce a visualization piece that explored the archive of their publication. PopSci has a history that spans almost 140 years, so I knew there would be plenty of material to draw from. Working with Mark Hansen, I ended up making a graphic that showed how different technical and cultural terms have come in and out of use in the magazine since its inception.
Project Cascade (2010 – 2011)
Project Cascade visualizes the sharing activity of New York Times content over social networks. Built at The New York Times Company Research and Development Lab, Cascade is an interactive, exploratory tool that is presented in a number of environments – including a 5-screen video wall. Accessing a constantly-updated database of sharing events, the system constructs sharing structures called ‘cascades’ in near-real time. These cascades can be analyzed and explored through a novel, three-dimensional interface.
Random Number Multiples (2011)
Two prints from the inaugural edition of the Random Number Multiples series – a project that produces screenprints from the work of computational artists and designers. Both are abstractions of word frequency visualizations that I created using Processing and the NYTimes Article Search API. The first, titled ‘RGB – NYT Word Frequency’, shows usage of the words ‘red’, ‘green’, and ‘blue’ in the Times between 1981 and 2011. My second print visualizes the terms ‘hope’ and ‘crisis’ over the same time period. Prints are available for purchase at the Random Number Multiples site.
Sustained Silent Reading (2010)
Sustained Silent Reading was a 2010 installation at the Gottesman Library at Columbia University. This project was built while I was the artist-in-residence at EdLab, at Teachers College. Sited in a tucked-away corner of the library, the system uses semantic analysis to ‘read’ through a base of content.
Wired UK, August 2010 (2010)
This print piece looks at a very, very big data set – cellular phone records from a pool of 10 million users in an anonymous European country. This data came (under a very strict layer of confidentiality) from Barabási Lab in Boston, where they have been using this information to find out some fascinating things about human mobility patterns.
Haiti Earthquake aid – in Avatar minutes (2010)
In early 2010, the Guardian’s data store released a list showing how much different countries and organizations had pledged to the Haiti earthquake aid effort. I built a visualization tool to turn these numbers into something real – first, I asked how much money was being pledged per citizen of these countries. Then I took that figure and converted it to Avatar minutes: how many minutes of Avatar would this earthquake aid pay for?
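The conversion above is simple arithmetic, and can be sketched in a few lines. Avatar’s roughly $237 million budget and 162-minute runtime are widely reported figures, but the pledge and population values below are placeholders for illustration, not numbers from the Guardian’s data (the original tool was not built in Python).

```python
# Sketch of the 'Avatar minutes' conversion. Budget and runtime are widely
# reported figures; the pledge and population values are placeholders.
AVATAR_BUDGET_USD = 237_000_000   # approximate production budget
AVATAR_RUNTIME_MIN = 162          # theatrical runtime, in minutes
COST_PER_MINUTE = AVATAR_BUDGET_USD / AVATAR_RUNTIME_MIN

def aid_per_citizen(pledge_usd, population):
    """A country's pledge expressed per citizen of that country."""
    return pledge_usd / population

def avatar_minutes(pledge_usd):
    """How many minutes of Avatar a pledge would have paid for."""
    return pledge_usd / COST_PER_MINUTE

# e.g. a hypothetical $100M pledge from a country of 30M people
print(aid_per_citizen(100_000_000, 30_000_000))  # dollars per citizen
print(avatar_minutes(100_000_000))               # minutes of Avatar
```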
Code.lab (2010)
During the Olympic Games in Vancouver in 2010, tens of thousands of people descended on Granville Island every day. Already one of the most photographed locations in Vancouver, the Island was shot from every angle: images from tourists’ cameras and cell-phones mixed with surveillance and media images to document every moment from every conceivable perspective. In this photo-saturated environment, who played the observer? Who was the observed? Code.lab was a publicly-sited art project that asked visitors to consider these questions and their often ambiguous answers. A combination of pedagogy, performance, and interactive installation, Code.lab was a unique collaboration between artists, students, and the public.
9/11 Memorial Names Arrangement Algorithm & Placement Tool (2010)
Working with Local Projects, I designed an algorithm and an accompanying software tool to aid in the placement of the nearly 3,000 names on the 9/11 Memorial in Manhattan. Built in Processing, these computational tools allowed the designers of the memorial to satisfy the nearly 1,500 ‘meaningful adjacency’ requests made by family members of 9/11 victims.
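A toy version of the placement problem can help make the idea of ‘meaningful adjacency’ concrete: treat each request as an edge between two names, then chain requested neighbours together so they end up side by side. This is an illustrative Python sketch under that assumption – not the Processing tool built for the memorial, and it ignores the real constraints such as panel boundaries and group affiliations.

```python
# Hypothetical sketch: each 'meaningful adjacency' request links two names;
# we greedily walk chains of requested neighbours to build a linear order
# in which requested pairs sit next to each other where possible.
from collections import defaultdict

def order_names(names, adjacency_requests):
    """Greedily chain names so that requested pairs end up adjacent."""
    neighbours = defaultdict(set)
    for a, b in adjacency_requests:
        neighbours[a].add(b)
        neighbours[b].add(a)

    ordered, seen = [], set()
    for name in names:
        if name in seen:
            continue
        current = name
        while current is not None:
            ordered.append(current)
            seen.add(current)
            # follow an unplaced requested neighbour, if any
            current = next((n for n in neighbours[current] if n not in seen), None)
    return ordered

names = ["Alice", "Bob", "Carol", "Dave"]
requests = [("Alice", "Carol"), ("Bob", "Dave")]
print(order_names(names, requests))  # → ['Alice', 'Carol', 'Bob', 'Dave']
```

The real problem is far harder – with ~1,500 interlocking requests across ~3,000 names, a greedy pass like this would leave many requests unsatisfied, which is why a dedicated algorithm and tool were needed.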
Two Sides of the Same Story (2009)
Built in Processing, this tool allows for free comparison of any two bodies of text. Words are positioned relative to their occurrence in each article. By clicking on individual words you can see how they were used within the context of the text. Released as an open source project in 2010, it is available for download from the ‘Source Code & Tutorials’ link at the top of this page.
Wired UK, July 2009 (2009)
This two-page print piece examines data associated with the UK’s National DNA Database – the largest forensic database in the world. Using a series of generative graphics, the piece investigates the discrepancies between the demographics of the database and of the UK population in general.
GoodMorning! (2009)
GoodMorning! is a Twitter visualization tool that shows about 11,000 ‘good morning’ tweets over a 24-hour period, rendering a simple sample of Twitter activity around the globe. The tweets are colour-coded: green blocks are early tweets, orange ones are around 9am, and red tweets are later in the morning. Black blocks are ‘out of time’ tweets which said good morning (or a non-English equivalent) at a strange time in the day.
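The colour mapping described above boils down to binning each tweet by its local posting hour. A minimal Python sketch, assuming threshold hours that are my own guesses rather than the exact cut-offs used in GoodMorning! (which was built in Processing):

```python
# Sketch of the colour-coding step: local posting hour → block colour.
# The threshold hours are assumptions, not the project's actual values.
def tweet_colour(local_hour):
    if local_hour < 6 or local_hour > 12:
        return "black"   # 'out of time' good-mornings
    if local_hour < 8:
        return "green"   # early tweets
    if local_hour < 10:
        return "orange"  # around 9am
    return "red"         # later in the morning

print(tweet_colour(9))   # → orange
```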
Just Landed (2009)
Just Landed finds tweets containing the phrases ‘just landed in…’ and ‘just arrived in…’ and provides a map-based visualization of these tweets over time. Built in Processing, the tool proposes a model for determining human mobility patterns from social media ‘exhaust’.
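The core extraction step – pulling a destination out of a ‘just landed in…’ tweet – can be sketched with a regular expression. Just Landed was built in Processing, so this Python version is an illustrative stand-in, and the pattern is my own approximation:

```python
import re

# Approximate phrase matcher for 'just landed in…' / 'just arrived in…' tweets.
# The character class and boundaries are assumptions, not the original pattern.
PATTERN = re.compile(r"just (?:landed|arrived) in ([A-Za-z .'-]+)", re.IGNORECASE)

def extract_destination(tweet):
    """Return the place name following the trigger phrase, or None."""
    match = PATTERN.search(tweet)
    return match.group(1).strip() if match else None

print(extract_destination("Just landed in New York! Great flight."))  # → New York
```

In the full pipeline, each extracted place name would then be geocoded to latitude/longitude before being drawn on the map.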
NYTimes: 365/360 (2009)
Built in Processing, this set of visualizations shows the top organizations and personalities for every year from 1985 to 2001, by occurrence in the New York Times. Connections between these people & organizations are indicated by lines.
Glocal Image Breeder (2008)
The Glocal Image Breeder is an experimental search tool for a database of 8,000 images. Using genetic algorithms, the system can suggest images from the database which could conceivably be ‘children’ of any two images that the user suggests. Starting with these ‘Adam and Eve’ images, the system creates imagined phylogenies of relationship within the Glocal image database.
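The ‘child’ suggestion idea can be sketched in miniature: represent each image as a feature vector, cross two parents over, and return whichever database image lies closest to the result. The vector representation and distance measure here are assumptions for illustration – the source does not describe the Glocal system’s internals.

```python
import random

# Illustrative sketch: images as feature vectors. A 'child' is a uniform
# crossover of two parents; the suggested image is the database entry
# nearest to that child (squared Euclidean distance).
def crossover(parent_a, parent_b, rng=None):
    rng = rng or random.Random()
    return [a if rng.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def nearest(vector, database):
    return min(database, key=lambda v: sum((x - y) ** 2 for x, y in zip(vector, v)))

def suggest_child(parent_a, parent_b, database):
    """Suggest a database image that could plausibly be a 'child' of the parents."""
    return nearest(crossover(parent_a, parent_b), database)

database = [[0, 0], [1, 1], [0, 1], [5, 5]]
print(suggest_child([0, 0], [1, 1], database))
```

Repeating this with each suggested child as a new parent is what lets the system grow the branching ‘phylogenies’ described above.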