INTERP

  • A project by Jeff Thompson
  • Commissioned by Turbulence

INTERP is a series of digital sculptures generated by blending 100 unrelated photographs, placing them into simulated three-dimensional space, and importing them into photogrammetry software, tricking it into thinking the photographs are of a single object.

Every photograph in my library (approximately 12,000 images when the project began in 2012) was used as the input data set. In much of my work, I am interested in "useless" and culturally derived data sets, so rather than use an arbitrary archive of photographs (a Google image search for a particular term, for example), it seemed more natural to use a finite set that I had generated.

HOW THE MODELS WERE MADE

To merge the photographs into 3D models, I used PhotoScan, a photogrammetry application similar to the popular 123D Catch but offline and more flexible. The images first had to be processed so the software would be tricked into thinking they were all of the same object; after several tests, groups of 100 images faded into each other worked best.
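
The fading step itself is straightforward image interpolation. As a rough sketch of the idea (not the project's actual code; the real batch scripts are on the GitHub page), blending two photographs over 100 steps in Processing might look like this, with placeholder filenames:

  // Fade one photograph into another over 100 steps.
  // Filenames and output naming are placeholders; see the project
  // GitHub page for the actual batch-processing code.
  PImage imgA, imgB;
  int steps = 100;

  void setup() {
    imgA = loadImage("photoA.jpg");
    imgB = loadImage("photoB.jpg");
    imgB.resize(imgA.width, imgA.height);        // both images need matching dimensions
    imgA.loadPixels();
    imgB.loadPixels();

    for (int i = 0; i < steps; i++) {
      float amt = i / float(steps - 1);          // 0.0 (all A) to 1.0 (all B)
      PImage blended = createImage(imgA.width, imgA.height, RGB);
      blended.loadPixels();
      for (int p = 0; p < blended.pixels.length; p++) {
        blended.pixels[p] = lerpColor(imgA.pixels[p], imgB.pixels[p], amt);
      }
      blended.updatePixels();
      blended.save("blend-" + nf(i, 3) + ".png");  // blend-000.png ... blend-099.png
    }
    exit();
  }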

A photograph, placed in a false 3D room.

This process was automated using a series of small programs written in Processing (see the project GitHub page for full source code). The images were interpolated one into another and saved into separate folders. Because of the way photogrammetry works, a background is required for the software to triangulate the position of the camera. For this reason, the interpolated photographs were placed into a fake 3D room, and the "room" was rotated a full 360° over the course of the 100 images.
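
A much-simplified sketch of that "fake room" step might look like the following, with the room reduced to a single textured backdrop and the camera fixed; the dimensions, filenames, and rotation details here are assumptions rather than the project's actual values:

  // Place an interpolated photograph in a "fake 3D room" (reduced here to
  // one textured backdrop) and rotate the room 360 degrees across 100 frames.
  PImage photo, wallTexture;
  int totalFrames = 100;
  int frameNum = 0;

  void setup() {
    size(1024, 768, P3D);
    photo = loadImage("blend-000.png");          // one interpolated photograph
    wallTexture = loadImage("wall.jpg");         // placeholder room texture
    textureMode(NORMAL);
    noStroke();
  }

  void draw() {
    background(0);

    // the "room", rotated a little further on every frame
    pushMatrix();
    translate(width / 2, height / 2, -600);
    rotateY(TWO_PI * frameNum / totalFrames);
    beginShape();
    texture(wallTexture);
    vertex(-800, -600, 0, 0, 0);
    vertex( 800, -600, 0, 1, 0);
    vertex( 800,  600, 0, 1, 1);
    vertex(-800,  600, 0, 0, 1);
    endShape(CLOSE);
    popMatrix();

    // the interpolated photograph, floating in front of the backdrop
    pushMatrix();
    translate(width / 2, height / 2, -200);
    beginShape();
    texture(photo);
    vertex(-320, -240, 0, 0, 0);
    vertex( 320, -240, 0, 1, 0);
    vertex( 320,  240, 0, 1, 1);
    vertex(-320,  240, 0, 0, 1);
    endShape(CLOSE);
    popMatrix();

    save("room-" + nf(frameNum, 3) + ".png");    // room-000.png ... room-099.png
    frameNum++;
    if (frameNum >= totalFrames) exit();
  }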

Point cloud created by PhotoScan.

The resulting images were then fed into PhotoScan for camera alignment, meshing into a 3D surface, and the creation of a texture file that overlays cut-up bits of the original photographs onto the model. The resulting 3D files were rather large (approximately 60MB each with texture files), so each model's polygon count was reduced and its texture file compressed using a mix of Python/Blender automation, Photoshop batch processing, and Processing sketches. While the resulting quality is lower, the models load much faster for web viewing.
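
As one example of what the Processing side of that cleanup could look like, a sketch along these lines would batch-downscale a folder of texture files; the folder name, file pattern, and target width are assumptions, and finer JPEG quality control would have been handled elsewhere (for example in the Photoshop batch step):

  // Batch-downscale texture files for faster web loading.
  // Folder name and 1024px target width are assumptions.
  import java.io.File;

  void setup() {
    File dir = new File(sketchPath("textures"));
    File[] files = dir.listFiles();
    if (files == null) {
      println("No 'textures' folder found next to the sketch");
      exit();
      return;
    }
    for (File f : files) {
      if (!f.getName().toLowerCase().endsWith(".jpg")) continue;
      PImage tex = loadImage(f.getAbsolutePath());
      tex.resize(1024, 0);                       // 0 = keep aspect ratio
      tex.save(sketchPath("small_" + f.getName()));
    }
    exit();
  }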

STATISTICS AND CLASSIFICATION

The resulting models include statistical data, which can also be used to sort and filter the entire set: the volume of the model in cubic millimeters, the average hue and brightness of the texture (excluding gray, blank areas), and the year the source photographs were taken.
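
The exact method used to compute the color statistics isn't documented here, but the idea might look something like the sketch below, which averages hue and brightness while skipping low-saturation pixels; the filename and saturation threshold are guesses, and the naive mean ignores hue's circular wrap-around:

  // Average hue and brightness of a texture, skipping gray/blank areas.
  PImage tex;

  void setup() {
    colorMode(HSB, 360, 100, 100);
    tex = loadImage("model-texture.jpg");        // placeholder filename
    tex.loadPixels();

    float hueSum = 0, briSum = 0;
    int counted = 0;
    for (int i = 0; i < tex.pixels.length; i++) {
      color c = tex.pixels[i];
      if (saturation(c) < 10) continue;          // skip gray/blank pixels (threshold is a guess)
      hueSum += hue(c);
      briSum += brightness(c);
      counted++;
    }
    if (counted > 0) {
      println("average hue:        " + hueSum / counted);
      println("average brightness: " + briSum / counted);
    }
    exit();
  }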

Model classifications (clockwise): blob, satellite, box, wing, exploding.

Additionally, the resulting models displayed a surprising conformity to several basic classifications. Named for their resemblance to real-world structures, the classifications are:

  • blob: rounded and bulbous shapes
  • satellite: shapes divided into multiple parts, often orbiting a main shape like a moon
  • box: rectangular shapes, usually the vestige of the fake 3D room
  • wing: flat shapes, usually with rounded ends and rough surfaces
  • exploding: shapes that emanate out from a central point

SOURCE CODE, LICENSE

All images, models, and source code for this project are released under a Creative Commons BY-NC-SA license. Feel free to use them, but please let me know.

THANK YOU

INTERP is a 2013 commission of New Radio and Performing Arts, Inc. for its Turbulence.org website, made possible with funding from the National Endowment for the Arts. This project would not have been possible without their generous support.

Thanks also to the makers of Lato, Font Awesome, and Three.js.

www.jeffreythompson.org