Interview: Jason Freeman

Jason Freeman received his B.A. in music from Yale University and his M.A. and D.M.A. in composition from Columbia University. He is currently an assistant professor of music at the Georgia Institute of Technology in Atlanta. The recipient of numerous awards, including two Turbulence commissions (for N.A.G. in 2003 and Graph Theory in 2006) and a 2005 Rhizome commission for iTunes Signature Maker, Freeman has had his works performed all over the world.

What his works have in common is that they break down conventional barriers between composers, performers, and listeners, using new technology and unconventional notation to turn audiences and musicians into compositional collaborators.

Helen Thorington: Jason, your work Glimmer was one of the first in which you engaged the concert audience as musical collaborators. Would you tell us about this? Did you compose a score and if not, what did you do? How did you feel about giving up control of the performance to the audience? Were you pleased with the performance? Or was the performance less important to you than providing a way for the concert audience to experience making music?

What did you learn from this experience? What would you do differently if you had the opportunity to produce Glimmer again?

Jason Freeman: Before I answer this question, I want to propose a (personal) definition of networked music so we have a framework within which to consider this work.

In one sense, almost all music is networked music: whenever musicians play together, their eyes and ears are connected by a complex, real-time network of aural and visual signals that have a tremendous impact on what they play and how they play it. And musicians are usually part of a second network as well, which connects them back to the composer who created the score and the listeners who hear the performance (or a recording of it).

That formulation, of course, is too broad to be particularly useful. So here is a more restricted version: networked music is music in which we consciously manipulate, transform, or mediate the connections between performing musicians and/or between the composer, performers, and listeners.

Now on to Glimmer. The American Composers Orchestra asked me to write a piece for chamber orchestra that used technology and was fun. I enjoy orchestral concerts, but “fun” isn’t a word I normally associate with them; I usually think of sitting in a dark hall, watching a conductor whose back is turned towards me, hoping not to cough or sneeze or applaud in the wrong place by mistake. So I decided that if I involved the audience in actively shaping the performance, I could subvert some of these formal aspects of orchestral performances and make things a little more fun.

The resulting piece used light to create network connections between composer, audience, and performers that do not exist in typical orchestral performances. Each of the 600 audience members was given a light stick. Four video cameras captured images of the audience, and a video analysis computer analyzed those images to generate data describing their activities. A second computer used that data to generate a musical score for the musicians, and it changed the colors of lights mounted on each musician’s music stand to tell them what notes to play and when and how loud to play them. There was no conductor.

There was also no traditional musical score. Each musician’s part was a key showing how different colors, color brightnesses, and flashes of light corresponded to particular pitches, dynamics, and accents for them to play. There was an overall score for the work as well; it described the technical and conceptual architecture of the piece, went through all the logistics of mounting a performance of it, and described the overall structure of the algorithms and mappings (and their changes over time) via graphical diagrams and occasional bits of conventional music notation.
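The core of that color key can be sketched in a few lines. This is a minimal illustration, not Glimmer's actual algorithm: the function name, the four-color palette, and the brightness thresholds are all assumptions, standing in for the richer mappings the piece used.

```python
def stand_signal(section_brightness, palette, scale):
    """Map a section's measured light-stick activity (0.0-1.0) to a
    stand-light color, a pitch, and a dynamic. Hypothetical sketch of
    the kind of mapping described, not the piece's real algorithm."""
    # pick a color band proportional to how active the section is
    idx = min(int(section_brightness * len(palette)), len(palette) - 1)
    color = palette[idx]
    # each color corresponds to a pitch in the musician's key
    pitch = scale[idx % len(scale)]
    # brighter audience activity -> louder playing instruction
    dynamic = "ff" if section_brightness > 0.7 else "mf" if section_brightness > 0.3 else "pp"
    return color, pitch, dynamic

palette = ["blue", "green", "yellow", "red"]
scale = ["C4", "D4", "E4", "G4"]
print(stand_signal(0.85, palette, scale))  # → ('red', 'G4', 'ff')
print(stand_signal(0.10, palette, scale))  # → ('blue', 'C4', 'pp')
```

In the real piece this ran continuously: the video analysis computer fed activity data to a second machine, which updated each stand light in real time.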

I enjoy giving up control to the audience in performances such as this one. One of the challenges I enjoy most about composing music is that there is always a gap between the musical score on the page and the experience of that score being performed. As I’ve developed as a composer and a musician, I’ve gotten better at imagining how my scores will sound, but because humans are involved in each performance, there is always a wonderful element of surprise. (This is the main reason, by the way, that composing electroacoustic music for tape has never interested me; the human aspect of live performance disappears.) In this sense, audience participation simply changes the order of magnitude of that gap between score (or software) and performance. I’m essentially opening up aspects of the score itself to change during performance based on audience activities, in addition to the interpretive freedom already exercised by the musicians.

I was pleased with the premiere performance of Glimmer in New York. It was a bit of a trial by fire — my first work combining audience participation and conventional musicians, and the scope was huge (600 audience members, 25 musicians) and the logistics daunting (only 5-6 hours of tech setup and a single 1-hour rehearsal with the orchestra). It was a bit of a miracle that everything worked.

When I listen to an audio recording of the performance on its own, there are moments where everything seems to magically fit together and moments where the music seems to lose momentum. But with this piece, I’m not interested in listening to the results of the performance as an independent musical object. It’s about the experience of being a part of the performance, of having a stake in how it progresses, of working with and competing against fellow audience members to exert influence upon the piece.

There were some problems, though, with the interactive design, and because of them, many audience members felt they had little influence over the music; the video analysis data corroborated their impressions. This was not a technical problem. It was a design problem.

I made some tweaks for a subsequent performance in Israel last spring, and the results were much better. We reduced the size of the audience from 600 to 200 people and the orchestra from 25 down to 15 members. We sat everyone in an almost theater-in-the-round configuration, so it was easier for audience members to see each other. And we asked the audience to wave their sticks back and forth rather than turn them on and off (and changed the algorithm to track that). In that performance, audience members stood up to conduct their peers and they made coordinated changes to their movements as groups. And the video analysis data we collected showed that their activities had dramatic effects on the algorithms that generated the music.

Helen: You also worked on Auracle – an “instrument” conceived by Max Neuhaus and realized collaboratively by you, C. Ramakrishnan, Kristjan Varnik, Phil Burk, and David Birchfield. What is “Auracle” and how does it work? To your knowledge, has it been successful in engaging a broad public with sound?

Jason: Auracle is a collaborative, voice-controlled networked instrument; it’s inspired by Max’s radio call-in works from the ‘60s and ‘70s. You can think of it as a shared sonic environment, or a virtual aural space, or a mediated voice chat. Participants log on and start making sounds with their voice, any sounds at all. Their voice is analyzed, and the analysis data is used to drive a software synthesizer. They hear the sound generated by their own voice along with those of other participants in their “jam session.” The idea isn’t to maximize the similarity between vocal input and audio output; if we did that, we’d just have another Skype. Rather, we wanted to give participants enough control over their instrument that they could be creative and express themselves, but still keep the relationship complex enough that it would continue to interest them, and to push them towards new areas of expression, over longer periods of time.
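That design principle — related to the voice, but deliberately not a 1:1 mapping — can be illustrated with a toy feature-to-parameter mapping. Everything here is an assumption for illustration (the function name, the whole-tone quantization, the parameter ranges); it is not Auracle's actual mapping.

```python
import math

def voice_to_synth(pitch_hz, loudness, brightness):
    """Map voice-analysis features to synthesizer parameters.
    Hypothetical sketch: the output tracks the voice, but quantization
    and compression keep it distinct, as the interview describes."""
    # fold the detected pitch into one octave above A2 (110 Hz),
    # then quantize to a whole-tone step so melody is only loosely followed
    semitones = 12 * math.log2(pitch_hz / 110.0) % 12
    quantized = round(semitones / 2) * 2
    osc_freq = 110.0 * 2 ** (quantized / 12)
    # compress dynamics so quiet voices still sound
    amp = min(1.0, loudness) ** 0.5
    # brighter (noisier) vocal timbre opens the filter
    filter_cutoff = 200 + brightness * 4000
    return osc_freq, amp, filter_cutoff

print(voice_to_synth(220.0, 0.25, 0.5))  # → (110.0, 0.5, 2200.0)
```

A mapping like this rewards exploration: a participant can clearly steer the sound, but cannot simply reproduce their voice, which is the balance the project aimed for.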

While the project did succeed in engaging participants in a fun and creative musical exchange, it has unfortunately failed to attract a critical mass of users. The project’s success is ultimately dependent on multiple people using Auracle simultaneously to play together in real time, but because of our small user base such encounters are rare; most people use Auracle alone.

There are lots of reasons I can point to: the site’s visual design is a bit stark and uninviting; many people don’t have microphones (or don’t realize their computers have built-in mics); the software requires a somewhat tedious installation of a browser plug-in to operate; and the sound world of the project is not as accessible to a broad public as it could be. But ultimately, I think we need to look at the assumption behind the project itself, that it could attract a critical mass of users so that participants could always find other people online. I now wonder whether this is ever a reasonable assumption to make with an obscure artistic project with no corporate backing or marketing budget. Why not instead focus on projects that work equally well whether they attract 50 users per month or 50,000 users per month? I guess I’m becoming a pragmatist: if I spend months developing a project, I want participants to be able to experience its full potential regardless of external factors (like usage stats) that are largely out of my control.

Helen: One of your more recent works, Graph Theory (with Patricia Reed and Maja Cerar), approaches collaboration with your audience in a unique way, by connecting composition, listening and concert performance to an acoustic work for violin or cello and an interactive website. Tell us what users can do in this work and how what they do impacts subsequent performances of the work? Have you had many visitor contributions? Where have the works been performed?

Jason: The web component of Graph Theory is a kind of “choose your own adventure” (or hypertextual) musical structure. There are about sixty short, looping musical fragments for solo violin, and each fragment is linked to three or four other fragments of music. On the web site, users navigate among these fragments to create their own unique path through the music. Each decision they make is logged, and every night, the server tallies up the votes, so to speak, and creates a fixed, linear path through the composition based on the decisions users have made. The server then turns that fixed path into a traditionally notated musical score that violinists can print out from the web site, practice, and perform in concert (without any technology involved).
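The nightly tally can be sketched as a greedy walk over the fragment graph: count how often users followed each link, then, starting from an opening fragment, repeatedly follow the most-voted outgoing link. This is a minimal sketch of the idea as described, with hypothetical names; Graph Theory's actual server logic is not published here.

```python
from collections import Counter

def tally_path(choices, start, length):
    """Build a fixed linear path through linked fragments by following,
    at each fragment, the outgoing link users chose most often.
    `choices` is a list of logged (from_fragment, to_fragment) clicks."""
    votes = Counter(choices)  # how many times each link was followed
    path = [start]
    current = start
    for _ in range(length - 1):
        # candidate links out of the current fragment, with their vote counts
        outgoing = [(dst, n) for (src, dst), n in votes.items() if src == current]
        if not outgoing:
            break
        current = max(outgoing, key=lambda pair: pair[1])[0]
        path.append(current)
    return path

clicks = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "D"), ("B", "D"), ("B", "A")]
print(tally_path(clicks, "A", 3))  # → ['A', 'B', 'D']
```

The resulting path is then rendered as a conventionally notated score, so the performing violinist needs no technology at all — the network's influence is frozen into the printed page.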

In the six months since the web site launched, we’ve logged about 6,500 user sessions on the site. That stat isn’t on the same order of magnitude as some of my other works, but since the piece isn’t dependent on reaching a huge critical mass of users to succeed, I’m perfectly happy with these numbers. And the piece has been getting performed in concert a lot lately, too, in a variety of venues (art galleries, concert halls, clubs) in several cities across the U.S.

Helen: What are you working on now? And are you still interested in collaborating with your audiences?

Jason: I’m currently working on a new piece called Flock for saxophone quartet, live video, and audience participation. It’s a collaboration with video artist Liubo Borissov, computer scientist Frank Dellaert, and several talented students at Georgia Tech: Mark Godfrey, Dan Hou, Justin Berger, and Martin Robinson. We’ll be premiering it in December at the new Carnival Center for the Performing Arts in Miami, during Art Basel / Miami.

This piece follows directly from Glimmer and tries to address a number of limitations and frustrations from that work. The people-scale is smaller (saxophone quartet + 75 audience members) but the time-scale is larger (a full evening).

The interface is more physical. The audience and the musicians get out of their seats and move around the performance space; ceiling-mounted video cameras and computer vision software track their locations and use that data to generate scores for each saxophonist, which are beamed to them on PDA displays mounted on their instruments. I’m also working with jazz musicians and taking advantage of their strong improvisational skills; the scores use graphical and conventional notation to guide their playing but do not always dictate each and every note to play.

Helen: How do new technologies lend themselves to your mode of composing/performing?

Jason: New technologies add more sophisticated tools to enable more flexibility in the ways I transform and mediate the connections between audiences, performers, and composers. I can create new kinds of interfaces for untrained musicians to make musical contributions. I can link the aural and visual domains. I can design algorithms to mediate the connections among participants. And I can move collaborations beyond a single place and a single moment in time.

Helen: What do you hope to teach young composers?

Jason: I try not to come in with any specific agendas when I work with composition students. I hope to help them acquire the technical and musical skills to write the music they want to write, to help them develop their capacity for self-criticism, to increase their awareness of the historical context of their work, and to push them to set ambitious goals that stretch the boundaries of their artistic practice.

Helen: Is it important that young composers study computer science to be able to design their own software?

Jason: Most colleges require students to fulfill certain course requirements in mathematics, foreign languages, English, etc. I think that all students — not just composers — should be taking basic coursework in computer science as well (and I teach at a university that happens to share that view). It’s important to understand the basics of how the software and hardware we depend upon works.

Only a handful of composers (e.g. Harry Partch) have needed to build musical instruments to realize their music; the rest of us just study orchestration and work with the instruments already in existence. Similarly, I think that only a handful of composers need to design their own software to realize their compositions; most people will find commercial software that meets their needs.

Just like with conventional instruments, it’s important to have a kind of “orchestration” study for these tools: to understand the basics of their inner workings, to learn what is easy and difficult to accomplish with each, and to develop creative and personal approaches to each tool. I get worried when I hear a piece of music and immediately know which tools were used to create it, because the composer just used some basic default settings or adapted some tutorial files and presets.

Helen: Are programming skills becoming more important than mastering musical instruments?

Jason: I’m not sure I see such a distinction between the two; live (or on-the-fly) coding is one example of how programming a laptop and performing on a traditional instrument aren’t so different from one another. It’s important to master an instrument, whether piano, electric guitar, Live, or Max/MSP. But mastering musicianship is still what matters most. The most important training I ever had was in species counterpoint; it taught me how to think about music in the horizontal and vertical dimensions and helped me develop the musical ear that guides me in everything I do.

Helen: How do you respond to skeptics who are critical of computer software, or of programmers replacing composers?

Jason: Most of the skeptics I talk to aren’t upset by the use of computer software or programming in and of itself; in fact, many of them are artists and composers who use technology in their own work as well. What upsets them about my work is the way I give up control over the creative process to people who are not necessarily trained musicians and are often complete strangers.

My response to these critics is to clarify my focus in many of these works: the creative process rather than the creative product. Many of the most exciting, fulfilling, and spiritual experiences of my life have been about creating and performing music. I am trying to share the experience of those moments, not the music that resulted from them, in my own works. Charles Ives put it more eloquently than I could:

“Once a nice young man…said to Father, ‘How can you stand it to hear old John Bell…sing?’ (as he used to at Camp Meetings) Father said, ‘He is a supreme musician.’ The young man (nice and educated) was horrified — ‘Why, he sings off the key, the wrong notes and everything — and that horrible, raucous voice — it’s awful!’ Father said, ‘Watch him closely and reverently, look into his face and hear the music of the ages. Don’t pay too much attention to the sounds — for if you do, you may miss the music.’”

Helen: How important is file sharing and the Internet as a distribution mechanism? What impact do you think this might have on the future of music?

Jason: The first wave of file sharing on the Internet was fascinating in what it made possible but boring in how it was primarily used (to swap pop songs). That was one of the points I wanted to make with my software artwork N.A.G., which auralized search results on the Gnutella network as they were downloaded in real time.

More recent online services such as Pandora are going in much more interesting directions. They are starting to harness these huge databases of digitized music and listening habits to help people discover new music, to generate coherent playlists to match their musical taste and listening context, and to connect like-minded listeners together into online communities.

I hope that services such as these will eventually counterbalance the effects of commercial radio, big box stores, and large advertising budgets on the music we listen to, that they will level the playing field in music distribution between major labels, independent distributors, and self-produced artists. I think the balance of power has already started to shift just a little bit.

Helen: How do you approach traditional performance venues?

Jason: I don’t have a set approach; it varies depending on the project. You know, every once in a while I just write a piece for conventional instruments in concert performance, with no technology involved.

Helen: Are some audiences (North American, European etc.) more receptive to your work than others?

Jason: I haven’t noticed major differences across continents or other constituencies.

Helen: What would you say is the main difference between composing in the 21st century and, say, the 19th century?

Jason: I have this notion (probably over-simplified) that 19th-century composers had common-practice tonality to rely upon. Certainly, they could push against (and sometimes succeed in extending) the boundaries of what the system allowed, but there was a single system that Western composers worked within.

Now, there is no dominant paradigm; we are a world of endless niches. You can write for orchestra or electric guitar or electroacoustic sound; you can write tonal music, atonal music, concrete music, or noise music; you can compose music for concert performance or sound installations or direct to audio file; and you can work within just about any stylistic or aesthetic world you can imagine. It’s a daunting array of choices and it can easily paralyze you as a creator. Defining your own limits and designing your own system within which to work is now a significant part of the challenge of creating new work.

Helen: Who have your chief influences been?

Jason: Stylistically, I’ve always been most influenced by Charles Ives, who had an uncanny ability to combine the humorous and the spiritual in his work, and whose collage techniques with traditional instruments preceded electronic cut-ups by several decades. Morton Feldman and Steve Reich are also big stylistic influences. I had the same epiphany as many composers of my generation, hearing Reich’s Music for 18 Musicians for the first time as a teenager and realizing that I wanted to write music myself.

In terms of approaches to technology in music, John Cage’s Imaginary Landscapes, Max Neuhaus’ installations and radio works, Alvin Lucier’s I Am Sitting In A Room, and Tod Machover’s vision of participatory music have all been important to me.

Helen: Which artists do you admire and why?

Jason: Too many to name here, of course, but I’ll try to pick a few. I admire Amy Alexander’s use of humor and technology to explore complex issues about how we depend upon technology and how it shapes us. I think Peter Edmunds’ Swarm Sketch is a wonderful example of collaborative creativity in the visual realm. Brian Whitman and Luke Dubois are both, in very different ways, creating haunting and beautiful algorithmic distillations of pop culture. And sound installations by artists such as Trimpin and Tim Hawkinson inspire me in the ways they foreground the physicality and fragility of sound and technology.

Thanks Jason!

Mar 11, 2007
