Networked_Music_Review

Interview: Scot Gresham-Lancaster

Scot Gresham-Lancaster is a composer, performer, instrument builder and educator. He is dedicated to research and performance using the expanding capabilities of computer networks for musical and cross-disciplinary expression. He studied with Philip Ianni, Roy Harris, Darius Milhaud, John Chowning, Robert Ashley, Terry Riley, “Blue” Gene Tyranny and Jack Jarret, among others. Gresham-Lancaster has been a composer in residence at Mills College, and he has been developing new families of controllers at STEIM, Amsterdam. He has toured and recorded as a member of the HUB and has performed the music of Alvin Curran, Pauline Oliveros, John Zorn, and John Cage under their direction. Gresham-Lancaster has also worked as a technical assistant to Lou Harrison, Iannis Xenakis, and David Tudor, among many others. http://o-art.org/Scot; http://myspace.com/scotgl; blog: http://scotgl.blogspot.com/.

Helen Thorington: Welcome Scot. You were a member of the computer network band, the HUB, and an early pioneer of computer networked music. Tell us about the HUB and the kind of work you, John Bischoff, Tim Perkis, Chris Brown, Mark Trayle and Phil Stone did at that time.

Scot Gresham-Lancaster: The first Computer Network Music grew out of an underground new music scene that developed around the San Francisco Bay Area, most specifically around Mills College and the Center for Contemporary Music. There are several resources that might be of interest to readers who want to investigate the historical background. One of the progenitors of this work, the late Jim Horton, was crippled with arthritis at the end of his life, but he spent his last years collating a “History of Experimental Music in Northern California” that covers a lot of the material leading to what eventually emerged as the early HUB. That history is housed on my website at http://o-art.org/history. Of particular interest are Jim Horton’s biographical notes alluded to on the front page of the site. They really set up a context for understanding where the early inspiration for computer network music came from.

The earliest work in this area was done because of the availability of the first affordable single-board computers in the mid-1970s. For the first time, individual composers who were unaffiliated with university computer centers could get access to and program computers for making music. This was mostly done in assembly language, with square waves output from the parallel ports of these early computers. The first group in this genre was the “League of Automatic Music Composers,” with Jim, John Bischoff and Rich Gold in the first year and Tim Perkis joining soon after. Don Day and David Behrman were also involved in some of the early concerts. Rich Gold is another early figure that I would like to take a minute to acknowledge: he was a brilliant theorist and thinker who, like Jim, passed on far too early. A rare unpublished last manuscript of Rich’s, “The Plenitude,” will soon be available from MIT Press under the full title “The Plenitude: Creativity, Innovation & Making Stuff (Simplicity: Design, Technology, Business, Life)” by Rich Gold, with a foreword by John Maeda. Rich was distributing this as a CD-ROM business card in the last year of his life and it’s a fascinating read.

An excellent history of the entire development of the first two generations of the HUB was written by Chris Brown and John Bischoff (with editorial and content support from the rest of us HUB members) as part of the “Crossfade” project, funded by ZKM, SFMOMA and the Goethe Institute, and housed as part of the Walker Art Center website.

hub2.png
HUB publicity photo from 1985

The early HUB work was very much like what is now called “laptop music,” but laptops didn’t exist yet. I still have back troubles directly related to carrying around two 60 lb. flight cases on our early European tours. We used MIDI and hand-built electronics for sound production. Early on we all used various personal computers: John, Mark, and Phil used Amigas, Chris and I used the old beige Mac, and Tim used a Radio Shack TRS-80 for a while before becoming the first to get a Toshiba laptop. From the period of the first HUB gigs in 1985 until 1990 we used two Synertek 6502-based single-board computers that talked to each other over RS-232 at 300 baud. We all had a shared “scratch pad” of memory in which we could retrieve and alter numbers from 0–255. A whopping 1K of memory was all we had. If you go to the hub.artifact.com site you can hear the first album, which was made using this technology.
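For the technically curious, here is a minimal sketch of that shared scratch pad idea in modern Python. The original was 6502 assembly traded over 300-baud RS-232; the class and the names here are mine, purely for illustration.

class ScratchPad:
    """Emulates the first HUB's shared blackboard: 1K of byte cells
    that every player could read and alter."""
    def __init__(self, size=1024):            # "a whopping 1 K of memory"
        self.cells = bytearray(size)

    def poke(self, addr, value):
        self.cells[addr] = value & 0xFF       # values constrained to 0-255

    def peek(self, addr):
        return self.cells[addr]

pad = ScratchPad()
pad.poke(0x10, 60)        # one player leaves a value behind...
pitch = pad.peek(0x10)    # ...another retrieves it and reinterprets it musically
print(pitch)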

In 1990, guided by Tim Perkis’ insight, I hacked an Opcode Studio V MIDI interface to make a MIDI-based HUB. In this context we didn’t have a shared memory, but were dynamically trading messages in real time over MIDI. If I got a message on MIDI channel 2, I knew it was from John; from channel 5, it was Phil; and so on. Our second CD, “Wrecking Ball,” was done using this technology. For seven years we toured all over the place, making all sorts of new work and investigating the techniques that this new way of organizing music allowed, but by 1997 we had sort of played it out. It had stopped being as fun and seemed a little too much like a chore to do HUB music. Also, concert promoters always wanted us to play in separate sites over the wire, and that really missed the point of what was truly interesting: the way the music behaved in a closed network on the same stage, which is an entirely different thing.
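A sketch of that channel-per-player convention, using the modern mido library as a stand-in (the port name and the channel assignments here are hypothetical; we were certainly not running Python in 1990):

import mido

# The MIDI channel identifies the sender. mido numbers channels 0-15,
# so "channel 2" in the text is channel 1 here.
PLAYERS = {1: "John", 4: "Phil"}

with mido.open_input("HUB interface") as port:      # placeholder port name
    for msg in port:
        if msg.type == "note_on":
            sender = PLAYERS.get(msg.channel, "someone else")
            print(f"{sender} sent note {msg.note}, velocity {msg.velocity}")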

In ’97 we all sort of went on to do our own projects. I was doing telematic performances with dancers and remote musicians; Chris Brown made a series of network pieces where he provided the technology and got people to play his code; Tim was breaking out as the first-call electronic music improviser in the burgeoning SF Bay improvisers’ scene. (Here’s a video of NOISY PEOPLE, a feature-length video documentary on this tightly-knit group of unusual sound artists and musicians from the San Francisco improvisational music community.)

Mark Trayle was down at CalArts in Valencia and became interested in using audience members’ credit cards for musical control. Tim, Chris and I did “fuzzybunny” for a while, the “carnellectual trio” as one French critic called us. It was sort of live plunderphonics à la John Oswald.

fuzzy.png
fuzzybunny Photo manipulation by Tim Perkis

John Bischoff was doing his usual brilliant solo work, and Phil was working with the Laura Pawel Dance Company in NYC. In other words, we all sort of went our own way in 1997. It seemed like it was over.

Helen: But the HUB has been performing recently, hasn’t it? I believe there was a HUB tour in 2006. Do you still make use of the same ensemble model? i.e., do you still interconnect your computers and each of you perform different parts of the composition and react to one another?

Scot: Actually, in 2004 Guy Van Belle, another amazing network music artist from Belgium and Amsterdam, contacted us to reform and do a headline performance at the Dutch Electronic Arts Festival, DEAF 2004. We decided to give it a shot, since a lot of new stuff had happened and we had become interested in expanding the technology. The biggest innovation of this third-generation HUB is using CNMAT’s Open Sound Control (OSC) as a way to communicate musical information with agreed-upon arguments, such as /myname /yourname /parameter (some floating-point number). That translates to /Chris /Scot /freq 440.045, which would be Chris telling me that he was giving me a frequency argument of 440.045 cycles per second. This was so much hipper than MIDI because it was open. Of course OSC has become very popular over the last few years, but in 2003 and 2004, when Chris Brown and I were first working with Matt Wright from CNMAT to do this, it was absolutely pre-beta … as usual for us. Chris Brown was working on a new piece for the nascent ReacTable. As part of that, Chris discovered Ross Bencina’s OSCgroups. This was a great help because the server basically fills in all the addresses of every participant and saves the trouble of needing to get everyone’s internet address in order to get OSC messages from them. It is a great time saver. Anything to get directly to making music is a great help.
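For readers who want to see what this looks like in practice, here is a minimal sketch using the python-osc library (the library choice, host address, and port are my own; only the address convention follows the description above):

from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Chris sending Scot a frequency argument, per the convention above:
client = SimpleUDPClient("127.0.0.1", 9000)        # Scot's machine (example)
client.send_message("/Chris/Scot/freq", 440.045)

# On Scot's machine, register a handler for that address and listen:
def on_freq(address, *args):
    print(address, "->", args[0], "Hz")            # /Chris/Scot/freq -> 440.045 Hz

dispatcher = Dispatcher()
dispatcher.map("/Chris/Scot/freq", on_freq)
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()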

We got it to work, and so the third-generation HUB was all internet based, using OSC for message passing. In line with both Chris’ and my previous work using performers at a distance, Tadashi Usami from Tokyo and Doug Van Nort from Montreal joined in on a piece using SuperCollider that Chris had put together, and Jean Marc Montera from Marseilles, of GRIM (Groupe de Recherche et d’Improvisation Musicales), joined in on my piece “Noosphere.” The Global Consciousness Project has been collecting data from a global network of random event generators since August 1998. The network has grown to about 65 host sites around the world running custom software that reads the output of physical random number generators and records a 200-bit trial sum once every second, continuously over months and years. The data are transmitted over the Internet to a server in Princeton, NJ, USA. The piece accesses the network via the Internet and instantaneously generates sound from, or sonifies, the current state of the network.
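As a toy illustration of that sonification idea (this sketch does not touch the real Global Consciousness Project feed; random bits stand in for it, and the pitch mapping is invented):

import random

def trial_sum():
    """One 200-bit trial sum per second; chance expectation is 100."""
    return sum(random.getrandbits(1) for _ in range(200))

def to_frequency(s, base=220.0, cents_per_bit=25.0):
    """Map the deviation from chance onto pitch around a base tone."""
    return base * 2 ** ((s - 100) * cents_per_bit / 1200.0)

for _ in range(5):                                # one reading per "second"
    s = trial_sum()
    print(f"trial sum {s:3d} -> {to_frequency(s):7.2f} Hz")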

In June of 2005 we were invited by Carsten Seifarth to present a workshop at the newly formed Tesla new media laboratory at the site of the old Podewil, the famous free improv workshop of the burgeoning East German improvised music scene before reunification. There we had the opportunity to fine-tune the way this OSC mechanism would work and to really work out the new technology.

hubnow.png
HUB in Berlin 2005

One of the interesting developments was the resurrection of my first HUB piece, “Vague Notions of Lost Form.” The instruction for this piece is essentially “make music while carrying on a chat with the other players.” When we first did it on the first HUB in the ’80s, there was no such thing as chat rooms or chat clients, so we had to write them ourselves. When we played in Berlin in 1993 there was an interesting comment that stuck with me: “Where was all the sound coming from? You all looked like 6 air traffic controllers.” So now, in the era of stage-sized video projectors, we carry on an ongoing chat with each other that is projected for the entire audience to read as we play the entire set. We comment on the music, decide when to move on to the next piece, and generally make comments as we play. One time, during our European tour in June 2006 in Padua, Italy, noted musicologist and curator Veniero Rizzardi did on-the-fly criticism and explanations in Italian for the audience as we played.

The fact that we have always called ourselves a “band” and not an “electroacoustic ensemble” is telling about our aesthetic. This has always truly been a collective, and my opinion is that the break we took in 1997 is probably directly related to the fact that we had gotten away from that spirit of cooperation, which is essential for making network computer music. At our recent performance at NYU in November of 2007, the Electronic Music Foundation head and our co-sponsor for the engagement, Joel Chadabe, was the first ever to raise the question of whether the use of this chat mechanism was a distraction from the music itself. It was an interesting idea to engage with: would it be an improvement to leave the audience completely in the dark about what was going on and what those six guys up there were doing? Electronic music is inherently discorporate, and therefore, I believe, extraordinary techniques like using a live chat to pull the crowd in, while less formal than a lot of “classical music” contexts, are needed to give the audience at least a hint as to what processes are in play and therefore give them a more fulfilled experience. When I had the rare opportunity to work with Iannis Xenakis in the early ’80s, I will never forget a moment when he walked up to one of the 10 loudspeaker clusters we had put in place for his performance. With his hand on the loudspeaker, he said, “This is the problem with Electronic Music.” I see the HUB and other related projects as a way to address this problem of the disembodiment created by using loudspeakers.

Anyway, we have been touring at least twice a year since 2004. Our next performance will be in Budapest, Hungary, as part of “Music in the Global Village,” the first international conference dedicated exclusively to network music composition and performance, to be held September 6–8, 2007. John Bischoff is the keynote speaker, and it is really great to see him finally getting some of the recognition he deserves as a progenitor of network music.

Also, we recorded the material for our third CD and it has just been mastered; it is ready for release later this year. I am very excited about this CD; it is truly singular. Because we stick to the procedures of our pieces so fully, the music has a very distinct quality that can only be achieved with the specific network procedures we have developed and refined.

Helen: More recently you have performed in a series of distributed or co-located performances, where you collaborated with distant (other-located) dancers, video artists and musicians in networked performances. How does this differ from the HUB experience?

Scot: In July 2000 we did our first big experiment in this area, “incubator:how2gather,” at HyperWerk in Basel, Switzerland and CSU Hayward, USA, with William Thibault, Scot Gresham-Lancaster, Iren Schwatz, Kathryn Gresham-Lancaster, and Sam Ashley (NY) by phone. We used Bill Thibault’s dance gesture capture software “Grabbo” on a “Be” machine to send MIDI information from Basel to me in Hayward. We were using an early Linux system and the Robust Audio Tool (RAT) to send back the audio I was generating with the MIDI data I received. We were also using VIC to send video back and forth, and this got us thinking about doing more of this sort of thing.

scot_k.png
Kathryn and Scot Gresham-Lancaster in Basel from California, 2000

What is interesting about the HUB is the interactions generated by the local network and the way specific instructions create very specific classes of music. It is highly procedural, and in that sense is a direct descendant of the early work of Cage, Tudor, and Behrman. Performance at a distance, on the other hand, is an entirely different aesthetic. Ultimately, after doing these performances and thinking about it over the years, it seems to me that it has a direct relationship to the general musical category of “antiphonal music,” where musicians perform in ensembles across a distance. St. Mark’s in Venice and the composers Gabrieli and Schütz come to mind. When you use the Internet to interconnect two or more spaces, because of light-speed delays and general net traffic, the two sites can be thought of as being at a great distance from each other. I have often said, “There is no downbeat when doing music on the Internet.” The first beat of a bar is going to be delayed before arriving at the distant site, so you must create a musical context that deals with that fact.

This set of problems is fundamentally different from those we were dealing with in the HUB. The real excitement about this work was most evident when we introduced the dance element into the equation. Then there was something new and interesting. The distant dancers were video objects at the remote site. This was an extension of the discorporation that is present with the loudspeaker. Since the dancers are real on both ends but appear as video objects projected in the remote space, aspects of scale, focus, video feedback, etc. come into play, and the intimacy of the distant reality becomes more engaging, since the action and counteraction between the two sites is tangible.

This work was an offshoot of my access to the very high speed Internet2 network at CSU Hayward. I had also worked with Pauline Oliveros on all sorts of projects for over a decade. In the mid-’90s we had done a realization of her conceptually brilliant work “Echoes from the Moon,” and I had coordinated the technology to make it possible for individuals to use ham radio signals to bounce their voices off the Moon. At light speed that is about a 2.5 second round-trip delay, and there is something magical about knowing your voice has just reached out and touched the Moon. A full description of that piece is on Pauline’s site.

On October 29th, 2002 we did a series of pieces coordinated with the Internet2 conference at UCSC. One piece involved a live dancer at UCSC behind a shadow screen, an ancient tradition, with a remote dancer in northern California directly affecting the audio playback with her movement. Later that same evening, Mills College choreographer June Watanabe and trumpet player Jay Rizzetto performed in California with Pauline at RPI in Troy, NY. We were using a lot of equipment to pull this off.

studio.png
Shot of the equipment needed for Exposition in 2002

In 2002 the way to get the signal between the sites was to use very expensive real-time MPEG-1 encoders from Tandberg. At $6K for each end, this was certainly work that could only be done in association with a major university or corporation. Neither of those settings offers the sort of independence that experimental work requires, so it was nice to have this opportunity to investigate.

In 2003, Bill Thibault and I at CSU Hayward started working together with Pauline Oliveros and Brian Lonsway at RPI to create shared virtual architectural space for dancers and musicians: a collaborative Internet2 performance between vocalists, musicians, dancers, designers, and audience members exploring the mediated space of live internet performance. Performers from Rensselaer Polytechnic Institute in Troy, NY and Mills College in Oakland, CA performed in a common visual and acoustic space by virtue of the high-speed internet, the Synthetic Space Environment (a research project of Rensselaer Polytechnic Institute), and the interdisciplinary Center for Immersive Technology (iCIM) at California State University, Hayward.

Finally, by late 2003 we began to try to use the iChat client to do a series of pieces with the Montevideo center in Marseilles, RPI, and iCIM. This was a major breakthrough for us, because all the technicalities of using transcoders just went away. Patrick Laffont in France introduced me to the concept of using the big projection backdrop as a delayed time frame trading back and forth between sites, like a giant barbershop mirror where each frame reflects the time lost between each site, going further and further back in time.

There were three of these “AB_Time” collaborations, culminating in a performance opening the NIME 2005 conference at UBC in Vancouver, BC. If any readers are attempting this sort of work, make your arrangements at all ends of the planned performance and have the technology in place. In my experience, the technical staff associated with a theater often think they already know everything they need to know to put on a show there, but this work only exists when everything is functioning: the video scale is set, the internet connection is up, and so on. There is no reason a theater technician would ever have experienced the extra work needed to make this happen. It takes a lot of extra preparation to get everything in place, and I have repeatedly met with resistant technical aides.

Helen: Are you involved with the technology of distributed performance during performances? What technologies are you using? RealAudio? Quicktime? Other?

Scot: I have used many different technologies to stream audio and video over the internet. In the last couple of years I have been using the open-source Icecast server in conjunction with DarkIce, but lately I have found that you can get the same functionality with a much larger footprint (i.e., more users) with Flash. Since finding that out I have been doing everything in Flash Communication Server or using Jeroen Wijering’s media player. The major advantage of this technology is that 98% of browsers nowadays are Flash-enabled, so there is no problem with compatibility.
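For anyone who wants to try the Icecast/DarkIce route, a minimal DarkIce configuration looks something like this (the server name, password, and mount point are placeholders, not from any real setup):

# darkice.cfg: capture the sound card and push an MP3 stream
# to an Icecast2 server.
[general]
duration      = 0           # 0 = stream until stopped
bufferSecs    = 5

[input]
device        = default     # sound card capture device
sampleRate    = 44100
bitsPerSample = 16
channel       = 2

[icecast2-0]
bitrateMode   = cbr
format        = mp3
bitrate       = 128
server        = streams.example.org
port          = 8000
password      = hackme
mountPoint    = live.mp3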

Lately, I have been digging into the open-source content management system Drupal, and it shows real promise as a way to coordinate a social-media-like network music performance. It is made up of modules that have different types of functionality. Currently, I am porting Steve Bull’s and my ongoing “cellphone opera” framework over to Drupal, in an attempt to make a type of streaming module that points callers instantly back to the addition they have made to the opera via telephone.

Helen: As a performer, how have you dealt with lag and other network issues – such as the loss of packets, extra-musical sound etc.? Have you tried to coordinate your performances, to follow a score or follow a beat, or have you mostly relied on improvisation?

Scot: I addressed some of these issues earlier, but it is an ongoing problem. I saw a recent performance at Stanford CCRMA that really deserves the attention of everyone in this field. Since Fall 2006 they have been using Internet2 with CD-quality audio in 8 channels, supported by Chris Chafe’s JackTrip software, Jonas Braasch’s ViMiC, and Pauline Oliveros’ Expanded Instrument System (EIS). I would characterize this project as the current state of the art. They are experiencing bi-coastal delays of around 400–700 ms. If you put that in terms of the speed of sound at sea level, about 1100 feet/second, and translate it to distance, this is something like playing in the back of a very large marching band: it is as if you are roughly 440 to 770 feet away from each other. This, again, is why I referred to “antiphonal” music before.
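The latency-to-distance analogy is just one multiplication; here it is as a sanity check:

# Treat network delay as acoustic travel time at sea level (~1100 ft/s).
SPEED_OF_SOUND_FT_PER_S = 1100.0

for delay_ms in (400, 700):
    feet = SPEED_OF_SOUND_FT_PER_S * delay_ms / 1000.0
    print(f"{delay_ms} ms of delay ~ a player {feet:.0f} feet away")
# -> 440 and 770 feet: the back of a very large marching band.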

I did a piece this June called “Extraordinary Rendition,” in a vain attempt to take the language of humanity back from the evildoers in power right now. It was a cycle that went through the chord changes of Coltrane’s “Giant Steps” in all 12 keys. I wrote a Flash script that had a bouncing ball under the changes, so when we rehearsed online, we just stayed in the same zone with each other. This seems like a technique that might work for musicians on into the future. Interested parties can try out this new form of dynamic score at: http://o-art.org/Scot/ER. Ultimately, all that is really musical is improvised, in my opinion. That does not mean there are not major procedural constraints that can be put in place, but what separates Glenn Gould’s playing from another pianist playing the same Bach prelude is something that is ultimately improvised.

That being said, what is most interesting about my work, and the work of all the amazing people I have had the opportunity to work with, is the construction of the new kinds of constraints that using a network to create new types of work presents.
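To make one such constraint concrete, here is a toy version of the “Extraordinary Rendition” score logic: cycle a set of changes through all 12 keys so that distributed players stay in the same zone. Only the opening bars of “Giant Steps” are shown, and the original was a Flash movie with a bouncing ball, not Python.

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
OPENING = [("B", "maj7"), ("D", "7"), ("G", "maj7"), ("Bb", "7"), ("Eb", "maj7")]

def transpose(root, semitones):
    return NOTES[(NOTES.index(root) + semitones) % 12]

for key in range(12):                      # once through all 12 keys
    bars = [transpose(root, key) + quality for root, quality in OPENING]
    print(f"+{key:2d} semitones: " + " | ".join(bars))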

Today is June 26th, 2007, and so I decided to tune into the online webcast of the most recent work of Chris, Jonas, and Pauline: “Due to a server malfunction, this concert will not be broadcast this evening. Sorry.” How unusual … NOT!

This stuff is very hard to do. Hard like making a film or writing a symphony, but the tools and procedures are still not in place yet. I guess that is what makes it interesting for me.

Helen: Tell us about the recent convergence project in which you and 45 other musicians came together online to prepare for three concerts in June ’07. What do you mean when you say the concerts were “the proof of concept”? How did they go?

Scot: I was the Technical Director for this project, which was funded by the Music Fund of New York and administered by the Deep Listening Institute in Kingston, NY. It was proposed by composer/performer Vonn New, and in October 2006 they approached me with the idea of putting together a technology platform that would allow as many as 50 musicians worldwide to meet online starting in January 2007 and rehearse for concerts that would be performed in June of 2007 at three venues along the Hudson River. It was called the “Deep Listening Convergence.”

The biggest problem that anyone doing this sort of large-scale work faces is bringing those unschooled in network-based technology up to speed. At first, I built a Flash Communication Server unit that was web based and seemed easy to use, but when I tried it on Pauline and Vonn last fall they had nothing but trouble, and they are experienced users. Then we thought of using iChat, which we had used successfully before in similar situations where technically challenged users were on the other end of the line. However, after our initial survey, we found that something like 30% of the participants were using PCs, and iChat is Mac-only. So we settled on Skype as our transport layer. While it had a lot of help files and hand-holding on its site, I still made some Flash-based e-learning modules to get the inexperienced up to speed.

While we had budgeted a portion of the money for servers and software, I was able to find a plethora of online Web 2.0-style websites that really made this project possible. The online organizational tool Basecamp proved invaluable and is still being used as a means of communication among the participants on a daily basis, even though the concerts have come and gone.

I also used IMEEM, which is sort of a more sophisticated version of MySpace. Once we had those pieces in place, we bought a bunch of licenses for Call Recorder from http://ecamm.com as a way for Mac users to record their online rehearsals and post them on IMEEM. The PC users could use Pamela, which had a free version that could record MP3s of Skype calls up to 15 minutes long. Finally, I used 30 Boxes a lot as a way to coordinate meetings online and to put dates and times tagged “DLC” on the website calendar automatically.

The online rehearsal process was the real “proof of concept” aspect of this project, and it was very successful. It became more and more interesting as participants grew familiar with the new technology and figured out better ways of hooking up. One amazing find really changed the rehearsal landscape: at Weirdstuff Warehouse in Sunnyvale, CA, I found a lot of about 30 DSP-based speakerphone units made by Polycom but branded as Sony, with built-in 15-watt speakers, omnidirectional microphones, and a hefty feedback-suppression DSP algorithm. Just plug these puppies into line-in and headphone-out and it is like you are talking to someone in the other room, even if they are in Italy. Music worked fantastically over these things. The beauty was that they were $400 units but sold for $20, so we bought them all. This, of course, was Pauline’s suggestion. I never would have thought to buy all of them, but that was brilliant.

If this idea interests any readers: while those Polycom units are no longer available, I found another unit that is almost as interesting. The “AuzenTech VR-Fidelity” connects via USB and makes conference calls possible using instant messengers or VoIP services such as Skype. It works almost as well as the other.

The juxtaposition of the process of working online with these various ensembles and then getting together to play the music we had rehearsed was striking. The truth is that being on stage with live musicians is really quite different from playing together in a high-quality conference call. Ultimately, however, at least with the pieces I was involved with, the process of putting the live pieces together was really aided by the fact that we had worked out a lot of the musical vocabulary of each piece before we got together in person. This idea of rehearsing online with geographically distant performers for eventual live concerts was very successful, and we are continuing to work with it for future projects.

Helen: I understand you are also an instrument maker and have been developing new families of controllers to be used exclusively in the live performance of electroacoustic music. Among them (I think) is a power glove, which you and other HUB members have developed and use as a MIDI controller. How do these objects influence your music making/performing?

Scot: I made what I called a “gesture mapping glove” back in 1982, when I was the technical director of the Center for Contemporary Music at Mills College. That was a very interesting time for me. It was in this timeframe that I helped Jim McKee make his music-generating hang glider, which had ion sensors on the nose and wing tips and a single-board computer programmed by future Hubster Phil Stone. That was a great controller! We have all been building and rebuilding instruments for years now. In ’85 I built a large 32-channel set of computer-controlled “whackers” that hit the inside of a piano, sheets of metal, a bass drum, wood blocks, etc. After a concert at the San Francisco Art Council, the curator asked me if I “would hold up my little whacker for the camera?” I obliged …

While I worked on these new controllers on my own, I don’t think I would have gotten nearly as far along if it hadn’t been for the support of the STEIM organization in Amsterdam. They have been wonderfully supportive of a whole range of projects, and I would encourage any readers who have an interest in this area of research to make a proposal to STEIM. Director Michel Waisvisz, along with Nico Bes, Frank Balde, Robert van Heumen, Nic Collins, and Joel Ryan, have all been a great help in realizing many projects. It is such a great concept to create an environment where artists are aided in realizing their vision of new media devices with the help of world-class experts.

The new generation of microcontrollers has really opened up the possibilities for everyone. Lately I have been building very useful controllers for each of my various projects using both the Arduino board and Dan Overholt’s CUI project board.

Most recently, I have been hired by Auraphoto and I will be working on a commercial line of bio-feedback music controllers later this summer.

Helen: You’ve also been involved for some time in a project you call “The Tensegrity Harp Project,” in which you are researching “the harmonic relationships suggested by the geodesic, synergetic, and tensegrity math of R. Buckminster Fuller.” Tell us about this – where has your research taken you? (please don’t get too mathematical). What controllers have you developed?

Scot: This project has been in the hopper for years now and I am up to my 7th version of the instrument. While it started as a “tensegrity” structure, a rambunctious student who broke the single string on one of the early versions, costing me 4 hours of rewiring, disabused me of that design. Now it is an “octahedral harp,” with the hypotenuse of each of the isosceles triangles at the relative length of a just-intoned scale. Oops … a little mathematical, but it can’t be helped.

Master fabricator Jorgen Brinkman at STEIM was invaluable in helping take that project to the next level, and we continue to send email back and forth as new ideas are realized. We both agree that this work is informed by the concepts of the guilds and craftsmen and is my attempt at a manifestation of a form of “Sacred Geometry.” Unfortunately, it requires close to $1K of “profane money” to buy the raw material. Now that Jerry Falwell is dead, where will I turn?

harp.png
early photo of the HARP prototype at STEIM in 2004

Those who are interested, please check out: http://www.o-art.org/Scot/Harp_project_summary.htm. Donations gladly accepted.

One major work that I was able to do using an early version of the HARP was a piece for cello quartet and electronics called “In the Unlikely Event of a Water Landing.” I had met and worked with David Tudor over the years, and we happened to be at STEIM at the same time in 1994. He gave me permission to use his neural net synthesizer when I got back to California. What a fabulous instrument! Anyway, I used the sound files that I generated with the neural net synthesizer in a Max-based instrument that was controlled by the HARP.

Helen: You have an ongoing cellphone “opera” project with Stephen Bull, “Cellphonia: WET”, that was recently re-instantiated as an installation at NIME 2007 in NYC. Tell us about it. Can it still be heard?

cellp1.png
Cellphonia Posters

Scot: Yes, once we start a “Cellphone Opera” we leave it on. http://cellphone.el.net/site/ is the new site I am starting to work on. We just missed a Rhizome grant that would have fully funded this ongoing work, so it’s a little like “La Bohème” at this point. I remain optimistic. It is a great project. Each new phone call is added to the mix, and therefore each call changes the output of a given opera.

The Cellphonia framework points to my current thinking about extensions to the concepts of network music per se. We have been running Cellphonia: San Jose since last August, without a stop, for over 10 months now, and every time someone calls in, the piece changes. So it is live streaming in real time and yet also out of real time, since the opera is made from the history of people who have called in. Our recent installation at the Eyebeam gallery in Chelsea as part of NIME 2007 is also still running. It uses fragments from Terese Svoboda’s libretto for the opera “Wet.”

This sort of dynamically interactive web-based work has been really interesting to me lately. Of course, like most of this stuff, it is almost impossible to monetize in any way without compromising its real strength and cultural independence. This is the problem with much interesting sound work: since it doesn’t exist as an object, people expect the experience to be free, like the radio or something.

It ties this all together that one of the original partners on this project was Hubster Tim Perkis, whose help really gave us the kick start to get this technology going. He and I have collaborated on projects for decades now and will keep doing so, but Steve Bull is the one who really got this whole thing going, and we are hoping to open it up to a more social media environment, where users can work together to create their own “cellphone opera.” Stay tuned …

Helen: Is there anything else you’d like to tell us about?

Scot: There are technical details surrounding your earlier interview with Miya Masaoka that I would like to address. I think she does really interesting work and I enjoyed working with her on the first realization of her Bee project. However, when she referred to “B code,” what she seemed unclear on was that this was the Ambisonic B-format, which I used to try to create a “soundfield” that would hopefully make the audience feel they were inside the beehive.

Next year I am hoping to revisit “Songlines” (or “Terrain Reader,” or maybe some other name), a piece I did with Bill Thibault in the late ’80s and early ’90s. It was an extension of the late, great Rich Gold’s idea of using digital elevation models of land as a compositional determinant. This was very interesting work, and I am hoping to have an opportunity to revisit it in the new context of a much more widely distributed and interconnected network of collaborators and kindred spirits, GPS, online accessible maps, etc.

Refer to…

“Experiences in Digital Terrain,” Bill Thibault and Scot Gresham-Lancaster, Leonardo Music Journal, Volume 7, MIT Press, 1997.
“Songlines.DEM,” Bill Thibault and Scot Gresham-Lancaster, Proceedings of the 1992 International Computer Music Conference, San Jose, Oct 1992.

I would like to leave the readers with a rather long piece of mine for Disklavier, pianist (me) and interactive computer. It is called “5 tones for Slonimsky,” and the recording is from a live performance on Carl Stone’s “Ears Wide Open” show on KPFA Berkeley in June 2002.

Thanks so much for inviting me and I really look forward to all the great work by everyone that your fantastic website will point me at.

Helen: And thank you, Scot, for a great interview.


Jul 7, 2007

9 Responses

  1. John Campion:

    Scot is a man for all seasons.

    john campion


  2. Scot Gresham-Lancaster:

    Thanks for your vote of support, John. I would point out that John is responsible for a very interesting new project, “Medusa,” that some readers may be interested in checking out and hopefully supporting:
    http://worldatuningfork.com/John/Poetry/Medusa.html


  3. peter:

    Hi Scot,

    Great interview. I’m curious to know a bit more about the procedures you mention for the latest HUB CD. My understanding has always been that the procedures, interrelationships, topologies, etc. that you developed for individual pieces were really the core compositional acts for those works, which then rendered the spaces in which the improvisations could occur. Given current technologies that allow the sharing of much more than just MIDI data, what kind of data are you actually passing between players in your HUB performances? Do you share audio data in realtime or is it mostly control data? What types of interrelationships on a group level have you found to work really well, and what types tend to fail (even though you may have expected otherwise)? Thanks!

    Peter


  4. Scot Gresham-Lancaster:

    Peter,
    Great questions! When I was doing the interview I wasn’t sure how fine a level of detail I should get into, but I am happy to answer these since they get to the real crux of the unique issues facing anyone trying to make this sort of music. I will address them one at a time:
    1.) what kind of data are you actually passing between players in your HUB performances?

    During the development phase of the new OSC-based protocol that we are currently using, we set aside 12 distinct descriptors for our common use. I don’t have all 12 right at hand, but off the top of my head they included /pitch /tempo /duration /timbre /density /amplitude etc. Mostly parameter-based. (A sketch of this kind of descriptor namespace follows below.)
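
    Here is a rough sketch of what receiving that descriptor namespace could look like with the python-osc library (the handler, the port, and the exact descriptor list are illustrative; only six of the twelve are named above):

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    state = {}

    def set_param(address, *args):
        state[address.lstrip("/")] = args[0]    # e.g. state["pitch"] = 62.0
        print(address, "->", args[0])

    dispatcher = Dispatcher()
    for name in ("pitch", "tempo", "duration", "timbre", "density", "amplitude"):
        dispatcher.map(f"/{name}", set_param)

    BlockingOSCUDPServer(("0.0.0.0", 9001), dispatcher).serve_forever()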

    2.) Do you share audio data in realtime or is it mostly control data?

    We did some experiments with this, and if we get an opportunity to do another research phase like we had at Tesla in 2005, this would be at the top of my list. At this point we are using information on the macro or note level and not getting down to the sample level, but this has always been something that I have wanted to work on and have experimented with in my own “virtual” testing hub configuration. The thing to realize is that there is a line between knowing what is changing because of your own contribution or “playing” of the data as it comes in, and what is happening as a result of the net activity. Even just using control information, if too many parameters are involved then it is impossible to make those sorts of distinctions, and your ability to be a part of what is shaping the piece becomes less clear.

    3.) What types of interrelationships on a group level have you found to work really well, and what types tend to fail (even though you may have expected otherwise)?

    As I was just saying, you need surprisingly few active parameters to make for a very engaging musical experience. I think the biggest failures have been when there was too much specificity to the piece description. This makes for something that is difficult and buggy to program and tends to be locked into a single sonic area defined by that very specificity. The pieces that work the best have very simple contexts but allow for there to be unexpected surprises that could only arise in the context of network interconnectivity. This gives the musicians a chance to mold their sound to the group and still be a part of what is happening in the overall network.

    It is surprising, and sort of hard to communicate, how different the performance of this sort of music is from other musical contexts. I am sure many have never played music in a context where you didn’t know if your instrument was going to sound at a given moment or, when it did sound, exactly how it would sound. This “emergent” quality is pretty singular to this sort of musical practice. The real trick is to find the balance between network interaction and more standard electronic music performance practices, whatever those might be, that allows the players to be fully engaged in the performance and yet surprised by the reaction of the overall interconnectedness.


  5. peter:

    hi Scot,

    thanks for the great answers. your answers to part three get at the issue i’ve been dealing with here at UVa for the last couple of years. we have an annual networked music performance called MICE (Music for Interactive Computers Ensemble). it is a live jam done by undergrads in our advanced computer music course. for the last two years (since I’ve been TA’ing the class) the jams have used around 5 - 7 network-connected macs running max/msp. this past year, i designed a patch that allowed each group to send three streams of data to the group in front and three to the group behind it (and receive three from each in return), the topology being a ring with data flowing in both directions. of course, it is really difficult to hear anything emergent happening because a) the students all build and run their own instruments which just use the network patch as a sharing conduit and b) most have little to no experience in improv, especially with this type of music. as such, it is important for us to learn from people who have been doing this for a lot longer, what actually produces interesting results. my personal belief has always been that even from the most simple systems, you can generate vastly complex and interesting forms just via the order of magnitude by which the possibilities increase each time a new node is added on the network. of course, i haven’t been able to test that yet but hopefully this year i will get the chance.

    so that all being said, do you think in terms of topologies when you build your systems? if so, have some topologies worked better than others? i’m wondering if you could maybe pick a piece that is a good example, and talk a bit about how the interactions work, what sort of data feedback there is between players, what kinds of rules or filters you institute, etc. hopefully this isn’t too technical of a question for this forum. thanks again!

    peter


  6. Scot Gresham-Lancaster:

    Peter,

    Who knows if this is too technical for the forum, but talking about this area of music is necessarily technical because, as you know, you need to build the instruments and infrastructures almost from scratch to even try these ideas out. That is a very technical task and requires really digging in. Some tools (Max, Pd, ChucK, OSC, OSCgroups, etc.) have made it easier than in the early days, but not that easy.

    > do you think in terms of topologies when you build your systems?

    This took me a minute to wrap my head around, because I think that the topologies are the result of the definition of a given piece: not the terms in which the pieces are defined, but rather the result of that definition. That being said, I think there are three basic topologies that have arisen.

    1.) There is a centralized “conductor” that passes information directly to the others in the group. John Bischoff’s recent piece “Tesla Sync” is a good example of that. In this piece, he is distributing pulses throughout the group and we play these pulses. Very simple, but very effective.

    2.) This is a variant on the last, where the centralized control is passed around. Phil Stone’s piece “Boss” comes to mind, where any one person is the “boss” and can turn the others in the group up and down in amplitude while in control, but at any time someone else in the group can grab the “boss” position and take over.

    3.) Finally, there is the more difficult “open network.” This is difficult because it depends on everyone working for it to function. A classic piece that exemplifies this is Tim Perkis’ “Wax Lips,” in which you create a fixed remapping of an input to an output. If you get a C3 from John, you will always send a Bb4 to Chris, for example. This means that there is a complex fixed rule set that any given note will follow. If everything is working, then one note will bounce around like a puck from player to player, being transposed in a fixed way at each spot. But it is not a perfect world, and let’s face it, we are mostly composers and not programmers, so mistakes are made ;-) This would create what we began to call “holes,” where a note would be remapped into a “black hole,” never to return, and the sound would stop. The initial solution was to “hose” the group with hundreds of notes and let the chaos ensue. Eventually we all fixed our code, and so we had to add a rule where we turned down the velocity (this was in the MIDI days) of the notes as they came in so they would die of attrition. (A sketch of this remap-and-decay behavior appears below.)

    So while the open network concept is inviting and interesting it is also the most difficult and time consuming to get working.
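
    Here is a toy model of the “Wax Lips” rules described above: each player holds a fixed remapping, a note bounces from player to player, and the velocity-attrition rule makes every note eventually die out. The mapping intervals and the decay amount are invented for illustration.

    REMAP = {"John": ("Chris", +10), "Chris": ("Scot", -3), "Scot": ("John", +5)}
    DECAY = 12                                  # velocity lost per hop

    def bounce(player, note, velocity):
        while velocity > 0:
            print(f"{player}: note {note}, velocity {velocity}")
            target, interval = REMAP[player]    # fixed remapping per player
            note = (note + interval) % 128
            velocity -= DECAY                   # notes "die of attrition"
            player = target

    bounce("John", 60, 100)                     # one note ricochets and fades out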

    > if so, have some topologies worked better than others?

    The reality of these differences between piece types supports the argument that local area network (LAN) based computer music performance is a singular and rich genre. Each of these piece types has its own way of being, and therefore a quality that is more like a technique than a simple artifact of a single piece. That being said, I think that none works better than the others; rather, they each work differently.

    Be warned, though: as I said, piece definitions that depend on everyone in the group being autonomous are much harder to guarantee will function without extra work. People interpret the instructions differently or just make mistakes in their code, and things get “messed up.”

    I would suggest trying the first “conductor” model as a beginning test mode, just to make sure the network is working. That’s what we do when we first start up: I run my “noosphere” code, which sends info out to everyone in the group, and they can confirm that the network is working.

    I hope that was helpful,
    Scot


  7. Peter:

    very helpful. thanks for the great answers Scot!

    peter


  8. Adrian Freed:

    Scot was an important early adopter of OSC and his pioneering work has helped many people use it for netjamming.
    Thanks Scot!


  9. Michael Bussiere:

    More networked repertoire can be found on our project website at

    marsville.tv

    thanks for the informative posts

