TECHNOTES ON NIO AND AUDIO PROGRAMMING WITH DIRECTOR 8




Jim Andrews

 

palindreme  

INTRO

I've enjoyed Shockwave sites/works in which the author provides not only a good piece of work but also some tech notes on the Director programming in Lingo. That's what I want to do here: discuss some of the technical aspects of Nio and, more generally, of Vismu (visual music). Consequently, this article is really for people interested in doing audio programming with Director.

I haven't encountered any current articles on Lingo audio; hence this one. The considerably expanded audio capabilities of Director 8 haven't been out for long; they should be discussed since they're a great tool for serious audio/multimedia work on the Web.

This article deals with operations performed on sounds in internal or external casts. Such sounds do not play until they are fully downloaded. When you want to loop sounds or have near-instantaneous access to them, internal or external cast members should always be used. Avoid using sounds that aren't members of internal or external casts for random playback purposes, because those sounds must be downloaded each time they are played.

Internal or external cast sounds can be compressed quite dramatically (you can set the compression ratio), and Vismu deals with fairly short sounds (2-12 seconds long). This means that the individual audio clips download very quickly. In fact, this approach works out better than being able to cache non-Cast-member sound files would. Even if it were possible to cache those files, you would also have to find a way to reliably stream multiple sounds at once, which is not feasible at slow connection speeds. The streaming strategy implemented in Nio is pretty much optimal, given the form of Vismu.

QUEUING AND PLAYING SOUNDS

A very useful audio feature of Director 8 is the ability to queue sounds. This provides continuous audio between sound files. There are two main ways to play sounds in Lingo. The command

sound(8).play(member("mySound"))

plays the sound named mySound in channel 8 immediately with no audible delay. However, if you don't want mySound to play until myOtherSound has finished, and you do not wish to hear any audible gap between the two sounds, you use the command

sound(8).queue(member("mySound"))

to queue the sound (assuming myOtherSound is currently playing in channel 8) and then

sound(8).play()

to set the queue in motion. mySound will not play until you issue this command.
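
Putting those pieces together, here is a minimal sketch of gapless playback in channel 8, using the member names above:

sound(8).play(member("myOtherSound")) -- starts at once
sound(8).queue(member("mySound")) -- waits in the channel's playlist
sound(8).play() -- mySound will follow myOtherSound with no gap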

However, in some circumstances, queuing sounds takes about 1.2 seconds (on a 400 MHz PC), and this duration is largely independent of the length of the sound (unless, perhaps, the sound is shorter than 1.5 seconds, the default preloadTime). I think it takes about 1.2 seconds to queue a sound if there's a queued sound already playing in the channel. If there is no sound playing in the channel, queuing a sound takes considerably less time; it's nearly instantaneous.

We can expect this latency to shorten over future versions of Director and as computers become faster. However, its existence probably indicates that some latency is necessary in this case. In other words, I think it unlikely that it is simply the result of sloppy programming on the part of the Director audio engineers. Why? Because, if you think about it, queuing a sound while the queue is in motion is a trickier engineering job than queuing one while the queue is not in motion: you don't want to stop the motion of the queue (ie, stop the sound) while you're queuing the next one. But that is a problem for the Director audio engineers to contemplate, not us users of Director. My point is that the 1.2 seconds will shorten over time but will not go away altogether, so any algorithms you implement to deal with the problem will remain useful and should only need adjusting where the 1.2-second duration is concerned.

When a sound is queued, the whole sound is not buffered, just the first preloadTime's worth of it. The default preloadTime is 1.5 seconds. So it currently takes 1.2 seconds to queue 1.5 seconds of sound if the queue is in motion. That can probably be improved upon considerably by the Director audio engineers.

I've tried setting the preloadTime to a value shorter than the 1.5 second default, but it doesn't seem to help.
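
For reference, the preloadTime can be passed in a property list when you queue a sound; a sketch (the 500 ms value here is arbitrary):

sound(8).queue([#member: member("mySound"), #preloadTime: 500])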

This latency is a nice little problem if you're building a highly interactive app involving continuous audio in which the user will definitely expect instant results when she takes an action to play this or that sound.

Nio deals with this problem rather gracefully.

Essentially the strategy is this: queue sounds well enough in advance (ie, at least 1.2 seconds before the sound ends) so that no audio gaps result. Queuing upon encountering a well-placed cuepoint will do the trick. But if the user takes an action while or after you are queuing sounds which should result in them hearing something other than what is in the queue, then immediately stop all the sounds, use the setPlayList command to empty the queue of sounds completely, then requeue the sounds correctly, and finally set the queue in motion using the sound(channel).play() command. You can do this with all eight channels in the blink of an eye. There is an audible interruption in the sound, however.
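
Here is a sketch of that flush-and-requeue step. getNextMember is a hypothetical function standing in for whatever code determines which sound each channel should now be playing:

on flushAndRequeue
  repeat with theChannel = 1 to 8
    sound(theChannel).stop() -- stop whatever is playing
    sound(theChannel).setPlayList([]) -- empty the queue completely
    sound(theChannel).queue(getNextMember(theChannel)) -- requeue the correct sound
    sound(theChannel).play() -- set the queue in motion again
  end repeat
end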

But if you are clever about it, you can requeue the sounds using the startTime parameter, which determines how far into the sound play commences, to almost eliminate the audio gap. In other words, before you requeue, fetch the currentTime of the sound, ie, where it stopped playing, and requeue the sound to start just after that time (not immediately after, but a bit further along, perhaps 212 ms later, to end up with hardly any interruption in the music's rhythm).
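
A sketch of that refinement for a single channel; the channel number, the member name, and the 212 ms nudge are placeholders:

resumeTime = sound(3).currentTime + 212 -- note where we were, plus a nudge
sound(3).stop()
sound(3).setPlayList([]) -- empty the queue completely
sound(3).queue([#member: member("interruptedSound"), #startTime: resumeTime])
sound(3).play() -- resumes just past the interruption point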

This is the strategy Nio uses in both Verse One and Verse Two when the user takes an action that should result in some sound playing that hasn't been queued. The resulting performance compares acceptably with desktop sound-studio programs like CakeWalk.

 

Eye of Egg Word

CUEPOINTS

You can use SoundForge or SoundEdit 16 to embed cuepoints into sounds. You can also name the cuepoints with these tools. Director understands cuepoints (they're called "markers" in SoundForge) and can execute code when it passes a cuepoint. This is accomplished by writing an on cuePassed handler, which is documented in the Lingo Dictionary. Within the handler you can also find out the name of the cuepoint, its number, and the channel it's playing in.
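
Here is a minimal sketch of such a handler in a movie script (when the handler lives in a behavior, it receives a me parameter first, as usual):

on cuePassed channelID, cuePointNumber, cuePointName
  -- the parameters give you the sound channel, the cuepoint's number, and its name
  put "passed cuepoint" && cuePointName && "in channel" && channelID
end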

This ability to embed cuepoints into sounds and have Director understand when it passes a cuepoint is crucial to being able to write applications that are in part audio-driven and require precise timing and synchronization both between different sounds and between sound and visuals. By using cuepoints, visual and computational and sonic events in the app can be triggered in response to the state of the audio, rather than in response to what frame the movie is on or even when timers time out.

Timers are not as accurate as the audio stream, it seems, or at least they are not automatically forever in synch with it. Nor is the frame rate of a movie particularly constant, because different frames require different amounts of processing. And, in any case, whether the audio stream flows perfectly constantly or not, you want to synch with it in audio-driven apps. You need direct contact with the state of the audio, and cuepoints provide that empirical, immediate feedback on the current state of the audio.

And they do so in a way that is not computationally intensive.

Direct contact with the current, immediate state of the audio practically requires cuepoints when you queue rather than play sounds, because you queue a sound at time t and it usually doesn't start playing until some time later, when it comes up in the queue.

ALTERNATIVES TO CUEPOINTS

There are other more computationally intensive ways to be in direct, immediate touch with the state of the audio. For instance, sound(1).currentTime returns the number of milliseconds that have elapsed since the sound in channel 1 played from its startTime, which you can set to be either the beginning of the sound or some arbitrary time into it. And sound(1).elapsedTime returns how many milliseconds the sound has played in total, if it is looping, say. But it would be nice to avoid having to fetch this data as often as you need to in order to know very precisely when a new sound begins. Further, currentTime and elapsedTime are only updated four or five times a second, which is not particularly good resolution on which to base musically precise synchronization, though you can probably get away with it if you're using it to synch vis and audio (rather than audio and audio, which might require more precise timing, depending on what you are doing).
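
If you do go this route, the polling would typically live in a frame script, something like the following sketch (the 4000 ms threshold is arbitrary):

on exitFrame me
  -- the computationally heavier route: check the sound position every frame
  if sound(1).currentTime >= 4000 then
    put "four seconds into the sound in channel 1"
  end if
  go to the frame
end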

By the way, Director puts a considerable amount of handy data and methods concerning sounds at your disposal. There are more than 50 commands available in Director 8 to get and set sound data. Quite deluxe. My hat is off to Jonathan Powers and the other sound engineers working on Director. And I'd like to thank Jonathan for answering my early questions.

The two primary documents to consult are the Lingo Dictionary and Using Director Shockwave Studio, both available in PDF.

 

GETPLAYLIST()

You can also take advantage of the getPlayList command. For instance, sound(1).getPlayList() returns a list of the sounds currently queued to play in channel 1, not including the sound currently playing in channel 1, if there is one. So when sound(1).getPlayList().count (which is the number of sounds queued to play in channel 1) equals 0, the final sound in the queue of sounds for channel 1 is about to begin in something less than the preLoadTime associated with the sound (which is the duration of sound buffered before the sound begins playing). Sounds do not leave the queue when they begin playing; they leave the queue when they are buffered, which is slightly before they begin playing.
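
A small sketch of putting this to use (the channel and member names are placeholders):

sound(1).queue(member("loopA"))
sound(1).queue(member("loopB"))
sound(1).play()
-- then, in a handler that runs periodically:
if sound(1).getPlayList().count = 0 then
  -- the last queued sound has been buffered and is about to begin
  -- (or has already begun), so top the queue back up
  sound(1).queue(member("loopC"))
end if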

There is enough of a difference between the time when sound(channel).getPlayList().count = 0 and the time when a queued sound starts playing that you cannot rely on it to initiate events you mean to be synchronized with the start of a new sound (unless, of course, initiating those events takes as long as it apparently takes to buffer queued sounds when something is already playing). But you can use the fact that sound(channel).getPlayList().count = 0 to initiate requeuing of sounds if the resulting performance is acceptable to you. Taking the cue from cuepoints is better if you do not wish to queue sounds until the last possible moment, which is the strategy in Verse Two; Verse Two involves not only arbitrary layering of sounds but also arbitrary sequencing of them. By 'arbitrary' I mean that at any point in time, any of the available sounds could be in any of the available layers or any of the available sequential positions.

Verse One of Nio uses sound(channel).getPlayList().count = 0 to initiate re-queuing. There is a script attached to the frame in which Verse One runs. Verse One stays in one frame after the initial streaming has been completed. In other words, Verse One is a 'one-frame movie' entirely controlled by Lingo once the streaming (which extends over about sixteen frames) is finished. The first thing the frame script does is check whether sound(2).getPlayList().count = 0. If this is not so, the frame script does absolutely nothing.

I should also mention that the movie frame rate is set at 64 fps while the movie is in that frame and elsewhere in Verse One. In other words, this condition is checked rather frequently. The reason for the fast rate is that if the user clicks a sound icon that isn't queued, which should initiate play of the associated sound and animation, one would like the re-queuing to be done el pronto. And the re-queuing is done in this frame script.
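
A minimal sketch of that kind of frame script; requeueSounds is a hypothetical handler standing in for Verse One's actual re-queuing code:

on exitFrame me
  if sound(2).getPlayList().count = 0 then
    requeueSounds() -- rebuild the queues for the channels
  end if
  go to the frame
end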

TIMERS IN NIO AND VIS/AUDIO SYNCH

There are several timers in Nio. One is used during the streaming process to minimize the number of times Nio checks (using frameReady()) to see whether the next frame has finished downloading, because that checking seems to suck juice, and Nio has other things to do; it does a lot of sound processing and a lot of animation processing. Also, it seems you can't currently put a cuepoint near the beginning or end of a sound and expect it to work properly if you are repeatedly queuing sounds (bug!). So I've had to insert a cuepoint about 1.3 seconds before the end of each sound (the sounds are all 4 seconds long), queue the next sounds to be played, and set a timer to go off in 1.3 seconds, because there is some processing to be done when a new sound starts, such as starting its associated animation.

Also, when a new sound starts, Nio sets up a timer that goes off at the end of each beat of music. At the end of each beat, Nio adjusts the frame rates of the animations currently playing so that the animations stay synched with the audio.
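
A sketch of the pattern described in the last two paragraphs: the cuepoint handler queues what comes next and restarts the ResetAnimations timer (shown below) so that it fires when the new sound actually begins. queueNextSounds is a hypothetical stand-in for Nio's re-queuing code:

on cuePassed channelID, cuePointNumber, cuePointName
  queueNextSounds(channelID) -- queue the next sounds for this channel
  timeout("ResetAnimations").period = 1300 -- fires in 1.3 seconds, when the new sound starts
end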

The more animations playing at once, the slower each of them plays, if left to their own devices. I've found that changing the fixedRate property (basically the frame rate) of the imported Flash animations speeds them up. Adjusting the frame rate is preferable to adjusting the frame: if you change the frame the animation is on rather than the frame rate, the animation will end up looking jerky.
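
A sketch of nudging the playback rate of a Flash sprite; the sprite number and the rate are placeholders, and playBackMode needs to be #fixed for fixedRate to take effect:

sprite(12).playBackMode = #fixed
sprite(12).fixedRate = 18 -- frames per second for this animation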

In Verse One of Nio, there are times when the animations speed up dramatically and aren't in synch with the sound. I could have fixed that, but I thought it looked neat. Verse Two is more constant in its synchronization of audio and visuals (but not quite as alive as Verse One, for my money).

TIMERS IN DIRECTOR

The code that follows is color-coded in Director's script window: keywords of the Lingo language get one color, "custom" Lingo words (ie, words that also have special meaning to the Lingo interpreter) another, comments another, "literals" (strings and numbers) another, and everything else appears in the default color.

Many thanks to Chuck Neal of mediamacros.com for helping me get timer objects figured out.

Here is how to create and use timers in Director.

I first create an object, ie, an instantiation of a parent script, usually in the 'on prepareMovie' handler, before the movie starts.

gResetAnimations = new(script "ResetAnimationsScript", param1, param2)

gResetAnimations is a global variable. It's not the timer; it's just handy to have an object for each timer you create if there are parameters (param1, param2) associated with the workings of the timer, specific to what the timer does in your app. There is a cast member in the movie, a parent script named ResetAnimationsScript, in the Cast window (the window that contains the media elements of the movie). And then, in the next line of the 'on prepareMovie' handler, I create the timer:

dummy = timeOut("ResetAnimations").new(VOID, #ResetAnimations, gResetAnimations)

Let me explain that line. It names the timer ResetAnimations. The 'period' of the timer, ie, the number of milliseconds it runs before timing out, is set to VOID because I don't want the timer to run just yet. You can assign a handler to a timer so that when the timer times out, the code in the handler runs; when ResetAnimations times out, it runs the ResetAnimations handler. And the associated object of the timer is gResetAnimations; the timer will look in that object first for the ResetAnimations handler, though the handler could just as well be in any movie script.

It should also be noted that there is a bug in Director 8, 8.5, and 8.5.1 (the most recent version as of this writing) which necessitates the dummy part in the above line of code. If you do not assign the timeOut object to a variable, Director does not dispose of the memory for the timeOut object correctly and a memory leak (and possibly worse) results. However, the above takes care of the problem. Thanks to Jakob Hede Madsen for pointing this out.

Now let's look at the ResetAnimationsScript. What this code does is reset animations back to their first frame if they are playing when the timer times out, which happens when new sounds begin playing.

property theParam1, theParam2

on new me, myParam1, myParam2
  me.theParam1 = myParam1
  me.theParam2 = myParam2
  -- This is the constructor of the object, ie, this code runs only when the object is created.
  -- theParam1 and theParam2 are not functional in the example code here but are included just
  -- to show how to pass and use such parameters when creating objects. This particular object
  -- has two pieces of data (theParam1, theParam2) and one method or handler (ResetAnimations).
  return me
end

on ResetAnimations me
  global gSoundIconsPlaying, gBeatLength, gFreezeSprite, gInVerse1
  if gInVerse1 then
    -- if we are in Verse One of Nio
    if sprite(gFreezeSprite).up then
      -- if the Start/Stop button is up
      repeat with Channel = 3 to 8
        -- Director has 8 channels of stereo sound at once
        if gSoundIconsPlaying[Channel] <> 0 then
          -- if there is an animation playing in the Channel
          animToReset = sprite(gSoundIconsPlaying[Channel]).itsAnimation
          -- name the animation to be reset
          sprite(animToReset).frame = 1
          -- reset the animation to frame 1
          sprite(animToReset).play()
          -- play the animation
        end if
      end repeat
      timeout("ResetAnimations").period = VOID
      -- stop this timer (it will be restarted by the next cuepoint passed)
      timeout("AdjustAnimationSpeeds").period = gBeatLength
      -- start the timer that goes off at the end of each beat
    end if
  end if
end

Note that the timer turns itself off with the line

timeout("ResetAnimations").period = VOID

and starts a different timer with the line

timeout("AdjustAnimationSpeeds").period = gBeatLength

The latter timer is the one that adjusts the frame rate of the animations at the end of each beat of music (except the last) for each sound.

In general, once you have created a timer, you start it, as in the last line of code above, by assigning its period a number, which determines how long it runs before timing out. Unless you set its period to VOID (or forget() the timer), it will keep running and time out again each time the period elapses.

For more information on timer objects, check out Enigma n2, which uses them. The source code is available. Also, see Using Director Shockwave Studio. Additionally, consult the following terms in the Lingo Dictionary:

  • forget()
  • new
  • period
  • persistent
  • target
  • time (timeout object property)
  • timeout()
  • timeoutHandler
  • timeoutList

 

 

Poem in transition

 

LINKS TO USEFUL DIRECTOR-RELATED SITES

 

 

