WWDC2000 Session 175
Transcript
Kind: captions
Language: en
Hello, I'd like to thank you all for coming this afternoon. This is a very exciting session to be involved in, and it's actually one of three sessions to do with Mac OS X support for audio and MIDI. This session will cover the MIDI support that we're providing in OS X, and it's the first time in a number of years that Apple has made a serious commitment in this area, so as I said, it's very exciting to be involved in this project. There are two more sessions on Friday, late in the afternoon starting at three-thirty, covering both the I/O Kit side of audio device support and the user application interface to audio devices, and I encourage you to go to those. And then, after those sessions on Friday, there will actually be a party, and I'll give you some more details on that later. So without any further ado, I'd like to introduce Doug Wyatt, who will be talking about the MIDI services on OS X. Thank you.
[Applause]
Good afternoon, I'm Doug Wyatt. I'm a software engineer in the Core Audio group, and I'm here to tell you about the new MIDI system services on Mac OS X. Before coming to Apple I worked at Opcode Systems for about twelve years, working on MIDI applications like many of you, and I was the author of OMS there. So here's what I'm going to talk about today. I'm going to talk a little about the history and the goals of our new system services, including the history of MIDI on the Macintosh, because that will help you understand our goals. I'll go over some of the key concepts in the MIDI API and we'll look at some of the functions in it. I'll talk a little about the performance challenges of getting MIDI working on a modern multitasking operating system like OS X, and I'll tell you about the availability of the new MIDI system services.
Going back to the mid-1980s, the first Macintosh MIDI interfaces connected directly to the serial hardware, and developers would write to the serial ports directly because it was most efficient, and I don't think the serial drivers supported external clocking in those days. That was fine and good for a while, until the late eighties, when we started to see multi-port MIDI interfaces like Mark of the Unicorn's MIDI Timepiece, and sound cards like Digidesign's MacProteus and SampleCell, and there started to be a need for system software so that applications could deal with these different kinds of hardware in a hardware-independent way. So in 1989 we saw Apple's MIDI Manager, which was the first attempt to solve this problem. Unfortunately, MIDI Manager had some performance problems, and it got kind of unwieldy when you tried to use it in large studio environments. Opcode released OMS in 1990 to address some of these limitations of MIDI Manager, and in the course of a few years it became a de facto standard: most programs on the Macintosh now support it on OS 9, people making MIDI hardware tend to write OMS drivers for their hardware, and it's the only way a lot of these applications and pieces of hardware talk to each other now. But OMS was controlled by a competitor, and it isn't being supported anymore as far as I can tell; it continues to work on OS 9, but it's not being developed any further, and the prospects for it to work well on OS X are not good. So that, combined with the fact that developers have been sort of chipping away at MIDI compatibility with problems of software synths and so on, means we see a need for Apple to provide a single set of MIDI system services for OS X.
So that's our main goal on OS X: to provide a single standard so that everyone's hardware and software can play together nicely again. In support of that goal, we want to focus just on the basic MIDI I/O services, and to do those basic MIDI services with highly accurate timing, meaning low latencies and low jitter. We also want to make the MIDI services open source, so that you as developers can see what we're doing, help us fix things if we're not doing them right, and not have to be afraid of repeats of the fates of MIDI Manager and OMS.
So, looking at the MIDI system services in the big picture of the rest of the OS: they are layered above the I/O Kit in the kernel. I/O Kit is where we see drivers for talking to hardware, and we have a concept of MIDI drivers there. Applications can talk to the MIDI services, which in turn talk to the MIDI drivers. For higher-level applications of MIDI, like just playing MIDI files, you can use the QuickTime music device APIs, and Chris Rogers will be telling you a bit about that in the second half of this discussion.
So here are the main pieces of the new MIDI services. We have a driver model for MIDI drivers. Applications can share access to hardware, meaning that multiple MIDI applications can send simultaneously to the same device, and a MIDI device can send MIDI into the computer and multiple applications can all receive it. We timestamp all MIDI input, we schedule all MIDI output in advance for applications that want us to do that, and all that scheduling is done using the most accurate timing hardware on the computer, the host clock as returned by UpTime. The MIDI services provide a small central repository of information about the MIDI hardware that's present. They don't try to replicate the full functionality of OMS and FreeMIDI, where a user with fifteen synthesizers can enter information about all of them; when I say there's a repository of device information, that's just about the actual MIDI interfaces and cards that are present. We don't, at least for now, concern ourselves with the devices that are attached externally to MIDI interfaces. And the MIDI services provide some basic inter-process communication, so if you have a small MIDI utility, maybe a transposer or arpeggiator or something, you can use the MIDI services to create an application like that.
So now I'd like to get into some of the objects and functions in the MIDI API. First there's the MIDI client, and for those of you who've used MIDI Manager and OMS, this is a familiar concept. The first thing you typically do is create a MIDI client object, and that's done with MIDIClientCreate. After creating your client, you can then create MIDI port objects; this is also similar to MIDI Manager and OMS. Ports are objects through which your programs can send and receive MIDI messages, and they're created at startup. So here we see MIDIInputPortCreate and MIDIOutputPortCreate, which create input and output ports. The myReadProc parameter that is passed to MIDIInputPortCreate is a procedure that will get called when MIDI comes into your program through that port.
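For reference, here is a minimal sketch of those first steps in C, following the calls named above. The signatures shown are the ones from the Core MIDI headers that eventually shipped, so they may differ slightly from the preview API described in this talk.

```c
#include <CoreMIDI/CoreMIDI.h>

// Read proc: called by the MIDI system (on its own high-priority thread)
// whenever MIDI arrives through the input port.
static void MyReadProc(const MIDIPacketList *pktlist, void *refCon, void *srcRefCon)
{
    // handle incoming packets here (keep this quick)
}

static MIDIClientRef gClient;
static MIDIPortRef   gInPort, gOutPort;

static void SetUpMIDI(void)
{
    // One client object per application.
    MIDIClientCreate(CFSTR("My MIDI App"), NULL, NULL, &gClient);

    // Ports are created once, at startup.
    MIDIInputPortCreate(gClient, CFSTR("Input"), MyReadProc, NULL, &gInPort);
    MIDIOutputPortCreate(gClient, CFSTR("Output"), &gOutPort);
}
```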
All the MIDI I/O functions use the MIDIPacketList structure when sending and receiving MIDI. It's simply a list of MIDIPacket structures and can be any length, for now. Those MIDIPacket structures are themselves variable-length structures; each packet contains one or more simultaneous MIDI events, with one exception: if you have a system exclusive message, it has to be in its own MIDI packet, and the reason for that is just to make our own internal parsing simpler. For similar reasons, running status is not allowed inside a MIDI packet; but otherwise, the data array portion of the MIDI packet is a little mini stream directed at one device. And MIDI packets are timestamped; I'll talk a bit about that later.
So, dealing with variable-length structures like MIDIPacket and MIDIPacketList can be a little annoying, so we've provided a few simple helper functions. There's MIDIPacketNext, which you can use when dealing with a MIDIPacketList: you can get a pointer to the first packet in the packet list, then use MIDIPacketNext to advance to the next one, and so on. When sending MIDI, you can use MIDIPacketListInit and MIDIPacketListAdd to dynamically build up a MIDI packet list.
And here's an example of using them. Here I'm creating a buffer of 1K bytes on the stack and casting it to a MIDIPacketList. I call MIDIPacketListInit on it to initialize it, and then I'm adding a simple MIDI note-on event to it with MIDIPacketListAdd. In more complex examples, I could go on adding more events to be played at different times and build up a MIDI packet list. When MIDIPacketListAdd returns NULL, I know that the list has become full and it's time to send it, and then I can call MIDIPacketListInit and start calling MIDIPacketListAdd on it again.
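A small sketch of what that slide example probably looked like in C, assuming a note-on on channel 1 and a timestamp of 0, meaning "now"; the buffer size and note values here are just illustrative.

```c
#include <CoreMIDI/CoreMIDI.h>

static void BuildAndSendNoteOn(MIDIPortRef outPort, MIDIEndpointRef dest)
{
    Byte buffer[1024];                           // 1K buffer on the stack
    MIDIPacketList *pktlist = (MIDIPacketList *)buffer;
    MIDIPacket *pkt = MIDIPacketListInit(pktlist);

    Byte noteOn[3] = { 0x90, 60, 100 };          // note-on, middle C, velocity 100
    pkt = MIDIPacketListAdd(pktlist, sizeof(buffer), pkt,
                            0 /* timestamp: now */, 3, noteOn);
    if (pkt == NULL) {
        // The list is full: send it, re-initialize it, and keep adding.
    }
    MIDISend(outPort, dest, pktlist);
}
```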
OK, now that we've looked at the structures representing the MIDI data itself, here are the objects that represent MIDI sources and destinations in the system. The lowest-level object is a MIDI endpoint, which is a single MIDI source or destination, a single 16-channel MIDI stream. So your program can have one simple view of the system as an array of sources and destinations.
Here is an example of finding all the destinations in the system and sending a MIDI packet list to each one. MIDIGetNumberOfDestinations and MIDIGetDestination are used to walk through all the destinations, and then MIDISend takes as its first argument an output port, which you created at startup, then the destination which was just returned from MIDIGetDestination, and then a MIDI packet list. So in this example we're sending the same MIDI packet list to all the destinations.
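That slide likely looked something like this sketch, which broadcasts one packet list to every destination; as before, the signatures shown are the ones from the Core MIDI headers that shipped later.

```c
// Send the same packet list to every destination in the system.
static void SendToAllDestinations(MIDIPortRef outPort, const MIDIPacketList *pktlist)
{
    ItemCount n = MIDIGetNumberOfDestinations();
    for (ItemCount i = 0; i < n; ++i) {
        MIDIEndpointRef dest = MIDIGetDestination(i);
        MIDISend(outPort, dest, pktlist);
    }
}
```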
Similarly, here's an example of how to find, and open input connections to, all the MIDI sources in the system. We call MIDIGetNumberOfSources and MIDIGetSource to walk through all the sources, and then we call MIDIPortConnectSource to establish a connection from each MIDI source to your program's input port.
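A corresponding sketch for the input side, again using the call names as they later shipped:

```c
// Connect the input port to every source in the system, so the read proc
// will receive MIDI from all of them.
static void ConnectToAllSources(MIDIPortRef inPort)
{
    ItemCount n = MIDIGetNumberOfSources();
    for (ItemCount i = 0; i < n; ++i) {
        MIDIEndpointRef src = MIDIGetSource(i);
        MIDIPortConnectSource(inPort, src, NULL /* per-connection refCon */);
    }
}
```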
Now, the reason we ask you to create such connections explicitly, and this concept is familiar to those of you who've used OMS, is that if we've got a bunch of MIDI sources sending stuff into the computer, we only incur the overhead of delivering that MIDI to your application when it's from a source that your application cares about listening to. And if you remember, when we created that input port at the beginning of the program we passed myReadProc, and it gets called when that MIDI comes in.
A note about the read proc: the MIDI library creates a thread on your program's behalf to receive that data. So, similarly to the way that on Mac OS 9, for those of you who've done MIDI programming there before, your MIDI would come in at interrupt level and you'd have to be careful about critical regions and not accessing memory, on OS X you need to be aware that your read proc is called from a separate thread, and you may have synchronization issues with any data you access from that thread. Also be aware that it's a high-priority thread, so don't do too much work there.
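To illustrate the shape of a read proc, here is a hedged sketch: it iterates the packet list with MIDIPacketNext and does as little work as possible on the MIDI thread, handing the data off for the application's own thread to pick up. The EnqueueForMainThread helper is hypothetical; the hand-off strategy is up to the application, not prescribed by the API.

```c
// Called on the MIDI services' high-priority thread. Keep it short:
// copy the data out and let another thread do the heavy lifting.
static void MyReadProc(const MIDIPacketList *pktlist, void *refCon, void *srcRefCon)
{
    const MIDIPacket *packet = &pktlist->packet[0];
    for (UInt32 i = 0; i < pktlist->numPackets; ++i) {
        // packet->timeStamp is in host clock units; packet->data holds
        // packet->length bytes of MIDI.
        EnqueueForMainThread(packet->data, packet->length, packet->timeStamp); // hypothetical helper
        packet = MIDIPacketNext(packet);
    }
}
```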
OK, so those are MIDI sources and destinations. The next higher-level object in the MIDI API is the MIDI entity, which is a logical subcomponent of a device that groups together some number of endpoints. For example, you might have a USB device which has a General MIDI synthesizer in it and a pair of MIDI ports; that device can be thought of as having two entities, the synthesizer and the pair of MIDI jacks. An eight-port MIDI interface with eight ins and eight outs might be thought of as having eight entities, each of them with a source and a destination endpoint. The reason we have this concept of an entity is so that if your program wants to communicate in a bidirectional manner with some piece of hardware out there, you have a way of associating the sources and destinations: you know which ones constitute a pair.
The next level up in the MIDI API is the MIDI device, which represents an actual physical device, like a MIDI interface, a card, or something that's on FireWire; it's something that's controlled by a driver. The driver for that device will have located it and registered it with the system. This diagram illustrates how MIDI devices contain entities, which contain endpoints. And here's a quick look at the functions you would use to walk through the system and locate the devices and entities that are present: MIDIGetNumberOfDevices and MIDIGetDevice will iterate through the devices, and MIDIDeviceGetNumberOfEntities and MIDIDeviceGetEntity will walk through the entities that are associated with a device.
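A sketch of that device and entity walk, using the call names as they appear in the Core MIDI headers that shipped (the preview names in the talk may have been slightly different):

```c
// Walk every device and, within each device, every entity and its endpoints.
static void DumpMIDIHardware(void)
{
    ItemCount nDevices = MIDIGetNumberOfDevices();
    for (ItemCount d = 0; d < nDevices; ++d) {
        MIDIDeviceRef device = MIDIGetDevice(d);
        ItemCount nEntities = MIDIDeviceGetNumberOfEntities(device);
        for (ItemCount e = 0; e < nEntities; ++e) {
            MIDIEntityRef entity = MIDIDeviceGetEntity(device, e);
            ItemCount nSrc  = MIDIEntityGetNumberOfSources(entity);
            ItemCount nDest = MIDIEntityGetNumberOfDestinations(entity);
            // nSrc and nDest endpoints hang off this entity.
            (void)nSrc; (void)nDest;
        }
    }
}
```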
OK, now that we've looked at devices, entities, and endpoints: there's a set of calls to find out information about those devices, entities, and endpoints, and we call these attributes properties. The property system is extensible, meaning that anyone can make up a property to attach to their device, but we've defined a few simple ones, like its name, its manufacturer name, its model number, and the MIDI channels that it's listening on, if someone knows that. So the system is extensible, and properties can be inherited, which means that if you ask, for example, an endpoint "what is your manufacturer name?", you'll probably end up with the device's manufacturer name, because the driver writer will probably just say, here's my device, I'm the ABC Corporation, and here's my model name, and attach that to the device; the entity and endpoint will inherit that property from the device. And as I said, for now, properties are most likely only going to be set by drivers.
Here's a simple example of obtaining a property of an object. We use the MIDIObjectGetStringProperty call, passing the constant kMIDIPropertyName, and we get back a CFString which is the name. We can convert it to a C string, print it, and then release the CFString reference that we got back. CFString is part of Core Foundation, which you can read about in our documentation.
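A minimal sketch of that property query, assuming UTF-8 for the C-string conversion; the constant and call are as they appear in the shipped Core MIDI headers.

```c
#include <CoreMIDI/CoreMIDI.h>
#include <stdio.h>

// Print the name of any MIDI object (device, entity, or endpoint).
static void PrintMIDIObjectName(MIDIObjectRef obj)
{
    CFStringRef name = NULL;
    if (MIDIObjectGetStringProperty(obj, kMIDIPropertyName, &name) == noErr && name != NULL) {
        char cname[256];
        if (CFStringGetCString(name, cname, sizeof(cname), kCFStringEncodingUTF8))
            printf("name: %s\n", cname);
        CFRelease(name);    // we own the returned CFString
    }
}
```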
highest level object in MIDI API is the
MIDI setup which represents a saved or
savable state of the system it's
essentially just a list of the devices
the drivers located the MIDI interfaces
and cards that are present but we do
have facilities there for keeping track
of some other details like which drivers
device on the serial port which device
does the user prefer for playing back
general MIDI files or whatever here are
some examples of how your program can
manipulate midi setups those of you
who've used on that might be afraid of
the term setup because there was that
own a setup program that not everyone
liked but we don't have any user
interface involved here although at
worse I could envision a dialog where
the user has to authorize serial ports
to be sort of searched but in any case I
don't think we're going to have any any
user interface here MIDI setup create
simply tells the system to go
interrogate all the drivers find out
what hardware is present make a MIDI
setup containing all those devices and
return it then you would almost always
call MIDI setup install after calling
MIDI setup create which just tells the
system here's the MIDI setup make that
the current state until someone else
tells you otherwise MIDI setup get
current returns a reference to the
current MIDI setup and then there's MIDI
setup to data and MIDI setup from data
which allow you to convert a MIDI setup
object to and from a textual
representation which is an XML and can
be saved to a file
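A hypothetical sketch based purely on the call names given here; the signatures are assumptions, written to follow the conventions of the rest of the API, and this setup API was later deprecated.

```c
// Hypothetical sketch of the setup calls named above; signatures assumed.
static void SnapshotCurrentSetup(void)
{
    MIDISetupRef setup = NULL;
    MIDISetupCreate(&setup);        // interrogate drivers, build a setup
    MIDISetupInstall(setup);        // make it the current state

    CFDataRef xml = NULL;
    MIDISetupToData(setup, &xml);   // XML form, suitable for saving to a file
    // ... write xml to disk; later it could be restored with MIDISetupFromData ...
    if (xml) CFRelease(xml);
}
```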
OK, so that's the tour of some of the objects to which you send and receive MIDI. Now I'd like to talk about some of the issues of timing when sending and receiving MIDI. Your programs will probably want to schedule MIDI output a little bit in advance, and you can do that by using timestamps. The timestamps in the MIDI packets we looked at earlier use the host clock time as returned by UpTime. We suggest you don't schedule events too far in advance: if you schedule a whole five-minute MIDI file to be played, it'll play and you won't have any way to tell it to stop, unless the user quits your program, probably. So we suggest that you use a number, say 100 milliseconds or so, as a guideline of how far in advance to schedule. That's a short enough period of time to be relatively responsive when the user says stop, but it's also far enough in advance so that if you're talking to a piece of hardware that's got some latency in talking to it, we can still get timing accuracy in sending events to that device.
An important thing about scheduling MIDI output is that we have support for devices, pieces of hardware, that are capable of doing their own scheduling and sending of MIDI. A driver may attach a property to its device that says: I want to receive events for this device some number of milliseconds in advance. So you as an application writer should check for that property, see if the driver writer has attached it to the device, and respect it. If the driver is saying, I want my MIDI, say, five milliseconds in advance, please, then you as an application writer can get the best timing from the system by making sure that your MIDI events get sent five milliseconds in advance or more.
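For illustration, here is a sketch of computing a timestamp a given number of milliseconds in the future. The talk refers to the host clock via UpTime(); on the systems that shipped, mach_absolute_time() returns the same clock, which is the assumption made here.

```c
#include <CoreMIDI/CoreMIDI.h>
#include <mach/mach_time.h>

// Compute a MIDITimeStamp some number of milliseconds in the future.
static MIDITimeStamp HostTimeInFuture(double milliseconds)
{
    static mach_timebase_info_data_t tb;
    if (tb.denom == 0)
        mach_timebase_info(&tb);                    // ns = ticks * numer / denom

    uint64_t nanos = (uint64_t)(milliseconds * 1.0e6);
    uint64_t ticks = nanos * tb.denom / tb.numer;   // convert ns -> host ticks
    return mach_absolute_time() + ticks;
}
```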
A few timing issues on incoming MIDI: we timestamp it with the host clock as soon as possible. And to schedule your own tasks, there are a lot of different ways to do this in Mac OS X; there are a number of APIs, and what we're recommending is that you use the calls in Multiprocessing.h, which is part of Carbon.
Now, for those of you who are writing MIDI drivers for your own MIDI hardware: MIDI drivers are packaged as CFPlugIns, which are a little bit intimidating at first glance. When I first looked at it I said, this looks like COM, I'm scared, but it's not that bad. We have some example drivers you can build on, and that makes it pretty easy. Usually you won't need a kernel extension, and this is true if you're writing a driver for a USB, FireWire, or serial MIDI device. If you're writing a PCI card's driver, then you will need a kernel extension; but usually, as with USB in my example drivers, you're just a USB user client. These terms will be familiar to those of you who've been to the I/O Kit sessions, and for those of you writing drivers, I recommend you go find out more about I/O Kit if you haven't already.
The driver programming interface, from the MIDI side of things, is pretty simple. There are just a few calls to implement: there's one to locate your hardware, there are calls to start and stop communicating with your hardware, there's a call to send some MIDI events to your hardware, and when you receive incoming MIDI events there's a way you can call back into the MIDI system to have those MIDI events delivered. I just wanted to make a quick note here: having looked at the source code for USB drivers on OS 9 and written one on OS X, it's an order of magnitude easier, at least I thought so, on OS X to write a USB driver, and that's very encouraging.
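To give a feel for the last of those calls, here is a sketch of a driver handing received MIDI back to the system, based on the Core MIDI driver headers that shipped (the preview interface may have differed); MIDIReceived is the call a driver uses to deliver incoming MIDI for a source endpoint.

```c
// Inside a MIDI driver plug-in: incoming bytes from the hardware are parsed
// into a MIDIPacketList and handed back to the MIDI server, which then
// distributes them to every client listening to that source endpoint.
static void DeliverIncomingMIDI(MIDIEndpointRef source,
                                const Byte *bytes, ByteCount length,
                                MIDITimeStamp when)
{
    Byte buffer[512];
    MIDIPacketList *pktlist = (MIDIPacketList *)buffer;
    MIDIPacket *pkt = MIDIPacketListInit(pktlist);
    pkt = MIDIPacketListAdd(pktlist, sizeof(buffer), pkt, when, length, bytes);
    if (pkt != NULL)
        MIDIReceived(source, pktlist);   // call back into the MIDI system
}
```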
OK, here's a diagram that shows the pieces of the MIDI implementation. At the top we see your client applications. In green, we supply a MIDI framework, which is a client-side dynamic library; that library communicates with the MIDI server process, and the reason we have that server process is so that incoming MIDI from some piece of hardware can be efficiently distributed to multiple applications. Below the MIDI server, we see that it loads and controls the MIDI driver plug-ins, and those driver plug-ins communicate with I/O Kit. Now, you'll notice in the diagram the horizontal gray lines, which indicate address space boundaries for different processes: we have the kernel, the MIDI server, and your client applications' address spaces. Which brings us to one of the main performance issues in dealing with MIDI, which is moving data between different protected address spaces. Now, we've got some pretty good and fast mechanisms for doing it, but nonetheless it's still important to be aware of, and there are some things you can do in your program to squeeze extra performance out of the system. When possible, do schedule your output a few milliseconds in advance, and look at that property of the driver to see if it wants to get data a little bit ahead of time. This will especially enable you to send multiple MIDI events that happen close together in time with a single call to MIDISend: instead of sending one MIDI event at a time, if you package up even just a few milliseconds of data at a time per call to MIDISend, that will help the system be a bit more efficient. And I should also mention that we have a dependency on the core OS scheduling mechanisms; things are good there, and they're getting better.
OK, to show you that MIDI is actually up and running on OS X, to some extent, I've got a demo setup here. At the bottom you see I've got a MIDI keyboard and a sound module, I've got a MIDI interface, that blue thing, which connects to the computer by USB, and over on the right there we see the various layers of software through which MIDI messages travel when I run these programs. OK, first I've got a simple program which just plays a series of MIDI notes at very regular intervals, using the scheduler built into the MIDI server, and hopefully we'll hear that they're very nice and regular. Let's let that play. That's pretty good and regular sounding, I think. I've got another program here. Here I've just got the keyboard playing its own internal sounds, and now I'm going to run a program that'll take the MIDI from the keyboard, send it through the USB interface into the computer, through the whole stack of software, back down to the interface, and out to the sound module. So when I play the keyboard, I should be hearing it sound on the keyboard as well as on the sound module, and we shouldn't be hearing any delays or variations in that delay.
[Music]
That sounds pretty good too, I think; I don't think the latencies are excessive or anything. I've got a MIDI file I really like here; I'm going to play a little bit of it.
[Music]
So that seemed to play pretty well, and that's a pretty big MIDI file. I've got one more little MIDI file I'd like to play for you; this time I'm also going to play along with it a little bit, using MIDI through the computer. I'm not showing you what I'm actually doing here, because it's ugly: I'm just running terminal-based programs on OS X. Let's see if I can remember what to type.
[Music]
I hope that shows you that MIDI is real on Mac OS X. So the next thing you're probably wondering is, how can I start to make my applications work with MIDI on OS X? The MIDI services are not part of Developer Preview 4; they've just been coming together in the last couple of weeks, but we are just about ready to start seeding. So please write to Dan Brown, who's here in the front row, and we've given you an easy-to-remember email address: audio@apple.com. We are still holding out the possibility of tweaking the APIs a little bit based on your feedback and our own release process, but we're basically in a mode of optimizing, stabilizing, and getting ready to release it as part of the Mac OS X public beta this summer, and I want to remind you that it is open source. So I hope I've given you a good introduction to MIDI on Mac OS X, and I'm really looking forward to seeing your applications use it. Thank you.
[Applause]
Thanks, Doug. I'd like to bring Chris Rogers out now. Chris has been working with Apple for about a year, has been doing quite a lot of work on the QuickTime music architecture, and is also going to be discussing some of the high-level audio services that we're providing to application developers in general. Good afternoon. As Bill said, my name is Chris Rogers, and I'm happy to be here to talk to you today about the music services available on OS X. The topics that we'll be covering today are: a new synthesizer replacing the current QuickTime Music Architecture synthesizer, which is a DLS software synthesizer, and we'll be discussing that in some detail; the Audio Unit and Music Device component architecture, and how to actually hook these guys together in different configurations, and what that means will become clear later on; the sequencing services; and the downloadable sounds toolbox. So let's get on with it.
OK, so where do the music services fit in with the rest of the Core Audio system? The music services are higher-level services that sit both on top of the MIDI server, the rest of the MIDI services that Doug has presented, and also on top of the audio I/O devices, and Jeff Moore will be discussing those in great detail in a later talk on Friday. I really encourage you to go to that; that would be the Core Audio: Multi-channel and Beyond presentation at two o'clock on Friday. Both of those systems actually sit on top of I/O Kit, and if you're interested in how to implement audio drivers, there will also be a talk on Friday about audio family I/O Kit drivers.
So the music services are available to all clients, and the MIDI server and the audio I/O devices are also directly accessible by the client. Depending on the level of access that you require, for very low-level control you may just want to go right down to the MIDI server and I/O devices, or for higher-level control you can talk to the music services. We're dedicated to supporting open standards. That includes MIDI, of course; standard MIDI files; RMID files, which are standard MIDI files with a DLS section in the file; DLS, which stands for downloadable sounds, a sample bank format where people can include their own custom samples and sound effects, high-quality 16-bit stereo samples if you want; and we're also incorporating some of the ideas included in MPEG-4 Structured Audio. We're not going as far as implementing SAOL or anything like that, for those who know what MPEG-4 is about, but we've incorporated some of the better ideas in MPEG-4.
The DLS SoftSynth is a software synthesizer that's been completely rewritten from scratch to replace the synthesizer currently in QTMA. Among other things, it's got a much better reverb, several different types, that basically just sound smoother, and what's even better is that the reverb isn't hard-coded into the synth: it's implemented in a modular way, so that third parties can plug in their own reverbs and other effects, and we'll see how that works in a little bit. What else is in the SoftSynth? It has much tighter scheduling of notes, so that you don't get the kind of slop that you might have seen on other synthesizers; scheduling is sample-accurate. It is a downloadable-sounds synth, and it allows for easy importation of high-quality third-party sample banks, so you don't necessarily have to be stuck with a cheap 8-bit sound set; you can load in not only General MIDI sound sets but arbitrary sample banks for your own custom music. It's a very general sample-based synthesizer, following pure exponential behavior as per the downloadable sounds specification. There's a two-pole resonant filter in there, and unlimited key ranges, velocity ranges, and layers; the layers let you actually stack multiple samples when you hit the same key, and you can have individual panning and modulation parameters on each one, so you can get nice fat, rich pads that way. And DLS also provides for much more flexible modulation routing possibilities than the old QTMA synth.
OK, now we're going to talk about Audio Units. What is an Audio Unit? We're going to see in the next couple of slides what that really means. At its most abstract level, it's kind of a box that deals with audio in some way: it takes in N audio streams and outputs M audio streams, and the number of inputs and outputs can be variable. In fact, you may have an Audio Unit that has no input or no output, and in at least one case there may be an Audio Unit that has no input and no output. You may think, what would that be? That could be an Audio Unit that represents an external MIDI device, and we'll see how that can wrap up a MIDI endpoint through the MIDI services that Doug spoke about earlier. There are other types of Audio Units that have no inputs: an Audio Unit representing a hardware input device would only have outputs, and vice versa, a hardware output device would have only inputs. And DSP processors would typically have both inputs and outputs, for processing audio.
Some examples of DSP processing modules would be reverb, chorus, delay, ring modulator, parametric EQ. I'd also put a stereo mixer in there, although it doesn't really belong in that category: a stereo mixer would be an Audio Unit that takes in multiple inputs and mixes them, according to volume and pan information, down to a stereo output. Another type of Audio Unit would be format converters, for sample rate conversion, bit depth conversion, this type of thing, and also codecs, like MP3 encoders and decoders. Another type of Audio Unit is one which, at a high level, abstracts the notion of a hardware input device; that would be layered on top of the audio I/O device APIs that Jeff will be talking about on Friday. Another audio source would be a software synthesizer, and this is an Audio Unit called a Music Device, which supports some additional APIs over the basic Audio Unit. An audio destination is another type of Audio Unit, which could, at a high level, abstract the notion of a hardware output device; that would be implemented in terms of the low-level audio I/O APIs. And also a file: you could have an Audio Unit which just writes its output directly to a file. So this just gives you a flavor of the range of behavior that an Audio Unit can exhibit.
OK, so these individual Audio Units are kind of interesting on their own, but they get even more interesting when you're able to hook them together in arbitrary configurations. It was our goal to have an architecture that lets developers connect these modules up in arbitrary ways, not just in linear chains like the current Sound Manager can do, and not just some kind of monolithic mixer architecture with fixed sends and returns; we were going for a fully modular approach, where these Audio Units can be connected in pretty sophisticated ways to create all kinds of different, interesting high-level software. The connections between these Audio Units are represented by an AUGraph object, and there's a whole set of APIs for dealing with an AUGraph. I'm not going to go over too many specific APIs in my talk, because there are so many of them and I'm covering so many different topics, but the AUGraph, in essence, represents a set of Audio Units and their connections. Like I said, there's a simple API to create them and connect them together, and there are APIs for actually persisting the state of the graph, so that you can save the state to a file or to memory and then reconstruct the graph based on that.
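For a feel of what building such a graph looks like, here is a sketch using the AUGraph API as it exists in the AudioToolbox framework that later shipped; the component description types and constants shown postdate this talk and are only meant to illustrate the idea of adding nodes and connecting them.

```c
#include <AudioToolbox/AudioToolbox.h>

// Build a tiny graph: DLS synth -> default output device.
static AUGraph BuildSynthGraph(void)
{
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription synthDesc = {
        kAudioUnitType_MusicDevice, kAudioUnitSubType_DLSSynth,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outDesc = {
        kAudioUnitType_Output, kAudioUnitSubType_DefaultOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode synthNode, outNode;
    AUGraphAddNode(graph, &synthDesc, &synthNode);
    AUGraphAddNode(graph, &outDesc, &outNode);

    // Connect synth output 0 to the output unit's input 0.
    AUGraphConnectNodeInput(graph, synthNode, 0, outNode, 0);

    AUGraphOpen(graph);
    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}
```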
Let's look a little bit at the client API of the Audio Unit. Audio Units have properties, and you can get at those properties with the Audio Unit get-property and set-property calls. Properties are keyed by ID, and an ID is really just an integer, that's all it is. Some of the IDs are predefined, and others can be defined by particular implementers of Audio Units; third parties can define their own custom properties. Some examples of properties would be a name, or the number of inputs: if the client is interested in how many inputs there are and what kind of data format those inputs take, the client would call get-property with the appropriate ID. Data is passed as a void pointer and a length, so arbitrary data can be passed back and forth between the client and the Audio Unit, and third parties can pass custom data back and forth in this way.
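As an illustration of a property query, here is a sketch using the AudioUnit API as it later shipped; the scope and element arguments shown here postdate this talk, which describes keying properties by ID alone.

```c
#include <AudioToolbox/AudioToolbox.h>

// Ask an Audio Unit for the data format of its first output stream.
static void QueryStreamFormat(AudioUnit unit)
{
    AudioStreamBasicDescription format;
    UInt32 size = sizeof(format);
    AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 0, &format, &size);
    // format.mSampleRate, format.mChannelsPerFrame, etc. are now filled in.
}
```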
Now, real-time parameters. Often it's interesting in a real-time system to make parameter changes in real time, where these parameter changes occur at very specific times, and the Audio Unit get-parameter and set-parameter calls are the APIs that are used for this. The parameters are also keyed by ID, and they are 32-bit floating-point values. Unlike the MIDI continuous controllers, which are limited to a resolution of 7 bits, which is fine for some things but very insufficient for other types of parameters, like pitch and filter frequency, our values are 32-bit floating point; we feel that's enough resolution to cover almost any need. The set-parameter call includes a timestamp for when that parameter change should take effect, so the client is able to very accurately schedule very precise parameter changes. Some examples of parameters are channel gain and pan for a stereo mixer, filter cutoff frequency, or for that matter resonance, in a low-pass filter, and delay time for a chorus or delay effect; there are many others.
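A sketch of a scheduled parameter change, using the AudioUnitSetParameter call and low-pass filter constant from the framework that later shipped; in that shipping API the last argument is an offset in sample frames into the next rendered buffer, whereas the talk describes passing a timestamp. The filterUnit here is assumed to be an instance of a low-pass filter Audio Unit.

```c
#include <AudioToolbox/AudioToolbox.h>

// Sweep a low-pass filter's cutoff to 880 Hz at the start of the next buffer.
static void SetCutoff(AudioUnit filterUnit)
{
    AudioUnitSetParameter(filterUnit, kLowPassParam_CutoffFrequency,
                          kAudioUnitScope_Global, 0,
                          880.0f,   // 32-bit float parameter value
                          0);       // sample-frame offset into the next buffer
}
```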
One of the most important things you want to do with these Audio Units is actually get access to the rendered audio coming out of one of the audio output streams, and the AudioUnitRender call is used for this. The client passes in timestamp information for when the audio buffer is to be presented in the audio stream. The audio is also rendered for a specific output, so if a unit has, say, four different outputs, AudioUnitRender would be called four times, once for each output. And in the internal implementation of an Audio Unit, in order to do signal processing, like, say, a low-pass filter, how does the Audio Unit actually read its input in order to do the processing and then pass the result back to the client who calls AudioUnitRender? Well, the Audio Unit actually reads its input by calling AudioUnitRender on another Audio Unit, the one which provides its input, and it knows which one that is, which is its source, because the connection has been established ahead of time.
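A sketch of that pull model from the client's side, using the AudioUnitRender signature from the shipping framework (the preview call described here may have taken slightly different arguments):

```c
#include <AudioToolbox/AudioToolbox.h>

// Pull one buffer of rendered audio from an Audio Unit's output bus 0.
static void PullAudio(AudioUnit unit, AudioBufferList *buffers,
                      const AudioTimeStamp *when, UInt32 frames)
{
    AudioUnitRenderActionFlags flags = 0;
    AudioUnitRender(unit, &flags, when, 0 /* output bus */, frames, buffers);
    // Internally, a DSP unit satisfies this call by pulling its own input
    // the same way, from whichever unit is connected upstream.
}
```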
OK, let me talk about the two-phase scheduled render model. At the bottom of this diagram you'll see there's a timeline; time is progressing from left to right, and this represents an audio stream for a particular output of an Audio Unit. We see that the audio stream is divided, in this diagram, into three different time slices, but conceptually you can imagine an audio stream being divided up into many small time slices for which processing occurs. So, first of all, for each time slice, events are scheduled; very specific timestamp information is provided for these events. And secondly, the audio is rendered for each of the outputs. So for instance, in the first phase, if this is a software synthesizer, all note events which apply to the given time slice are scheduled, and then the audio is rendered.
The Music Device is actually an Audio Unit which extends the Audio Unit APIs with additional APIs that are specific to synthesis, and the Music Device also replaces the Note Allocator and Music Component that currently exist in the QuickTime Music Architecture. What kind of additional APIs does the Music Device support? Mainly, the APIs center around scheduling notes: when notes start, when notes stop. The first protocol that's used is just the MIDI protocol, which everybody's familiar with, and all Music Devices would be expected to support this protocol. The second protocol is an extended protocol which allows for variable-argument note instantiation. What does that mean? Really, in the MIDI protocol there are only two bits of information provided for a note-on event: a note number and a velocity, so which note on the keyboard it is and how hard you hit it. That may be insufficient for certain types of more complex instruments, and there may be certain interesting applications where more information could be provided for a note instantiation, for instance where to position a note in 3D space, so additional information can be provided in this variable-argument note instantiation. Another example would be a physical modeling synthesizer of a drum. For anybody who's actually played a hand drum, they know how subtle changes in the position, how hard you hit it, how flat your palm is, or whether you hit it with the tips of your fingers, make very subtle changes in the resonances that come out of the drum; the sound completely changes in the character of the tone. So this type of information could be passed in the variable-argument note instantiation. This extended protocol also supports more than 16 MIDI channels and more than 128 controller messages.
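For illustration, here are the two ways of addressing a Music Device, sketched with the calls from the AudioToolbox headers that grew out of the API described here; the instrument ID of 0 and the quarter-tone pitch are just illustrative values.

```c
#include <AudioToolbox/AudioToolbox.h>

static void PlaySomething(MusicDeviceComponent synth)
{
    // 1. Plain MIDI protocol: note-on, channel 1, middle C, velocity 100,
    //    applied at the start of the next rendered buffer.
    MusicDeviceMIDIEvent(synth, 0x90, 60, 100, 0);

    // 2. Extended protocol: fractional pitch and velocity, plus room for
    //    extra per-note arguments beyond what MIDI can express.
    NoteInstanceID note;
    MusicDeviceNoteParams params;
    params.argCount  = 2;       // pitch and velocity only
    params.mPitch    = 60.25f;  // a quarter-tone above middle C
    params.mVelocity = 100.0f;
    MusicDeviceStartNote(synth, 0 /* instrument ID; device-specific */,
                         0 /* group */, &note, 0, &params);
}
```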
Music sequencing services. We're moving on to a different topic here; this represents a whole other set of APIs which, once again, I'm not able to go through in detail, because there are so many of them and I have limited time to talk. Essentially, this is a set of APIs for constructing and editing multi-track sequences, whether they're MIDI sequences or sequences using this extended protocol. There's also a runtime for the real-time playback of these sequences, otherwise known as a sequencer. The events themselves can be, like I said, MIDI events, they can be in the extended format, and they can also be user events, which have user-defined data in them, and it's up to the developer to decide how to use those. The events actually address Audio Units and Music Devices, and external MIDI gear through MIDI endpoints, indirectly through a Music Device encapsulation.
So what can we do with these sequences? I've already shown, previously, a slide where there are these Audio Units connected together in arbitrary configurations, and here we have three Audio Units, with units one and two feeding into the third one. Now, to the side here, we see a sequence that has three tracks: events from track one are addressing Audio Unit one, track two is addressing number two, and track three is addressing Audio Unit number three. The thick yellow arrows represent the flow of audio through the system, and these blue lines represent control information, scheduling information, being supplied to the Audio Units. If you remember back to the render/schedule diagram that I had a few slides earlier, you'll see that the sequence is actually providing the schedule part of this, and the render part is actually being pulled through by the Audio Units themselves.
Here are some features of the sequencing services. There's basic cut, copy, paste, merge, and replace, as you would expect in a sequencer application; once again, this does not supply any user interface, this is just the low-level engine which will perform this editing, so you just slap a UI on top of this and you're ready to go. Oh yeah, also, these edits can be done live while the sequence is playing, so there's no difficulty there. Each track in the sequence can have attributes: mute, solo, and looping attributes, so you can have a track which is actually looping over and over again on the same events, and the loop time is of course completely configurable. The sequencing services could be used as a core sequencing engine for a sequencing application; one of the most difficult things in a sequencing application is to write the sequencing engine, and not that writing user interface code is easy, but at least this much of the work is done, so this is an opportunity for developers to leverage our technology here.
The scheduling uses units of beats, in floating-point format; there's an implicit tempo map in the sequence, and the sequence, in persistent form, can be saved either to standard MIDI file format or to a new data format in QuickTime which we're in the process of defining, and we welcome your input there as well.
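To make the beat-based scheduling concrete, here is a minimal sketch built on the sequencing calls as they appear in the AudioToolbox framework that eventually shipped; it creates a one-track sequence with a single quarter-note and starts playback.

```c
#include <AudioToolbox/AudioToolbox.h>

// Build a one-note sequence and play it.
static void PlayTinySequence(void)
{
    MusicSequence seq;
    MusicTrack track;
    NewMusicSequence(&seq);
    MusicSequenceNewTrack(seq, &track);

    // Beat-based scheduling: a one-beat middle C starting at beat 0.
    MIDINoteMessage note = { 0 /* channel */, 60, 100, 0, 1.0f /* beats */ };
    MusicTrackNewMIDINoteEvent(track, 0.0 /* start beat */, &note);

    MusicPlayer player;
    NewMusicPlayer(&player);
    MusicPlayerSetSequence(player, seq);
    MusicPlayerStart(player);
    // ... later: MusicPlayerStop, DisposeMusicPlayer, DisposeMusicSequence
}
```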
OK, now I'm going to show you a demonstration using the sequencing services. It's actually a simple little C program I wrote; it's just one or two pages long, and it's basically just calling into these sequencing APIs. It's not meant to be a musical composition, just kind of a basic run-through of what the sequencer can do. What we're going to hear is a cycling through of General MIDI percussion, and after a while you'll hear a resonant filter come in, with the filter sweeping back and forth, and on top of that you'll hear an AM module, an amplitude modulator. I implemented this by actually connecting the DLS SoftSynth, which is an Audio Unit, to a resonant filter, and following the resonant filter I put an amplitude modulator Audio Unit, and then I created a sequence that actually addresses parameter changes in real time, sweeping the frequency of the filter and changing the modulation of the amplitude modulator. I realize it's kind of echoey in this place, so please bear with me; I hope you can get the gist of what I'm doing.
[Music]
Once again, this is just a simple, simple example: a C program, one or two pages of code, something I slapped together pretty quickly, and I didn't have any user interface available to me to author anything more interesting. But you can imagine, if you had more complicated setups of Audio Units, representing a number of different kinds of signal processing, reverbs and delays and so on, you could get a lot more interesting set of effects, so we've got to kind of wait until we have more of a library of Audio Units built up. Let's see, where did I leave my little remote?
OK, now we're going to move on to talking about the downloadable sounds toolbox. Once again, this represents a whole set of APIs which I don't have time to go over individually, but I can talk about it at a broad level. Downloadable Sounds is both a sample bank data format and a sample-based synth model. The toolbox provides for reading and writing DLS Level 2 files and creating arbitrary DLS instruments; it could be used as a foundation for a really nice custom instrument editor application, so that users can drag in their own samples, apply envelopes and LFOs and panning and layering and so on, so this is a really good opportunity for third parties. This format also replaces the QuickTime Music Architecture's atomic instruments format.
At the top level, the DLS toolbox uses a number of objects in its APIs. The DLS collection is at the top: the collection references a number of instruments, each instrument references a number of regions, and the collection also references the wave data, as DLS wave objects. A DLS collection contains a set of instruments, as I said, it references the wave data, and it also includes text-based information, which could include copyright information, the name of the collection, the author, comments, any kind of tagged text that the user wants to put in there. An instrument is assigned to a particular MIDI bank and program number, and contains a set of regions and also some articulation parameters, low-frequency oscillators, envelopes, et cetera; it also contains text information, like the collection does. A DLS region actually references the sample data that's going to be played, contains loop points, and defines where in the key range, where on the keyboard, the sample will play and in what velocity range; and like I said, these regions can be stacked for layering. Regions can also contain articulation information, like envelopes and LFOs, which would override those found in the instrument, and text information. The DLS wave object contains the actual sample data and the sample format for that data. And the DLS articulation object is what actually contains the LFOs, the envelopes, the reverb send level, and all the other modulation information, panning and so on; these objects can be attached to the DLS regions or the DLS instruments, as we saw. There's a simple set of APIs for accessing and setting the relevant information in each one of these objects and connecting them together, and it's really a lot easier to use this API than to try to create a DLS collection by doing low-level byte munging.
On backwards compatibility, I want to say that we are supporting the old QuickTime Music Architecture components; they have been reimplemented. The Note Allocator and the Music Component, the old software synth, have been reimplemented on top of all of this new technology, but we are deprecating the APIs for these components. They continue to work, but we are really encouraging developers to move over to the new Audio Unit and Music Device APIs and the sequencing services API. So I want to wrap up my presentation here, and I guess we can move on to the Q&A session with Bill. Thanks for giving me your time, and I hope you all find a use for the music services.
Just before we actually get started on the Q&A, I'd like to say that the synthesizer that's in QuickTime on the OS X discs that you have, on DP4, is actually the new synthesizer. We haven't publicized the APIs yet; we're still actually going through review stages for those APIs. The sample set that's on the DP4 CD is the same sample set the engine currently uses, but the actual synthesis engine itself is the new Music Device component that Chris has been talking about today, and that will be available for seeding a little bit later, as Doug discussed in his talk as well: if you send email to audio@apple.com, you'll be able to get access to the seeding information we'll be setting up.