WWDC2004 Session 205

Transcript

Kind: captions
Language: en
Good afternoon, everyone. Welcome to Session 205, Audio Hardware and MIDI. Please welcome Core Audio MIDI plumber Doug Wyatt.

Thank you, good afternoon. This session is in three parts. In the first part I'll be going over a few things about the Core MIDI framework; in the second part Jeff Moore will cover what's new in Tiger for the Core Audio framework; and in the last portion of this session Nick Thompson will be covering some audio hardware and driver issues.
For the Core MIDI portion of this talk, I'll be covering very briefly some basics of Core MIDI, some best-practices issues I'd like to cover, and a new API for Tiger called the Core Audio clock.

Most of you who are working with MIDI already know what it's about, but for those of you who don't, good resources are the MIDI Manufacturers Association at midi.org, and there are some good books out there about MIDI. Our APIs are documented in the headers, and there's some documentation and examples in the developer directory. We've got a very active mailing list, and this is probably about the fifth WWDC at which we've been talking about Core MIDI; there are DVDs of the previous years' sessions.
In the area of best practices with the existing Core MIDI API, there are just two things I want to talk about: one falls into the area of user experience, and the other is a performance issue with timing accuracy. The first thing I've seen in some applications is difficulty in giving the user a good experience in seeing the names of his devices.
Here we've got a moderate-sized studio's worth of devices, as seen in the Audio MIDI Setup application; there are devices connected to about five of the eight ports. What I've seen is that some applications will just show the name of the MIDI port, where the user would really like to see the name of the external device. I realized after I made the slide that this is sort of a confusing example, because there are actually two different devices on one port — so for output I'd want to see the Repeater, and for input I'd want to see the Radium: the names of the external devices.

I just want to quickly show you a little application that illustrates a couple of approaches to displaying MIDI endpoints in your application. Here — and the setup I have here doesn't actually have any examples of multiple devices on one port — we see the names of the ports where there aren't external devices, and we see the names of the external devices where there are some. These menus don't have the names of the ports on them at all; this may be a little draconian, since it forces your users to go through Audio MIDI Setup, but some applications do that, and it's OK. This also illustrates that you can go through and obtain pairs of ports: here we're only seeing the devices to which I have a two-way connection. Notice that I have the Radium as a source and the Repeater as a destination, but it doesn't appear here — that's probably a bug — but the idea is to only show the devices to which there is a two-way connection. OK, back to the slides, please.
What this quick-and-dirty program does — it's using some sample code from our SDK — but just to go over the process: you can iterate through the sources and destinations in the system with MIDIGetSource and MIDIGetDestination. Once you've found a source or destination endpoint, you can find out what's connected to it with the kMIDIPropertyConnectionUniqueID property; then you can find the object that is connected with MIDIObjectFindByUniqueID, and then you can ask that object for its name. It's probably best if you use the C++ class in our SDK, CAMIDIEndpoints — and if there are bugs in it like in my demo, I don't know whether that's in my demo or in the SDK code — but in any case that's a really good place to start. It'll show you the sequence of calls, there are a few strange cases to deal with, and it'll give your users the names that they expect to see.
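To make that sequence concrete, here's a minimal sketch in C — error handling trimmed, and note that kMIDIPropertyConnectionUniqueID can hold more than one unique ID (one of those strange cases the SDK's CAMIDIEndpoints class deals with); this simple integer read only handles the single-connection case:

```c
#include <CoreMIDI/CoreMIDI.h>

// Sketch: get a user-friendly display name for each destination.
// If an external device is connected to the endpoint, prefer its name.
static void PrintDestinationNames(void)
{
    ItemCount n = MIDIGetNumberOfDestinations();
    for (ItemCount i = 0; i < n; ++i) {
        MIDIEndpointRef endpoint = MIDIGetDestination(i);
        CFStringRef name = NULL;

        // Ask what external device (if any) is connected to this endpoint.
        SInt32 connectedUID = 0;
        if (MIDIObjectGetIntegerProperty(endpoint,
                kMIDIPropertyConnectionUniqueID, &connectedUID) == noErr
            && connectedUID != 0) {
            MIDIObjectRef connectedObject;
            MIDIObjectType connectedType;
            if (MIDIObjectFindByUniqueID(connectedUID,
                    &connectedObject, &connectedType) == noErr)
                MIDIObjectGetStringProperty(connectedObject,
                    kMIDIPropertyName, &name);
        }
        // Fall back to the port's own name when nothing is connected.
        if (name == NULL)
            MIDIObjectGetStringProperty(endpoint, kMIDIPropertyName, &name);

        if (name != NULL) {
            CFShow(name);
            CFRelease(name);
        }
    }
}
```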
The other best-practices issue I'd like to cover is MIDI timestamps. I've noticed that there are some applications that send all their outgoing MIDI with a timestamp of "now," which means that by definition it's going to be late by the time it gets to the hardware — not very late, necessarily, but there are some good reasons not to do that. One, which I've mentioned here on the slide, is that there's an Internet Engineering Task Force proposal in the works for doing MIDI over IP, and with networked MIDI the timestamps are going to become really important, because we're going to see more jitter and latency than we would on a normal local MIDI network. The other is that we're starting to see applications, or contexts, where people want to use multiple applications on the same computer and synchronize them together. You can send MIDI time code or MIDI beat clock very efficiently between applications using the IAC driver, which was new in Panther, and if those events are timestamped, the applications can achieve really good synchronization between them. If they're not, then application A may be sending its events stamped "now," and there's going to be a little bit of propagation time until they get to application B — maybe only a couple hundred microseconds or something — but that's not going to provide totally accurate sync. So there's no reason not to be using timestamps when you schedule, and paying attention to timestamps when you record.

Now, there are a few applications that might want to do MIDI thru in real time and say, "Well, I just need to send everything as soon as I get it," and that's OK. I would just suggest that you measure your performance, and if you see that you're getting more jitter than you like, you can add a little bit of latency — say, "OK, play this two milliseconds from when it came in," for example — and that should smooth out most of the jitter that you'll see.
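As a rough illustration — a minimal sketch assuming you've already created your client, output port, and destination — scheduling an event two milliseconds ahead instead of stamping it "now" looks roughly like this:

```c
#include <CoreMIDI/CoreMIDI.h>
#include <CoreAudio/HostTime.h>

// Sketch: send a note-on scheduled 2 ms in the future instead of "now",
// so the MIDI server can deliver it with accurate timing.
static void SendNoteOn(MIDIPortRef outPort, MIDIEndpointRef dest)
{
    const Byte noteOn[3] = { 0x90, 60, 100 };   // note on, middle C

    // 2 ms of scheduling latency smooths out most thru jitter.
    MIDITimeStamp when = AudioGetCurrentHostTime()
                       + AudioConvertNanosToHostTime(2 * 1000 * 1000);

    Byte packetBuffer[128];
    MIDIPacketList *pktlist = (MIDIPacketList *)packetBuffer;
    MIDIPacket *pkt = MIDIPacketListInit(pktlist);
    pkt = MIDIPacketListAdd(pktlist, sizeof(packetBuffer), pkt,
                            when, sizeof(noteOn), noteOn);
    if (pkt != NULL)
        MIDISend(outPort, dest, pktlist);
}
```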
One major area of new features in Tiger for us is the API set called the Core Audio clock. It's actually in the Audio Toolbox framework, but it touches on MIDI in a lot of places, so it's being discussed in this session. If your app has any kind of synchronization needs — especially involving MIDI time code and MIDI beat clock, whether it's coming from an external source or you want to sync to another application — this API will provide a lot of the grungy code for dealing with those MIDI timing formats. It also does some other time conversions and interpolations between various formats, as we'll see in a minute. And if your application is already using the MusicPlayer APIs in the Audio Toolbox — this hasn't happened yet, but those will be put on top of this: there will be a clock object embedded in the music player — so if you're using the music player, you'll get the ability to send and receive MIDI time code for free, modulo whatever user interface you need to put on top of it.

So what the clock does: it manages synchronizing your application between audio and MIDI time code or MIDI beat clock. It has an extensible internal architecture, and at some point we may add other synchronization sources. And it does all the grungy math of dealing with SMPTE time formats, synchronizing with an audio device's time base in samples, how that relates to seconds along your application's media timeline, and — if your application has a concept of musical beat time — converting between seconds, beats, and those other formats.
This just illustrates what's going on under the hood in the clock. In the top blue line we have your application's media timeline — zero being the beginning of the piece, for example — and in the green boxes we have the hardware reference timeline. What the clock is doing is maintaining a series of correlation points between those two timelines. There's this idea of a start time: if your application is starting playback itself, the user might decide to start 40 seconds into the piece, so you set the start time, and that's where playback begins. If you're in an external sync mode, the start time is, for example, the timestamp of the first MIDI time code message that was received — the point at which sync was achieved and time begins moving. Then, as time continues to move, the clock object continues to take new anchor points referencing media and hardware times, and performs all subsequent time correlations and conversions using those anchor points.

Here we see diagrammatically how the different time formats relate to each other. On the left, in green, we have the hardware time references: the host time base, which everything in the Core Audio APIs and Core MIDI is based on, and the audio device's sample time — the HAL does the correlation between the host and audio times. On the right we have different ways of expressing time along the timeline, in the blue boxes. Seconds is the main way of describing these times, as a floating-point number. From seconds we can convert to beats, if you supply a tempo map, and we can apply a SMPTE offset to get to a SMPTE time in seconds. The gray boxes on the right illustrate that there are some auxiliary APIs in the clock for converting between beats and a textual display representation of beats, and similarly with SMPTE: SMPTE seconds can be floating point, and we're actually seeing an example here of a SMPTE time that goes out to 80 subframes per frame — or however many you want, actually. The circle in the middle indicates how the clock correlates between the hardware and timeline times, using variable play rates that you can set: you can say, for example, play twice as fast.
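To tie those concepts together, here's a minimal sketch of driving a Core Audio clock. Since this API is new in Tiger, the exact call names, constants, and CAClockTime fields below are as I recall them from AudioToolbox/CoreAudioClock.h and should be double-checked against the header:

```c
#include <AudioToolbox/CoreAudioClock.h>

// Sketch: create a Core Audio clock, start it, and read the current
// time back in two formats. Names per CoreAudioClock.h (verify).
static void ClockExample(void)
{
    CAClockRef clock;
    if (CAClockNew(0, &clock) != noErr)
        return;

    // Begin the timeline 40 seconds into the piece, for example.
    CAClockTime startTime;
    startTime.format       = kCAClockTimeFormat_Seconds;
    startTime.time.seconds = 40.0;
    CAClockSetCurrentTime(clock, &startTime);

    CAClockStart(clock);

    // Later: ask the clock where we are, in whatever format we like.
    CAClockTime now;
    CAClockGetCurrentTime(clock, kCAClockTimeFormat_Seconds, &now);
    CAClockGetCurrentTime(clock, kCAClockTimeFormat_Beats, &now);

    CAClockStop(clock);
    CAClockDispose(clock);
}
```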
OK, so with those concepts, I'd like to give you a quick look at an application that uses the Core Audio clock. This document shows pretty much everything that's in a clock object — except that what's below this line here is an audio file player, and I'll show you that in a minute. If I click Go, time starts moving: I've got a SMPTE time over here on the left and a bar-beat time on the right. If I change the tempo, you can see suddenly we're at a different bar-beat time and moving twice as quickly. And I can create a second clock object: I'll have this one send MIDI time code to the IAC bus, I'll have this one receive MIDI time code from the IAC bus, and I'll start them, and they're in sync.

OK, let's go have a little more fun here. We'll have this one play an audio file, and this next one too, so I can vary the speed of this clock and this one's following along. So when I got all that running, I thought, that's pretty cool — what happens if I have two of these guys playing audio files together? So now these are both playing the same file, synced together with MIDI time code being sent over the IAC bus. And how close together do these clocks really stay? I'll just turn down the volume here and go to my favorite portion of the song.

That's a little confusing, because the reverb stays down there — I'm going to get near the hi-hat over on the left. In any case, as far as I can measure, these two players, despite being yanked around at various speeds, are within one or two frames of each other. So that's the Core Audio clock.
Next, Jeff Moore is going to talk about the new features in the Core Audio HAL.

So today, as Doug said, I'm going to talk about a couple of new features in the Core Audio HAL. The first new feature is the new I/O data formats supported by the HAL, including variable-bit-rate I/O and also non-audio data — sideband data such as time code and control data and other things that aren't actual audio samples. The other new feature I'm going to talk a little bit about today is aggregate device support.
Currently, when you're doing I/O with the HAL, in your I/O proc you're always going to be moving the same number of bytes: either you're going to be getting X bytes of input, or you're going to be providing Y bytes of output. Further, the mixability of the streams is always controlled at the device level, meaning that if you want to switch to a non-mixable format, you have to tell the device to switch to a non-mixable format, and all the streams are switched that way. And currently we support linear PCM data and IEC 60958-compliant streams — that's S/PDIF and the related digital interfaces — which includes data formats like AC-3, MPEG-1, MPEG-2, and other things that can be smashed into a format that can be sent over that digital interface.
Now, in Tiger, we're adding the ability to move a varying number of bytes through your I/O proc each time it's called. This is important for formats such as AC-3, where the number of bytes per packet varies for each packet. You're going to be told the size through the AudioBufferList's mDataByteSize field, and you'll need to make sure you're always paying attention to that field and aren't just assuming it's constant anymore. On output, you have to make sure that you tell the HAL how much data you're supplying. You can see in this code example a very simple I/O proc that's doing exactly that: it's iterating through all the output AudioBuffers in the provided AudioBufferList, stuffing some VBR data into each one, and then telling the HAL how much data it's stuffing in there by assigning back to the mDataByteSize field of the AudioBuffer.
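The slide's code isn't reproduced in this transcript, but a minimal sketch of such an I/O proc might look like the following, where GetNextVBRPacket is a hypothetical stand-in for wherever your encoded packets actually come from:

```c
#include <CoreAudio/CoreAudio.h>

// Hypothetical source of encoded (e.g. AC-3) packets; not a real API.
extern UInt32 GetNextVBRPacket(void *dest, UInt32 maxBytes);

// Sketch of a VBR-aware output I/O proc: fill each output buffer and
// report how many bytes were actually written via mDataByteSize.
static OSStatus MyIOProc(AudioDeviceID inDevice,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData)
{
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; ++i) {
        AudioBuffer *buf = &outOutputData->mBuffers[i];

        // mDataByteSize arrives as the buffer's capacity...
        UInt32 written = GetNextVBRPacket(buf->mData, buf->mDataByteSize);

        // ...and must be assigned back to say how much we provided.
        buf->mDataByteSize = written;
    }
    return noErr;
}
```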
Now, the new I/O data formats basically revolve around other non-mixable formats, and with these you can have a mixable data stream side by side with a non-mixable stream. Consequently, you're going to have to deal with mixability at the stream level, as opposed to the device level. You can find out about the mixability in two ways: either through the mixability property, or via the format information provided by the HAL — in particular, kAudioFormatFlagIsNonMixable will be set in the mFormatFlags field for non-mixable formats.
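As a small sketch of that second approach, here's a check of a stream's current format flags using the long-standing stream-format property (the mixability property itself is the other route):

```c
#include <CoreAudio/CoreAudio.h>

// Sketch: decide whether a given stream is mixable by inspecting
// the format flags of its current format.
static Boolean StreamIsMixable(AudioStreamID stream)
{
    AudioStreamBasicDescription format;
    UInt32 size = sizeof(format);
    OSStatus err = AudioStreamGetProperty(stream, 0,
                       kAudioDevicePropertyStreamFormat, &size, &format);
    if (err != noErr)
        return true;  // assume mixable if we can't tell
    return (format.mFormatFlags & kAudioFormatFlagIsNonMixable) == 0;
}
```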
Variable-bit-rate encoded formats are also now going to be supported for input as well as output; previously they were supported for output only. This includes, as I said, raw AC-3 data as well as raw MPEG-1 and -2 and any other variable-bit-rate data formats out there. This can also be used to transport non-audio sideband data — for example, time code such as SMPTE coming into the hardware, or word clock time, or other forms of synchronization — and it's also good for real-time controller information for devices that support that.
Before I talk about aggregate devices, I want to talk a little bit about the problem that aggregate devices are there to solve. Basically, when you're syncing multiple devices, you have two problems to deal with: the different interpretation each device has of what the sample rate really is — also known as the clock drift problem — and the fact that each device has its own amount of presentation latency. You have to solve both of these problems if you want to do synchronized I/O across multiple devices.
To solve these problems, you can use hardware clock synchronization. That's where you run an actual physical cable between all the devices and share a clock signal among them. This can be done using digital audio interfaces like AES or S/PDIF or ADAT interfaces, but you can also use things like house sync, black burst, and other more high-end, video-oriented studio sync setups. Hardware sync provides the best solution for the clock drift problem, since it's actually synchronizing the hardware at the clock level, so that the samples are going to be within some very small amount of time of each other. But hardware clock synchronization doesn't solve the latency problem at all.

In addition to hardware, you can also do the resynchronization in software. Doing it in software is an order of magnitude more complicated than doing it in hardware, because your software needs to be able to judge how fast each device is running with respect to the others and then compensate accordingly — you can use various kinds of resampling techniques to do that — but you still need to compensate for the latency differences even when you're doing software sync.
Aggregate devices are the HAL's attempt to solve all these problems in a way that makes them useful for your application. An aggregate device gathers together any number of disparate audio devices on the system into a single cohesive unit for audio I/O, and it will perform synchronized I/O to all those sub-devices regardless of what their sync situation is — they can be hardware synchronized, they can be software synchronized, and the HAL will still be able to deal with it. To do this, of course, it solves the problems I was just talking about: it handles the different amounts of presentation latency, and it does the clock drift compensation.
The user can create aggregate devices that are global to the entire system in Audio MIDI Setup — I'll show you in a few minutes how that works. Applications can also create aggregate devices that are either global to the system or local just to that process, and that's done programmatically through API calls in the HAL.
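A minimal sketch of the programmatic route, creating a process-local ("private") aggregate: the kAudioAggregateDevice... dictionary keys and the AudioHardwareCreateAggregateDevice call are as I understand them from Tiger's AudioHardware.h, so verify the exact names there (the sub-device list is supplied similarly, under kAudioAggregateDeviceSubDeviceListKey):

```c
#include <CoreAudio/CoreAudio.h>

// Sketch: programmatically create an aggregate device that is local
// (private) to this process. Key names per AudioHardware.h (verify).
static AudioDeviceID CreatePrivateAggregate(void)
{
    CFMutableDictionaryRef desc = CFDictionaryCreateMutable(
        NULL, 0, &kCFTypeDictionaryKeyCallBacks,
        &kCFTypeDictionaryValueCallBacks);

    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceNameKey),
                         CFSTR("My Aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceUIDKey),
                         CFSTR("com.example.myaggregate"));

    // Mark it private so it is visible only to this process.
    int one = 1;
    CFNumberRef isPrivate = CFNumberCreate(NULL, kCFNumberIntType, &one);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceIsPrivateKey),
                         isPrivate);
    CFRelease(isPrivate);

    AudioDeviceID aggregate = kAudioDeviceUnknown;
    AudioHardwareCreateAggregateDevice(desc, &aggregate);
    CFRelease(desc);
    return aggregate;
}
```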
The HAL will also, on its own, create a global aggregate device for each IOAudioDevice that has multiple IOAudioEngines in it. For example, USB audio devices that have both input and output will now appear as a single unified whole. You can only aggregate devices that are implemented by IOAudio drivers — some of you may have heard me talk about why you need to write an IOAudio driver rather than a user-land driver, and this is one of the benefits you get by doing that: you get to play for free in the aggregate device world.
Further, all sub-devices in an aggregate device have to be at the same sample rate; the sub-devices can't be hogged by another process; and all their streams have to be mixable. When you're looking at an aggregate device and its sub-devices, the ordering of the sub-devices that you set up — either programmatically or through the Audio MIDI Setup UI — is important: it determines the ordering of the streams in the aggregate device that you see in your I/O proc. For instance, if you have two devices, device A and device B, in that order, device A's streams will come before device B's when you look at them in your I/O proc. Further, aggregate devices will retain knowledge about the sub-devices that they aggregate even if the devices aren't present, or have been deactivated because of some format conflict; missing devices, or devices that are in the wrong format, will automatically come back into being in the aggregate device when their situation is updated.
Each aggregate device has a master sub-device. The master sub-device defines the overall timeline for the entire aggregate device. This means that all the timestamps you see from the HAL — when you call AudioDeviceTranslateTime, or in your I/O proc — are going to be defined in the timeline of the master device. Further, all the timing numbers that go with the aggregate device are the ones reported for the master device; for instance, the latency and safety offset figures are those of the master device. And the final job the master device serves is to provide the frame of reference that the HAL uses to judge clock drift in the other sub-devices in the aggregate.
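In code, that looks like the usual AudioDeviceTranslateTime pattern — a small sketch; when the device is an aggregate, the sample time that comes back is on the master sub-device's timeline:

```c
#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>

// Sketch: convert "now" in host time into the aggregate device's
// sample time; the result is on the master sub-device's timeline.
static Float64 NowInSampleTime(AudioDeviceID aggregate)
{
    AudioTimeStamp inTime = { 0 };
    inTime.mHostTime = AudioGetCurrentHostTime();
    inTime.mFlags    = kAudioTimeStampHostTimeValid;

    AudioTimeStamp outTime = { 0 };
    outTime.mFlags = kAudioTimeStampSampleTimeValid;  // what we want back

    if (AudioDeviceTranslateTime(aggregate, &inTime, &outTime) != noErr)
        return 0.0;
    return outTime.mSampleTime;
}
```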
Now, when you're picking the master sub-device: obviously, if you have a hardware sync situation, you already have in hardware a notion of a master clock, and the device that corresponds to that clock should also be the master device in the aggregate. Barring that, you should always just look for and set the device that has the most stable clock. You can kind of guess at that by what transport type the device uses: PCI and FireWire devices, for instance, tend to have much more stable clocks than USB devices.
Now, to deal with the differing amounts of latency in the various sub-devices, the HAL has to go through, look at all the sub-devices, and figure out the maximum amount of latency there's going to be for each sub-device. Once it finds which sub-device has the most latency, it then pads out the latency of the other devices — by padding their safety offsets — so that they all match. Here you can see a little diagram showing three audio devices: device C has the largest combination of latency and safety offset, and you can see how device A and device B are getting padded out so that they all come out equal. It's really important to do this padding, because that's what ensures that all the devices start in synchronization with each other; without it, you'd be skewed all over the place and you'd never be able to achieve sync.
Once you've dealt with the latency, you also have to deal with the clock drift. Aggregate devices in the HAL will work regardless of what the clock domain situation is for each device: whether it's hardware synchronized or whether it needs to be synchronized in software, the HAL is game for doing it. For each sub-device you have in an aggregate, you can set independently what kind of clock drift compensation to use, and there are going to be three basic versions of it in Tiger. First, there's no compensation; that's the method you're going to use for hardware sync situations, because you don't have to do anything — it's already in sync. Then there's going to be a very low-CPU-overhead sample dropping/doubling algorithm, which is there to account for that one sample of drift over five hours that you're going to see; that takes very little CPU, but with the potential — if it has to run often to make the synchronization happen — of introducing some audio artifacts. And then the HAL also uses the same high-quality resampling algorithms that are in the AudioConverter to do the full-bandwidth, quality-really-matters style of resynchronization. And just so you guys know, the software synchronization is not in the seed that's out today; that will be coming, we hope, sometime in the future.
So now you know what aggregate devices do. There are a few things that they don't do: they don't provide controls — they don't do volume, mute, data source selection, and all the other little doodads you get on a regular audio device — and the reason for this is simple: aggregate devices are there to be an I/O abstraction; they're not meant to be kind of the system console, if you will. You should always go back to the original device to do the manipulation of volume, stream format, and other things like that. Aggregate devices also can't be hogged, which kind of plays into the fact that they can't be non-mixable either. And finally, an aggregate device cannot be any sort of default device, so if you're going to use aggregate devices, you have to provide the means for your users to select them for your engine. They cannot be set as the default mostly to shield applications that don't want the performance impact of running on an aggregate device from inadvertently getting one. So now I'm going to show you a little bit about how aggregate devices work.
So, in Tiger, in Audio MIDI Setup, there's a brand new dialog that allows you to configure the various aggregate devices. I'm here on a 15-inch PowerBook, and I've also brought with me a bunch of other kinds of devices, just to show you how it works. The first device I have here — let me show you this one first — is an Edirol UA-3 USB interface; it has input and output, and it's stereo. The reason this one's interesting is that it shows you what happens when the HAL creates an automatic aggregate device, since this device has one IOAudioDevice that has two IOAudioEngines. You plug that thing in, and up it pops. You can see in Audio MIDI Setup, by looking at the icon, what kind of device it is: the normal UA-3 is here, and you can see all its controls and stuff, and now we go to the aggregate UA-3. That kind of shields you from some of the implementation details, but those of you who have been fighting with this for a while know that when you look at the UA-3 in Audio MIDI Setup, you're really looking at two independent audio devices, and so you'd have to run two separate I/O procs just to do pass-through I/O, for instance. With the aggregate UA-3 USB device, you can now run one I/O proc and do pass-through, in-place processing, and all the things that would have required more complex management before. I've also brought an Echo Indigo PC Card, just to show you something a little more exotic — it's a little stereo two-channel device — and it pops up, and it's all good.
So now I'm going to make an aggregate device out of them. You open the aggregate device editor, and you start out with no user-made aggregate devices — the automatically generated aggregate devices don't appear in this dialog, because there's nothing you can really configure about them. You click on the plus button to make a new one, and you can give it a name — whoops — and then you go in, and you can see down below all the devices you have that you can add to the aggregate. I'm just going to click on all of them. Once you have them clicked, you can move them around, because the order is important, so you can drag them around; then you can use the clock radio button to select which is going to be the master device, and that's pretty much it. You can see that now that we've created this aggregate device, it shows up in the pop-up — the name isn't updated; that's a bug too — but there you can see that this aggregate device has four input channels and four output channels, and when you do I/O with it, you're going to be doing synchronized I/O across all the devices that are in the aggregate. And that's pretty much all there is to it. Next up — and we can go back to the slides — I'm going to be bringing up Nick Thompson, and he's going to talk about driver development on OS X.
Aggregate devices are really cool. So I want to talk about developing audio hardware devices for Mac OS X. We'll look at the kinds of devices you might build around the built-in hardware across the span of Macintoshes, and I want to look at how you can expand on the built-in capabilities that you find in every Macintosh computer by using high-speed serial interfaces. You'll have noticed that music and audio have become very important to Apple, and what we're talking about in this part of the session is how to get audio in and out of your computer.
The key thing to keep in mind here is that when you develop a product, you want to reach as many people as you can — and USB and FireWire, if you're developing an expansion product, are in every Macintosh computer we ship today. There are also good opportunities for using the built-in analog connections, with input and output, for audio peripherals. When you look in Darwin at the driver stack, you'll see a bunch of things; the two things you really want to look at are AppleOnboardAudio and AppleUSBAudio. AppleUSBAudio gives you kind of a basic template for how to write a USB driver. The onboard audio is a little more complex, because there's a whole bunch of stuff we have to do — when you're looking at the source code you'll actually see a whole bunch of plug-ins. So if you want a template audio driver, I recommend AppleUSBAudio, or the examples in the Core Audio SDK: the PhantomAudioDriver is the place to start.
As for the technology available: basically, you can use the stuff that's built in, which is usually analog — we also supply digital output on the Power Macintosh G5 — but you're limited to two channels. If you want to go multi-channel, you're going to need some kind of audio peripheral, and if you look at the list of things you can do here — PC Cards, PCI — they're good solutions, but you're only going to hit a subset of the market, and if you're going to develop for the platform, it's a good idea to hit the most people you can. So we really recommend USB and FireWire for the development of audio peripherals.

We think about audio devices along this kind of continuum, for want of a better word. At the consumer level we see built-in audio — analog is cheap to do, and you can do some really attractive things with it. USB is a good approach for hobbyists; you're a little limited in the number of channels you get, but it's a good, convenient solution. And we look at FireWire as being kind of a prosumer/pro solution.
Let's dive in and take a look at what's available built in. Basically, you've got two kinds of things you'd be looking at here: input devices and output devices, or little mixers. The analog connections — the codecs we ship in Macintosh computers — have great noise specs, and they actually measure better than a lot of USB devices out there, so this is a good way of connecting peripherals if you only need stereo. The other thing that's important to think about is optical S/PDIF, which is available on the Power Macintosh G5. You can build peripherals that do things like AC-3, if you want to do multi-channel output by encoding the stream, and you can also use it to get audio in and out of your computer digitally at a very low cost. So consider S/PDIF if you're looking at developing this type of peripheral. Summing up the built-in: it's basically stereo, and there's also optical support, so there are some good opportunities there for peripheral development.
Moving on to USB: the important thing to emphasize here is that if you develop a device that conforms to the USB Implementers Forum specifications for audio devices, it's just going to work with Apple's driver. This is really important, because you want to minimize your development costs when you're bringing your device to market — so always consider trying to make a class-compliant device, if that's possible for you. We put the audio class driver in Darwin, and it's open source, but what we've seen in the past is that developers will take a drop of the Darwin source code and start working with it; meanwhile we go and fix a bunch of bugs, improve performance, and add new features, and they're kind of stuck on this two-year-old source base. If you can develop a class-compliant device, it's going to work with our driver — and if it doesn't work with our driver, please let us know and we'll fix our driver. The other thing I wanted to call out here is the Audio Device 2.0 spec. The current spec is, I believe, the Audio Device 1.0 spec, and that's several years old now. There's a device working group working on a 2.0 spec, and we're tracking that work and studying it. If you're developing a USB 2.0 device, please talk to us, because we really want to make sure it works with our driver.
Our driver is full-featured. One of the things I want to call out is that there's actually a small API in there for doing DSP plug-ins. That may seem kind of weird, because you've probably heard a lot about Core Audio plug-ins — this is a different thing. This is for when, for example, you're making some speakers and you have a proprietary bass-enhancement algorithm for your loudspeakers, and you don't want everybody to get access to that code: you can use the plug-in API to match to your device, and only your device, so that your code will only run with your device. And that's an important thing.
Summing up on USB: basically, we see USB as a very good solution for consumer applications and low-cost applications. The thing that's problematic about USB is that customization of your device may require a custom driver, and that's a lot of work. The other thing is that the bandwidth for USB 1.1 devices is relatively limited — you're looking at basically eight channels of input or output — but it's a good solution for a limited channel count, and you can also do MIDI support.
The thing I really wanted to dive into today is FireWire. We're really excited about FireWire. If you're developing a FireWire audio device, you essentially have multiple options. You can develop a custom device, and a lot of developers have been very successful with custom devices. The problem with developing a custom device is that you've got to figure out the protocol for getting data to the device, you've got to write firmware for the device, and you've got to write a device driver for the device. Now, the people who've done this have come first to market, and they've got great products, but it is something to think about when you're considering how to implement your device. A better way to go is to develop your device according to some of the 1394 TA specs out there, and there are really two specs you want to look at: the audio subunit and the music subunit.
They kind of overlap, and it may actually be necessary for a music subunit device to also have an audio subunit, so that you can get at some of the controls, such as volume controls. The other thing we stress: if you're developing a FireWire device, join the 1394 Trade Association. It's the umbrella organization for people developing FireWire devices; they look after some of the specs, and there's a lot of good information there, particularly in terms of getting access to draft specifications.

However, we recognize that there are some challenges in developing a FireWire audio device. We looked at why it was difficult to do this, and why there weren't more FireWire devices out there, and we think it really falls into three main areas. First, there are a lot of standards: if you go to the 1394 TA standards page, you can spend 10 or 15 minutes just trying to figure out how everything fits together, and then you can spend several weeks actually downloading and reading all of those specs. Also, compared to USB, the cost of silicon has been perceived as really high; we're going to talk about that, and I'm actually going to bring up a couple of vendors we've been working with who've been looking at much lower-cost solutions. The other difficulty is the software development — you've got two areas here: the problems of developing the firmware for the device, and the problems of developing device drivers.
So let's try to clear some of this up. In terms of a roadmap for standards relating to audio devices, this slide shows the kinds of things you're going to be interested in. IEEE 1394 defines the base electrical spec and packet format on the bus, and the 61883 specs cover streaming. In essence, when you're dealing with audio devices, the stuff you're going to be sending to and receiving from the device falls into two areas: isochronous transfers and asynchronous transfers. Isochronous transfers have guaranteed bandwidth — in every 1394 bus cycle, a certain amount of time on the bus is reserved for isochronous data. Asynchronous is data that you want to get to the device and have acknowledged, but that doesn't necessarily have to go right now. It turns out you use isochronous for streaming MIDI and audio, and you generally use async for querying the capabilities of the device.
This slide — can you actually read this? — covers the specs you really care about. At the top are the audio and music subunits; you need to decide which is most appropriate for your device. Generally, audio subunit devices are simpler devices — appropriate for speakers, appropriate for simple I/O devices. Music subunit devices are usually devices where you have a number of audio streams and you also want to embed MIDI data. When you're considering developing a FireWire audio solution, there are essentially three main components. We've talked about the hardware; then there's the firmware and the device driver. The firmware is basically what's going to run on your embedded system; the device driver is what's going to run on the Mac to communicate with your device. The key point here is that if you develop firmware that's spec-compliant, you don't have to do device driver work, and that's a really big issue in terms of the cost of development of your product.
So let's look at some of the resources available for developing audio devices based on FireWire. There are a number of silicon vendors out there with products; they range from relatively expensive to relatively inexpensive, and I recommend that you do some research and have a look at a couple of vendors when you're choosing a solution. BridgeCo were pretty much the first out of the gate shipping standards-compliant silicon, and they have a solution called the DM1000, which is in a number of the devices that were announced earlier this year at NAMM. The interesting thing about BridgeCo is that they have licensable firmware which can be customized for your application, and we've been working with a number of vendors who are bringing their products to market based on the BridgeCo solution. First out of the gate was the Roland FA-101 — there's a cool illustration of it up here — a 10-in/10-out device with MIDI support. A number of other vendors have announced their support of this platform, and this platform is basically music subunit compliant. To talk about an audio subunit compliant device, I'd like to bring up James Lewis from Oxford Semiconductor to talk about their product.
Thanks, Nick. We're here today to introduce some technology for bringing a very low-cost solution to FireWire audio for multi-channel applications. You can also see this at our booth downstairs, and at the plugfest later in the week. Oxford Semiconductor has a very strong background in FireWire technology through our mass storage chips, so most of you have probably heard of us. We're a very strong adherent to the 1394 standard, and through our position in the 1394 Trade Association we're actively involved in developing new aspects of the 1394 standards. We're going to give you a bit of a technology introduction to the chip and a demo to finish up, and I'm going to hand over to Andy Parker to do that. Thanks, James.
OK, so — we're all developers. Oh, could we go back to the slides, please? Maybe not... could we have the slides, please? Thank you. So, we're all developers, and what we're really interested in is what's in the box, and the first thing we tend to inspect is the block diagram. The real thing to take home from this picture is that most of what you get on the 970 is actually contained within the device. If you want to implement an audio subunit which you can connect onto the FireWire — or 1394, as we call it — bus, you really only need a handful of components, and in this case we're talking about an external physical interface for the 1394 and, at the back end, an I²S audio interface. In terms of the data flow from the bus to the output, we basically have a very short path from the link layer, going through a queue selector — which basically filters out the isochronous and asynchronous data that Nick talked about earlier — through a FIFO, which is just a small buffer, and then out onto the audio core.
We have an interesting application example, which is basically looking at multi-channel audio decode. In this particular case we have a compressed stream arriving over FireWire; it's being transferred by the 970 through to a hardware decoder and then passed out to a multi-channel audio D-to-A converter stage, and this would typically be applied for replaying surround sound on your system. It's an interesting application because it exploits one of the more interesting features of the 970, which is that it's quite flexible in terms of the content you pass it: the firmware actually transfers the data from the isochronous side to the I²S port, so whatever that data is — provided it's compliant with the standards — you can transfer the data over and match the two formats.
We provide a developer kit, which Nick talked about before. It implements an AV/C audio subunit, and it uses the standard AV/C command set to control and monitor the audio properties — things like mute settings and volume control. It also decodes the incoming isochronous data, which again is compliant with the relevant specifications, and it implements clock recovery, which is basically just matching the rate of data coming in to the rate of data going out — because if you don't do that, you'll get strange distortions. And it works with the existing Mac OS X FireWire audio driver.
For the firmware development we use a standard open-source GCC toolchain, and you can even develop on the Mac itself — we support most popular host development systems. We can also provide a framework for customizing the firmware to match the specific codec that sits on the back end, and there's a full reference design — schematics and an evaluation board — available. In terms of the firmware itself, the important thing to note is that the whole thing comes to less than 64 kilobytes in terms of the operating size of the program, and we've crammed quite a lot in there. So we have Mac OS X in 64K — I think that may be a mistake. We have an operating system layer, which is not OS X or anything like that; it just basically provides the low-level startup code for the processor we have embedded within the 970. Then we have a standard 1394 FireWire services API and some queue selector configuration — again, all the queue selector does is filter out the isochronous and asynchronous traffic. And then we have some higher-level handlers which handle the standards-compliant data and generate the right responses to keep the Mac side happy and compliant with the AV/C specs. I think we now have time for a quick demo — could we roll the demo, please?
[Music]
So this is generating the surround sound over FireWire, being decoded on the 970 and played through the PA.
[Music]
Thank you. I believe the Oxford guys could sit in there and play Unreal Tournament all day. That's the Oxford board — their EVM board — and we'll talk about this in a second, but we'll have both of the solutions we're talking about today available for you to take a look at in the lab; we'll talk about that at the end of the session.
One of the difficulties we talked about with USB devices is customizing a device. One of the really cool things about FireWire is that it makes customizing device behavior way simpler, and the way you can do that is to send AV/C commands to the device. I'm going to talk you through a quick example — not particularly realistic; let's say we're sending a volume command. Usually you'd rely on Core Audio sending the volume command via the Apple FireWire audio device driver, but it's a good example of how you can customize the behavior of your device. There's actually a user client in the FireWire audio drivers that gives you access to various services, and you'll see in this example that these services are prefixed with FWA. You can go count the devices on the bus, open one of them, check its vendor ID, and get its device name. Now, this obviously isn't a particularly realistic example of how you would match to your device — I'd suggest you go look at the FireWire SDK for much more comprehensive examples of device matching — but it illustrates the point. The really cool thing about doing device customization this way is that you don't have to write a kernel device driver: your customization code resides in user space, so you don't have to deal with kernel panics if you screw up, and that's going to help you debug your code. To send the command to the device, you set up a command block — this example shows how to set it up for an AV/C volume command — and then, to send your command, you simply call the execute-AV/C-command routine, making sure that you check that your device actually accepted the command.
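A sketch of that flow is below. The FWA-prefixed names here are hypothetical stand-ins paraphrased from the description above, not verified declarations — consult the FireWire audio driver's user-client headers and the FireWire SDK examples for the real calls — and the AV/C frame layout is abbreviated:

```c
#include <CoreServices/CoreServices.h>

// Hypothetical wrappers mirroring the FWA-prefixed user client
// services described in the talk; not verified declarations.
extern UInt32   FWACountDevices(void);                 // hypothetical
extern OSStatus FWAOpen(UInt32 index, void **outRef);  // hypothetical
extern OSStatus FWAExecuteAVCCommand(void *ref,        // hypothetical
                    const UInt8 *cmd, UInt32 cmdLen,
                    UInt8 *response, UInt32 *responseLen);

static void SendVolumeCommand(void)
{
    if (FWACountDevices() == 0)
        return;

    void *device;
    if (FWAOpen(0, &device) != noErr)
        return;

    // AV/C CONTROL frame addressed to audio subunit 0; the opcode and
    // operand layout for a volume change come from the AV/C Audio
    // Subunit spec and are abbreviated here.
    UInt8 cmd[8] = { 0x00 /* ctype: CONTROL */,
                     0x08 /* audio subunit, ID 0 */
                     /* opcode + operands per the AV/C spec ... */ };

    UInt8 response[8];
    UInt32 responseLen = sizeof(response);
    if (FWAExecuteAVCCommand(device, cmd, sizeof(cmd),
                             response, &responseLen) == noErr) {
        // Check the AV/C response code (e.g. ACCEPTED, 0x09).
    }
}
```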
An example of using the FireWire audio user client is essentially the mLAN implementation — the mLAN support in Mac OS X. The Apple driver creates and manipulates the 61883-6 streams, but Yamaha supplies an application which does the device discovery and configures the network. While on the subject of mLAN, some of the enhancements coming in a future system update: multiple device support, so you're going to be able to use things like an 01X with Motif ESes, and also external sync, so the devices can sync to an external clock.

One of the things I really want to talk about today is the reference implementation of AV/C music and audio devices. Apple has been working on this, and we're going to release it later in the year as part of the firmware reference platform; we're going to provide an audio subunit reference and a music subunit reference. The way this all fits together: you have all the stuff that runs on the Mac, and on the device you're going to run some firmware. This is based on a real-time operating system — an RTOS — and on top of that we're layering the Apple FireWire reference platform, and the Apple reference music subunit will sit on top of that.
The other thing that's necessary when you're developing a device is the ability to update the device's firmware. There are really two sides to this: the firmware on the device needs to be able to accept an incoming ROM image, and you'll need an application that can send the firmware image down to the device. Most people who have FireWire devices expect the device to be updatable over FireWire. The other resource for when you're developing your device is the tools in the FireWire SDK — folks, don't develop a device without it. There are some really cool tools in the FireWire SDK: FireBug is a packet sniffer, and the AV/C browser will let you look at the device descriptors of your AV/C device.
I'd like to bring up Yoram Solomon, who is the general manager of the Consumer Electronics Connectivity business unit at Texas Instruments, to talk about some exciting things we've been working on with TI.

Thanks — that's only half of my title; if we used the entire title, I would run out of my five minutes. Nick, thank you for having me here in California. Anybody else from Texas here today? Yeah — we just left a whole month, minus five days, worth of rain and tornadoes and everything. Believe me, you're not missing anything. Anyway, I'm going to spend the next 20 minutes — I've got three minutes — talking about what it is that we're offering. We've been working with Apple for the past several months — more than just several months — developing this platform that you can take off the shelf; we make it available right now, and it works with the SDK. It has been quite an experience. It's kind of fun taking the devices we typically put in just those standard, boring types of end products and finally putting them in a kind of fun device. Am I going backwards? Yeah, I'm going backwards — I'll stay on my title again.
I'll stay on my title again okay I
already told you about the cooperation
with album Texas Instruments has been
focused the 1394 development to some
extent in audio products where they have
a lot of sensitivity to timing and all
kinds of specifications you know one
thing three things I can promise you
this presentation I'm not going to get
too technical that's 12 is I don't have
a demo and the third thing is we don't
have mac OS in 64k either our USB
devices we have USB devices especially
for audio applications you can see them
whether it's audio DAC s ABC's
controllers codex therefore as Nick said
kind of the mid-range lower end cost
sensitive type solutions firewire is
really where we shine Nick focuses focus
on that we focus on that we have a
device that you're going to see in the
next slide called I see links or as we
like calling it tsb for 3 c.b for 3a and
that's not including the package that's
really a relatively high quality device
that you should see it's reasonably
priced the boards ethnic talked about is
the board that's going to be
demonstrated in the lab tomorrow the
device that I'm going to emphasize now
this is the IC links right top corner is
where it's really at this device is
one-stop shop for everything 3094 to a
relatively high quality video audio
platform that's all I have thank you
Thanks, Yoram. We're absolutely delighted to be working with TI on this. It basically gives you a low-cost development platform for adding high-speed serial into an existing device, and we see this as applicable to things like synthesizers, digital musical instruments, digital effects units — who knows, maybe even a microphone. The key thing here is that by basing your audio device on Apple's music subunit reference firmware, you're going to really reduce your costs, because there's quite a lot of effort in just doing the FireWire firmware. By adopting this solution, you're going to be able to work on the parts of your product that differentiate it in the marketplace, rather than doing infrastructure work, and we think this is very important. The other thing is you'll be able to work without a device driver, and that's going to save you development time on the driver side.
Summarizing FireWire audio: it's a big pipe — there's more bandwidth than USB — and there are a number of other advantages; some of them are listed here. FireWire is a good solution where you need a lot of bandwidth and a lot of channels; it's got MIDI support in there, and it's extensible using AV/C vendor-specific commands. So if you're considering producing a new device, or adding high-speed serial to an existing device, we really encourage you to look at developing with FireWire. We've got resources available from multiple vendors that are far lower cost than the products that are currently shipping, and we're going to save you money in terms of development time on your driver. That kind of covers our continuum of audio devices.
Just to summarize: there are a ton of opportunities out there. You can look at low-cost GarageBand peripherals, you can look at very high-end FireWire solutions for the music and audio production environment, and everything in between. Analog solutions are great for built-in audio on Macintosh computers and for hobbyists, and for prosumer solutions we recommend that you look at high-speed serial. The key thing here — and I really urge you on this when you're considering developing a new device — is to look at standards-based devices: you'll be saving yourself driver work, and your customers will be way happier.
In terms of who to contact about this stuff: I definitely recommend, if you're a hardware developer, that you build a relationship with Craig Keithley. He's the I/O technologies evangelist, and his email address is here. There's also a great mailing list with a lot of traffic — a lot of very cool people, both inside and outside Apple, answer questions on the coreaudio mailing list — and there are mailing lists available for FireWire and USB developers.
We have a fairly considerable reference library: there's a good bulk of information about Core Audio and developing device drivers on Apple's website. A good jumping-off point is developer.apple.com/audio, and there's a reference to most of these resources there. There's the Core Audio SDK, which you should definitely be looking at from an application development point of view, and there are sample drivers in the Core Audio SDK; the audio web page is there too. If you're developing a FireWire device, you should also look at the Apple FireWire reference platform, because it can save you a lot of time and effort in firmware development. The other thing — the thing at the bottom is the important thing on this slide — is that there are a couple of trade groups, the USB Implementers Forum and the 1394 TA; check those out if you're developing either a USB or a FireWire device. And then finally, tomorrow at noon, the audio driver team will be available in the graphics and media lab. We'll be showing the TI board and will, I think, have Oxford's board represented there, so you can look at their boards, and hopefully we'll be able to answer any questions you have about developing firmware or device drivers for your devices. So, thanks a lot.