WWDC2004 Session 621
Transcript
Kind: captions
Language: en
Good afternoon. My name is Matt; I'm one of the alliance managers in our Worldwide Developer Relations group, and I'm here today to introduce Sybase and some of their great partners and customers, and to talk a little about session 621: Sybase ASE and Replication Server, a product released this April that we're very excited about for the Mac community. For those of you who know, or don't know: Sybase has actually been on the platform for a couple of years now and has had a very successful run with Mac OS X, and specifically with the Xserve. They have some really great named customers and partners on the platform, so we're very excited to have them up here with us today. As I said, Steve Olson, director of engineering at Sybase, is going to come up and speak to you for a little bit, and then introduce some of his great partners and a customer (Apple, actually, as one of our proof points for how great the technology is and how excited we are to be working with Sybase). So with that, I'd like to introduce Steve. Thanks.

Okay, thanks, Matt, and thank
you all for coming today; I appreciate your time. I realize I'm standing between you and your evening plans, so hopefully this will be worthwhile. I also want to thank Matt and our friends at Apple for giving us this opportunity to talk about what has become our favorite subject lately, which is Sybase
on OS X. So what we're going to talk about is Adaptive Server Enterprise, our enterprise-class database that we've had since the early days of Sybase, and Replication Server 12.6. I'll spend a little time on what replication is and then talk about the product itself. We'll talk a bit about the optimization we've done for the G5 and some of the performance results; we'll also talk about compatibility with Microsoft SQL Server and the effort it might take you to move from SQL Server to ASE; and toward the end I'll introduce various partners and customers that have had successful deployments of applications using our server on OS X. So again: ASE and the G5, migration, and customer
successes.

Who is Sybase? Sybase was founded in 1984, and I think most of the folks at Apple, and friends of Apple, would agree that 1984 was a pretty good year; who can forget the Super Bowl commercial of 1984? Well, it was also the year Sybase was born, and later in that decade we made our first release, which was essentially a database designed specifically for online transaction processing. The founders of the company came from a company called Britton Lee. Britton Lee had a product called a database machine: specialized hardware and specialized relational software all together in a single package. It proved to be only slightly successful, because it was very, very expensive, so the founders of Sybase left Britton Lee and decided to build a software version of a database machine, and that became the Sybase SQL Server; its first release was later in the decade. Since then the product has been adopted very successfully by a number of financial institutions, primarily on Wall Street. The New York Stock Exchange, for example, runs its transactions and stock trades on the Sybase Adaptive Server Enterprise. We also have a significant presence in a number of other market segments, including telecommunications, healthcare, and so forth. Now, most of our customers are using very large hardware configurations, 16 CPUs, 32 CPUs, and these machines are designed to handle millions of transactions a
day. So our code has been evolving over the years and has been optimized specifically to handle that kind of load. That same software, that same optimization, that same evolution of the product, is now available on OS X. We didn't take anything out; it's the same enterprise, industrial-strength software that our largest customers are using, and it's now available on OS X. Our software is also localized for just about every language and every country in the world. Available today is the latest release of ASE, called 12.5.2, and Replication Server 12.6, which was just released in late April of this year. We also have a number of client interfaces available for the Mac: JDBC, ODBC, and our own proprietary API, called Open Client, for you to build C or C++ applications using our native APIs. We also have a mobile and embedded database that has been very successful in its market niche, called Adaptive Server Anywhere, or SQL Anywhere Studio; version 9.0 is the latest release, and that's available on OS X as well. We love the Mac OS X operating system, and we are committed to continuing and enhancing our support on this platform over time.

Focusing on ASE:
again, I mentioned that Sybase was founded in 1984. Our first public release was in 1987, version 2.1. It was licensed to Microsoft about a year later, and we did a port to OS/2; we then established a business relationship with Microsoft that lasted approximately ten years. So we have a lot in common. Even today, after all of this time, there's still a lot of commonality between Microsoft's SQL Server and ASE. We basically implement a standard SQL language, SQL-92, with a number of extensions specific to the OLTP environment. Architecturally, we rely heavily on the UNIX services provided by any given platform; we're also available on the Windows platform, but we're highly optimized for UNIX. Back in 1984, when the company was founded, the first platforms we considered moving our software to were VAX/VMS and Sun's SunOS, and if you recall, back in those days SunOS was essentially the first commercial implementation of Berkeley UNIX. So BSD UNIX is very familiar to us, and we're very happy to see it again in a commercial product; it's an old friend in a new coat, and we rely heavily on it.
Essentially, ASE is a UNIX process we call an engine. We build our own threads, we manage our own threads, and each engine manages some number of threads, its own scheduling, and so forth. Engine 0 reads a config block, determines how to allocate memory, and then initializes a shared memory region that is used in case there are additional engines. If you have two CPUs, for example, you can configure a second engine; if you have 10 CPUs, you could configure 10 engines. So engine 0 creates the shared memory region, and all subsequent engines attach to it. The shared memory region contains our data cache; our procedure cache, for stored procedures and for compiling ad hoc queries; and all of the system metadata: information about objects, scheduler queues, thread-specific data, and so forth. All of that is resident within shared memory. So we're very, very happy this week to learn about the 64-bit capabilities of Tiger, because with a 32-bit application you are limited to about a 4-gigabyte address space. With 64 bits, what it means to us is that the shared memory region can be essentially as large as you can configure memory on a machine. Eight gigabytes? No problem. Recently we did a benchmark with a major hardware manufacturer involving 256 gigabytes of memory, so if Apple provided us with a machine that could contain 256 gigabytes of memory, we would be able to use it. The advantage there is that the more data we can put in cache, the less I/O we have to do, and therefore, as a general rule, the better the throughput is going to be.
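The engine-0-creates, engines-attach model described above can be sketched with shared memory primitives. This is an illustrative toy, not Sybase code: the function names, region name, and tiny size are mine, and Python's `multiprocessing.shared_memory` stands in for the UNIX shared memory APIs a real engine would use.

```python
from multiprocessing import shared_memory

REGION_NAME = "ase_demo_region"   # hypothetical name, for illustration only
REGION_SIZE = 4096                # real ASE regions are gigabytes, not 4 KB

def engine0_create(name=REGION_NAME, size=REGION_SIZE):
    """Engine 0: read config, create the shared region, initialize it."""
    shm = shared_memory.SharedMemory(create=True, name=name, size=size)
    shm.buf[:4] = b"META"         # stand-in for system metadata and caches
    return shm

def engine_attach(name=REGION_NAME):
    """Engines 1..N: attach to the region engine 0 already created."""
    return shared_memory.SharedMemory(name=name)

if __name__ == "__main__":
    e0 = engine0_create()
    e1 = engine_attach()
    print(bytes(e1.buf[:4]))      # both engines see the same memory
    e1.close()
    e0.close()
    e0.unlink()                   # engine 0 owns the region's lifetime
```

The design point the sketch captures is that only engine 0 creates and initializes; every later engine attaches by name and sees the same caches and metadata.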
So engine 0 listens for connections, determines the load on all of the engines, and dispatches each connection to the engine with the fewest connections already associated with it, so we do some load balancing there. Each engine then communicates with its clients through Berkeley sockets, and each engine is capable of handling up to about 3,000 connections, for a total, on an Xserve today, of about 6,000 client connections. Okay.
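The least-connections dispatch that engine 0 performs can be sketched as follows. This is a toy model; the function name and the dict-of-counts shape are illustrative, not Sybase internals.

```python
def dispatch(engine_connections):
    """Pick the engine with the fewest active connections.

    engine_connections: dict mapping engine id -> current connection
    count (a toy model of the load balancing described above).
    """
    return min(engine_connections, key=engine_connections.get)

if __name__ == "__main__":
    load = {0: 1200, 1: 950, 2: 1430}
    target = dispatch(load)   # engine 1: it has the fewest connections
    load[target] += 1         # the new connection is now counted there
    print(target, load)
```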
We've also added some extensions to this port to take advantage of some unique facilities offered by Mac OS X, including Rendezvous. When our server comes up, we register with Rendezvous, so that any Rendezvous-enabled client can detect the presence of the server on the network. It's a very simple interface, but a very powerful capability is enabled through it: during initialization, our server comes up, figures out its port number and so on, and then registers with Rendezvous and makes its presence known to the network. We also provide a Cocoa-based application, which is essentially a discovery tool that lists all of the servers in your subnet, all of them discovered through Rendezvous. It's a Rendezvous client: it listens for services of a particular type, and it also has a login panel. As you scroll through this list of services, or servers, the information needed for our client APIs to connect to a server shows up in the login panel: the host name and the port number. You provide a login name and a password, and then we pop up an interactive SQL query window so you can issue queries. The results are displayed in the middle panel, and errors, messages, and so forth show up in the lower panel. So this is an interactive SQL application that requires no configuration whatsoever on your part; it's all enabled through Rendezvous, which is a very powerful capability.

Open Directory authentication.
ASE has recently implemented a feature we call directory services authentication. In past incarnations of our server, all of the user information was stored in our system catalogs: the login name, the encrypted password, and so forth. So you had to manage passwords inside ASE, you had to manage passwords in your operating system, and perhaps elsewhere. What Open Directory authentication does is allow you to authenticate a login using the passwords that might be in an LDAP server, Active Directory, maybe Kerberos, or even Network Information Services (the Yellow Pages), depending on where you prefer to store user information. So we can authenticate based on information that's common to all of your services and all of your platforms. This is a very popular feature. On other platforms we don't use Open Directory; we use PAM, pluggable authentication modules, and on Windows we use Active Directory.
With the Panther release of OS X, a very interesting feature was added to OS X Server called Server Admin. Server Admin allows you to specify a number of hosts that are available on your network and lets you manage, evaluate, and get the status of all of the services on all of those hosts. In this example we have four servers configured inside the Server Admin tool, and one of them has a pulldown showing all of the services available on that machine. What we have done is provide a plug-in, on both the client and the server side, for Adaptive Server Enterprise, to allow you to manage ASE through the Server Admin tool. In this shot, what comes up is simply an overview: it tells you the status of ASE and of the backup server running on this particular host. We also provide a graphical interface that shows you some of the operational characteristics. In this example we have network I/O, the amount of network I/O in terms of reads and writes; we also provide a response-time graphic, a graphic for disk I/O, and graphics for the procedure and data cache hit ratios. In other words, one measure of how well your server is configured or optimized is whether or not your data cache is sized right for the load. If most of your data access consists of logical reads and logical writes, that means you're not doing physical I/O to disk, so the ratios should be fairly high, and we show a graphic giving you the ratio of logical to physical reads in both the data cache and the procedure cache.
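The hit ratio those graphs display can be sketched as a simple calculation. This is illustrative only; ASE's actual monitoring counters are richer than two numbers.

```python
def cache_hit_ratio(logical_reads, physical_reads):
    """Fraction of reads served from cache rather than disk.

    logical_reads: all page accesses; physical_reads: the subset
    that had to go to the disk (a toy version of the Server Admin
    graphic described above).
    """
    if logical_reads == 0:
        return 1.0                # no traffic: nothing missed the cache
    return (logical_reads - physical_reads) / logical_reads

if __name__ == "__main__":
    # A well-sized cache: only 2% of reads touch the disk.
    print(f"{cache_hit_ratio(100_000, 2_000):.0%}")   # 98%
```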
So it's a very powerful tool; it gives you the ability to get a status report at a glance indicating the general health of your server. We also provide some tabular reports that give you an idea of the operational characteristics: number of users, number of engines, and so forth. There's a lot of information; you can get a quick read of the health of your server through this tool. For development purposes, we also provide a System Preferences panel: after you've installed our server, you can go to System Preferences and pull up a Sybase ASE preference panel, and that will tell you what's going on with the server, whether it's up or not. It's a Rendezvous client: a green dot indicates that a server is up, red means there's a problem, and no color means the server is down. You can look at the error log for the server, and you can also configure it, or revise the configuration of the server. So, what's new in 12.5.2? We've
done some optimizations specific to the G5 processor, and I'll talk more about that in a bit. We also provide a sample application, an Xcode project, that illustrates the use of Rendezvous, Cocoa, and Open Client. It's based on the server discovery and query tool I showed you earlier, and it illustrates how all of these things, Cocoa, Rendezvous, and Open Client, can be used together in an application. We also provide our Open Client APIs in a Mac OS X framework, so that you can embed the framework in an application bundle, which makes it very easy for you to deploy applications to various clients.
Some performance characteristics. What we've done with ASE is run a number of benchmarks based on the Transaction Processing Council's C benchmark. The benchmark requires a particular schema, certain tables, a certain layout of data, and so forth. We have a program that loads up some number of what are called warehouses, and then we generate a load: we have a driver program that simulates an actual TPC benchmark. It's not a formal benchmark; it's a tool we use internally to determine whether or not a benchmark would be interesting to do for a given platform. So we did that for the G4 running on Jaguar, and there were some issues with that, illustrated by the bottom number, the six thousand transactions per minute, which is not a particularly good number on a G4 or any platform, and I'll go into the reasons why. After we got the G5, we reran the tests with Panther, and we got a two-and-a-half-fold improvement out of the box, without having to do anything. Then we did a few optimizations, some low-hanging fruit, and we nearly doubled the throughput again, to over 27,000 transactions per minute, which is getting close to where we want to be. But with two CPUs we got 40,000 transactions, which suggested to me that there were some scaling issues there. What the out-of-the-box gain tells us is that the asynchronous disk I/O that was made available in Panther is very, very effective. We utilize it now in ASE; it's very important to our database code, we use it very effectively, and the implementation is done very optimally.
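As a rough check of that scaling remark, using only the numbers quoted above (about 27,000 transactions per minute on one CPU and 40,000 on two), the two-CPU scaling efficiency can be worked out like this:

```python
def scaling_efficiency(tpm_1cpu, tpm_2cpu):
    """How close a two-CPU result comes to perfect 2x scaling."""
    return tpm_2cpu / (2 * tpm_1cpu)

if __name__ == "__main__":
    # Numbers quoted in the talk: ~27,000 tpm on one CPU, ~40,000 on two.
    eff = scaling_efficiency(27_000, 40_000)
    print(f"{eff:.0%}")   # about 74%: well short of linear, hence "scaling issues"
```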
So the tests those numbers came from were run on a Power Mac with 2-gigahertz G5 processors and eight gigabytes of memory (we could only use about four, but it had eight), and we used an Xserve RAID with seven disks, striping across the RAID volumes we configured. We found that async disk I/O was huge for us. We also had to do some reimplementation of our spin locks: when you have two CPUs, two processes, accessing the same shared memory, there are critical sections of code that need to be gated so that you don't trip over each other, and our spin lock implementation needed to be redone for the G5. That was done, and we got a boost out of it.
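A spin lock of the kind described, one that busy-waits instead of sleeping in the kernel, can be sketched conceptually. This is a toy: Python cannot express the G5's actual atomic instructions, so a non-blocking try-acquire stands in for the hardware test-and-set, and the class name is mine, not Sybase's.

```python
import threading

class SpinLock:
    """Conceptual sketch of a spin lock: busy-wait on a non-blocking
    try-acquire, the kind of gate a database engine puts around very
    short critical sections in shared memory."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass  # spin: cheap when the hold time is a few instructions

    def release(self):
        self._flag.release()

if __name__ == "__main__":
    lock, total = SpinLock(), [0]

    def worker():
        for _ in range(10_000):
            lock.acquire()
            total[0] += 1       # the gated critical section
            lock.release()

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(total[0])             # 20000: no lost updates
```

The trade-off a spin lock makes is exactly the one the talk implies: spinning burns CPU but avoids a kernel sleep/wake round trip, which wins when critical sections are tiny.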
We also had some structure alignment considerations to account for: the G5 basically requires, or is optimal with, 128-byte structure alignment, so we had to revise that.
We also found out, through the use of Shark, which is an outstanding tool for getting a quick profile of what's going on with your application, that some of the system functions in the C runtime library were holding us back, so we reimplemented some of those on our own to get past those bottlenecks. We're not where we want to be yet; there are a few things we can do with the compiler, including feedback optimization. Feedback optimization simply means that you compile with certain command-line options, run the resulting application under a representative load, and the feedback-optimized server collects information about the use of routines and some statistics about the running of the server. You can then relink, feeding that statistics file into your linking process, to get a more optimal layout of your executable. That's something we've done very successfully, and it usually gives us a fifteen to twenty percent boost in throughput on our TPC-C tests.
We have looked at the PowerPC compiler from IBM; it's very interesting and has some additional optimization features over GCC, but GCC now has the ability to give us 64-bit processes, so we're not so sure where we're going to go with the PowerPC compiler from IBM. In any case, based on the horsepower and the throughput we've seen from the G5, our expectation is that we should be able to get about thirty thousand transactions per minute per CPU from the G5, for a total of at least sixty thousand transactions per minute on a dual system. That's not too bad; that's about what we've seen on other platforms. But the interesting attribute of this test was what we learned: we are completely, one hundred percent, CPU-bound, meaning that we're not waiting on I/O from the network or I/O from the disk system. We're completely CPU-bound, which means that if Apple, for example, had a three-gigahertz CPU, we should see a proportional increase, a fifty percent increase, in throughput over the testing we've done. The Xserve RAID units are more than capable of handling the kinds of load we've been throwing at them; in fact, it would take quite a bit more horsepower before we would start to see the I/O system become a limit. Okay. So one
of the attributes we have found to be important to our customers is SQL Server compatibility. We're finding, increasingly, that customers are somewhat unhappy with Microsoft for a variety of reasons; there's an interest in UNIX-based servers as opposed to Windows-based servers, and there's an interest in moving from those servers to ASE, and because of our common heritage, that's relatively easy. As I mentioned, the server from Microsoft was licensed from Sybase, and we still have a lot in common, even though in about 1997 or 1998 the two companies parted ways. So there's a high degree of compatibility. There are, however, some incompatibilities, because we have evolved our product in a certain direction and Microsoft has evolved its product in another, but a high degree of compatibility remains between the two. So what we've done is adopt a strategy to minimize, or reduce, the incompatibilities over time; it's not going to happen all at once. For example, with 12.5.0.3 we introduced a number of new built-in functions and some global variables to enhance the compatibility of our server with Microsoft's. With 12.5.1 we added some additional syntax to our language specifically for compatibility with SQL Server: derived tables, and bracketed identifiers, which are sort of a unique SQL Server construct. Bracketed identifiers allow you to create an identifier, for example a table name or a column name, that contains spaces or is a reserved SQL keyword; it's very easy to use such a name as a column or table name simply by surrounding it with brackets. So we've added a number of constructs within the language for this purpose. Next year we'll have a new release, version 15, where we land some additional, more significant items. For example, large identifiers: identifier lengths today are 30 bytes, while with SQL Server it's 128 characters or so, and we're adding support for identifiers of up to 255 bytes. Also scrollable cursors (forward, backward, relative, absolute, and so forth); additional data types that are found in SQL Server today but not in ASE; computed columns, that is, columns whose value is determined by the values in other columns; and SELECT TOP. So, one of
the examples I'd like to talk about briefly is our partnership with SAP. Business One moved their application to ASE, and I'd like to review fairly quickly some of the issues they encountered in the process. They ran into some data type issues, in particular ntext; ntext is a Unicode text data type, and there's no direct equivalent in Sybase, but we can configure around that: they ended up using text as the data type with a default character set of UTF-8, which gave them equivalent capabilities. SELECT TOP was used extensively by Business One, and we did not support that syntax, so it was painful for them to work around that issue. They used Active Directory for user authentication, and that wasn't available, at least at the time; it is now. ODBC driver performance: we have a partnership with a third party to provide ODBC drivers, and we had some performance issues, so as a result we decided to roll our own ODBC driver, and that's what's happening now for Windows, Linux, and OS X; that driver is available now in beta on Mac OS X, if you're interested. They also ran into some query limits and server limits that are just different on ASE, and we had to address those, as well as some administrative tasks that differ between the two servers. In fact, most of the issues had to do with administrative tasks: the syntax is different, the behavior is different, and so on. So those are some of the issues Business One had to address. The interesting thing, though, is that we now have a partner team in engineering focused specifically on addressing issues related to porting an application, moving it from another database manager to ASE, and if you have an application you're interested in porting, we'd be happy to help you out, as we have done with SAP and Business One.
Tools, of course, make it easy: we can migrate schema and data from just about any server to ASE, and later you'll hear from Bob about tools that can help make this happen as well. Once you've moved the server, the data, and the schema, you also have to concern yourself with the client. We provide ODBC drivers; OLE DB on Windows; ADO.NET on Windows; JDBC on any platform, a pure Java type 4 driver; and if your application is in Visual Basic, you can use REALbasic, for which we provide a plug-in that works with ASE. And of course DB-Library: along with the server, Microsoft also inherited DB-Library, and again they have evolved their API in a slightly different direction, so there's a lot of commonality, but there are some differences, especially in terms of data types. So our goal is to make it as easy as possible for you to migrate from SQL Server to ASE, and we're doing that by enhancing our server to be as compatible as possible. Now, it's a moving target; expect that the Yukon release of SQL Server will have additional syntax, but we're keeping track of that, and as issues come up that are raised by you and our partners, we expect to address them within our server. I'd also like to mention some other tools and solutions available today using ASE on the Mac. The WebObjects product has an adapter, designed by Apple, that supports ASE; it's on the CD when you buy WebObjects. REALbasic I have already mentioned. Another one I'd like to point out is an outstanding product from Runtime Labs: if you want to build a Cocoa application with a very slick user interface, using advanced objects for database session and result set management and so forth, take a look at it; it's all written in C, C++, and Objective-C, and it's a dynamite tool and framework. You'll also be hearing more later in this session from a partner with a very nice tool for Java implementations; Bob will be talking about that later. PowerEasy is an interesting solution as well: they have an ERP application for small to medium-sized businesses; it's an outstanding package, and it's a complete solution
and so forth.

Replication Server: I'd like to talk briefly about this. What do we mean by replication? Replication is essentially copying database changes from one server to another, and when we say copying database changes, we don't mean just any changes; we mean changes on a transaction basis, so that all of the changes associated with one committed transaction can be propagated atomically to another server. If the transaction is rolled back in the middle, changes that might have occurred are not propagated. It's a store-and-forward model: we have a replication server which captures changes from the transaction log and puts them in a queue, and when a transaction is committed, those changes are propagated to subscribers. So it's a publish-and-subscribe paradigm.
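The store-and-forward, commit-gated model just described can be sketched as a toy publish-and-subscribe queue. The class and method names here are illustrative, not Replication Server APIs: the point is only that nothing reaches a subscriber until commit, and a rollback discards the queued changes.

```python
class ReplicationQueue:
    """Toy store-and-forward model: changes are queued per transaction
    and propagated to subscribers only on commit."""
    def __init__(self):
        self.pending = {}        # txn id -> list of changes not yet committed
        self.subscribers = []    # callables that receive committed changes

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def log_change(self, txn, change):
        self.pending.setdefault(txn, []).append(change)

    def rollback(self, txn):
        self.pending.pop(txn, None)   # nothing is ever propagated

    def commit(self, txn):
        changes = self.pending.pop(txn, [])
        for cb in self.subscribers:   # atomic unit: the whole transaction
            cb(changes)

if __name__ == "__main__":
    received = []
    q = ReplicationQueue()
    q.subscribe(received.append)
    q.log_change("t1", "UPDATE accounts ...")
    q.rollback("t1")                  # rolled back: subscribers see nothing
    q.log_change("t2", "INSERT trades ...")
    q.commit("t2")                    # committed: propagated as one unit
    print(received)                   # [['INSERT trades ...']]
```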
Subscribers might be other installations of ASE, or they might be other databases, like Oracle, DB2, or Microsoft. The models our customers have used tend to fall into four categories. There's a model of data distribution, where you can distribute data from one site to any number of other sites; consolidation, meaning any number of sites can replicate their changes to one central site; synchronization, which is essentially bi-directional replication; and then disaster recovery, or what we might call a warm standby, where you can replicate all of the changes in a database to a warm standby server, so that if your site is suddenly taken out, you have a backup. We'll talk more about that. The distribution model is relatively straightforward conceptually: you have changes going on at a central site, in this example San Francisco, that need to be propagated to any of a number of other sites, including New York, Dallas, or another San Francisco installation. That's the data distribution model, and it's publish-subscribe: all three of those servers have to subscribe to the changes, and the ASE has to publish them, so there is some setup, and we have tools that make it relatively easy to do. Data consolidation is very similar, except you're just going the other way: changes from any remote site can be propagated and consolidated into a single central site. Synchronization is basically bi-directional replication, nothing more complicated than that; if there are conflicts, in other words keys are duplicated in either direction, they're placed in a Replication Server queue and you have to resolve them manually. And then there's the warm standby, or disaster recovery, scenario. You have some number of clients interacting and transacting with a primary server, and changes are replicated en masse, in other words the entire database is replicated, to a standby server; the changes are placed in a queue and forwarded as and when possible. Now, the primary may crash, or you may have a power outage; something may happen to your primary server. Our client APIs allow automatic client failover to an alternate server, and our Replication Server can be notified to reverse the direction of replication: when this happens, the queues from the primary to the standby are drained, and then all of the changes on the standby are propagated back to the Replication Server, which places them in a queue until the primary becomes available. A lot of our customers use this. For example, we had a number of customers in the World Trade Center, and on 9/11 the entire site was taken out; fortunately, some of them had backup sites in Chicago or London or elsewhere. There are no geographic limitations on where you can put your standby; it's entirely up to your network and how much bandwidth you want to pay for in a wide area network. So they were able to recover and get back online very, very quickly.
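The warm standby flow, clients failing over while replication reverses direction, can be modeled as a toy sketch. The names are mine and the model is deliberately simplified: the real Replication Server drains the in-flight queues into the standby before the direction flips, which this sketch only hints at with a comment.

```python
class WarmStandbyPair:
    """Toy model of the warm standby scenario described above."""
    def __init__(self, primary, standby):
        self.active, self.passive = primary, standby
        self.queue = []            # changes awaiting the passive side

    def replicate(self, change):
        self.queue.append(change)  # store and forward toward the passive side

    def failover(self):
        """Clients reconnect to the standby; replication direction reverses."""
        # In the real product the queued changes are drained into the
        # standby before the switch; here we just model the end state.
        self.queue.clear()
        self.active, self.passive = self.passive, self.active

if __name__ == "__main__":
    pair = WarmStandbyPair("primary", "standby")
    pair.replicate("txn 1")
    pair.failover()
    print(pair.active)             # the standby is now the active server
```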
Architecturally, the model is fairly simple. A server that wishes to publish changes has to have a replication agent; within ASE, that agent is simply a background thread, one for each database, that reads the transaction log and forwards changes to the Replication Server. The Replication Server knows who the subscribers are and propagates those changes to them. We provide agents, of course, for our own Sybase servers, but we also provide agents for Oracle, DB2, Microsoft, Informix, and the AS/400. These are log-based agents: they read the transaction log and forward the data to the Replication Server with entries that look like they came from ASE, and it propagates to any subscriber. The subscriber need not be a Sybase server; it can be Oracle, it can be DB2 or Microsoft, and so forth. As one example of the topologies our customers have employed: some of our largest customers, for example Goldman Sachs, which is one of our largest users of Replication Server, have sites all over the world, and they have consolidation, distribution, and synchronization requirements as well as warm standby requirements. So they have some fairly complex topologies, and all of them are supported very nicely by Replication Server. With version 12.6, which was announced in April and is available now on OS X 10.3, we also support multi-site availability, so you can essentially establish channels: one Replication Server might propagate to other subscribers that are ASEs, but you might also forward to another Replication Server. In other words, a Replication Server can itself be a subscriber, and it can forward changes on to other ASEs, so you can fan out the changes and obtain near-infinite scalability in this fashion. We've also simplified the interface: it's a graphical tool built in Java, so it's also available and running well on the Mac, and the installation uses InstallShield. So it's available today, and it's designed primarily for managing downtime and for distributing data. I would say more than half of our customers use Replication Server for managing downtime, either planned or unplanned; many use it for distributing data around a network, but the majority now are using it for managing downtime by providing standby servers. Okay. And now
I'd like to introduce Eric leister who's
going to talk about an implementation at
the Apple manufacturing facilities and
he has an interesting application using
web objects so what I'm going to start
with first is sort of what our business challenge was, what we were trying to accomplish. Basically, Apple needed an enterprise system that would allow us to report and consolidate product performance information, basically yield information from the factories, and be able to provide that data and the status of how things are running from multiple sites at the same time. Before, that was being done with emails and with spreadsheets and things like that, so you can imagine it took a long time to consolidate that information together. It also needed to provide data that was accurate, real-time, and actionable: again, being able to look at the data and do something with it rather than just consolidate data. So we were trying to put all this stuff together, provide real-time data for analysis during our engineering builds, which would be for new products, you know, to get data quicker; rather than having to wait a day to get data, it would be there immediately. Again, coming back to high availability of the data, and then also more effectively manage the new builds so that, you know, we bring products to market quicker. The capability also needed to be there to integrate with our test environments, so we actually capture detailed test information about the units that are being built so that we can more quickly root-cause issues. So we had a set of database requirements that we wanted to have in the database. So one was
support of a mission-critical system: be able to handle high transaction rates. We have lots of volume going through the factory, so we need to make sure that it can handle those volumes; high performance; and be very scalable, so as the factory creates more product we're able to handle that volume. Also be able to have a warm standby, so if something goes down, a hardware issue, a drive, we'll be able to cut over to a backup system; also have a WebObjects adapter and run on OS X. Application requirements: we wanted something that was database independent, so basically whatever development we did could run with basically any type of database; provide rapid development tools. Basically, we're being asked to customize reports, add new fields, things like that all the time, so we need to make sure that we can go in and quickly do that. Easily maintainable deployment: basically being able to deploy the applications quickly; the fact that we could put them on the web means we can update them quickly as well. They need to be scalable: if we need to add more instances or more servers, we need to be able to deploy those quickly, and also run on OS X Server. So from an architecture standpoint, we
basically put all this on one server for our deployment to the remote sites. So we start with Mac OS X Server, we add Sybase ASE, then we put the WebObjects layer on top of that. Our own data frameworks, or data collection frameworks, go there; that's where all our business logic and reusable components are at. Then we have daemons, and reporting, admin, and shop floor client applications for collecting data directly on the lines. Again, all this is running on one server. Our server hardware configuration consists of: we're running dual-processor 1.33 GHz G4s with 2 gigs of RAM, OS X Server 10.3.3, Sybase ASE 12.5.1, WebObjects 5.2.3, and Apache, with just the standard WebObjects deployment through Java and Java Monitor. From an application
design standpoint, we actually have four core frameworks that have all of our business logic, EO models, and reusable components, and that all of our applications use. And those applications, we have about 29 of them, again broken up into the different categories: daemons, reporting, admin, and data collection. Then we have the daemons for communications with the test environment to collect that data we talked about earlier, data monitoring, feed file processing (being able to read in feeds from the business systems or create feeds to send back to a business system), and then also being able to export and import data. So, our database schema: we have about ninety-five tables. One table has slightly over 9 million records and is continually growing every day; we have five tables with 1 to 5 million records each, and 10 tables with over a hundred thousand records each; all the other tables are less than a hundred thousand records. We also use warm standby, basically Replication Server with warm standby, for primary and secondary databases, and we use table replication to the remote sites, which we'll go through next. So basically we
have three ways we migrate data between the databases. We have warm standby that replicates between the primary and the secondary database; we happen to also use the secondary database as a reporting database. That's one of the main reasons we've broken out the applications between reporting clients and admin apps: so that we can have the reporting apps actually target our backup database, so the big selects, for like a year's worth of data, do not affect inserts and updates into the other database, the primary. We also have table replication for about 31 tables that pump things like usernames, passwords, and other information that needs to be shared across the sites; those are all going through table replication. And then we have an export/import routine where we basically use the EO model and dump the data into flat files, and those get FTPed back and forth between the sites just to keep the data warehouses in sync. So we decided to do
some performance testing, when we were trying to evaluate databases, to see what kind of performance we could get. So what we did was we started with a dual G4 1 GHz with a gig of RAM; we had a RAID box with RAID 5, and it was OS X Server 10.3.3 and 12.5.1 of Sybase. And what we did was we simulated 10 lines in a factory running as fast as they can. To sort of put that in perspective: the way we measure the performance of a line is by cycle time, so if you have a 30-second cycle time on a line, that basically gives you two units, or two widgets, off of a line per minute. So with 10 lines you'd have 20 units a minute coming off all 10 lines. And what we were able to do in our simulation was generate 1,800 unique serial numbers per minute, which is many times more than what the ten lines would normally do at a 30-second cycle time. So if we take that transaction load, and we use sp_sysmon, which is a stored procedure that gives you sort of transactional information, we found that we're getting about 466 transactions per second, or 28 thousand transactions per minute, or 1.68 million transactions per hour. And of those, the percentages: seventy percent of those are inserts, so there's actually a lot of I/O going on there; fifteen percent updates; eight percent selects; and seven percent deletes. After we did some more research, we found that that eight percent selects is actually much lower than the number of selects truly going on there; the eight percent selects are more like updates that basically end up being selects and not actually updating records in the database. So we figure that it's probably closer to two million transactions per hour if we counted those selects, if we were able to get that in there. And then also we had the two engines going, so we were about 75% utilized on CPU, so we think if we added another Xserve running our simulations we could actually get even more performance out of the system. So
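The arithmetic in that passage can be spelled out; note that 1,800 serial numbers per minute is 90 times the 20 units per minute that ten lines at a 30-second cycle time would produce, and 466 transactions per second works out to the per-minute and per-hour figures quoted:

```java
public class Throughput {
    // Units per minute from one line at a given cycle time in seconds.
    static int unitsPerMinute(int cycleSeconds) { return 60 / cycleSeconds; }

    public static void main(String[] args) {
        int lines = 10;
        int perLine = unitsPerMinute(30);          // 30 s cycle -> 2 units/min
        int realRate = lines * perLine;            // 20 units/min across 10 lines
        int simulated = 1800;                      // serial numbers/min simulated
        System.out.println(simulated / realRate);  // load factor: 90x

        int tps = 466;                             // from sp_sysmon
        System.out.println(tps * 60);              // 27,960, i.e. ~28 thousand/min
        System.out.println(tps * 60 * 60);         // 1,677,600, i.e. ~1.68 M/hour
    }
}
```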
now I'm going to sort of go through a demo of the web application that we put together that'll show how we report the information; it's just one of the many pieces of this whole system, and it's all sort of simulated data, so it's not real data in there. But this is sort of the login screen; we do the user authentication through the system as well, so we can go in with username and password. The first screen you come into gives basically management the ability to go in, choose what sites they're interested in, and then see what products were actually built at that site. These are just generic names here, but as you can see, you see what the current calendar week's yields are and what the previous week's were, so you can see if there are any trends going on. And in this case you can see there's sixty-six percent last week versus this week, which was eighty percent, so we could drill into that and see what's going on. Then we see the two sites that we were looking at, and you can see there's 64% cumulative yield for site 2 and seventy-two percent for site 66. So you then drill into site 2, and then we actually have graphing that shows what the volumes were for that week, how the trending yields are going, what the yield points are (these are just generic yield points here), so you can see what's going on, top 5 failures. You scroll further down into the graph, you'll actually see more detailed information about what happened on each day individually, and then again you can root-cause down to see, okay, there's a spike down there at eight percent on the ninth. We go into that, and then we can actually see the list of serial numbers, what the failures were, rework actions. Again, this is just sort of touching the surface; you could drill down further into the serial number and see details about the failures, how many times it has been through certain tests, all that.
So that's it. Okay, I'd like to introduce Dr. Keith Campbell from Inoveon, who will be talking about his application and what he's doing with very interesting technology. All right, thank you. I want to start by just trying to motivate why Inoveon is trying to do what we're trying to do: you lose a lot when you lose your sight. Prevent diabetic blindness: diabetic blindness is the number one cause of preventable blindness in the United States today, and the sad part of it is that for ninety percent of the patients that go blind, it could have been prevented if they'd had treatment in time. And diabetes is a growing problem in the country; there are more and more people being diagnosed with diabetes all the time. They're talking about, you know, the epidemic of obesity, and of diabetes subsequently, in our population. And even though there's treatment that's ninety percent effective, only forty percent of patients actually get that treatment, excuse me, that test, done every year, and that's really a tragedy, because a lot of people are going blind. So this is the problem we're trying to solve, and Inoveon is trying to solve this through technology. This is a diagram
illustrating our architecture, and what we're trying to do is to put a camera in the primary care physician's office that takes pictures of the eye and then sends those pictures over the Internet to a data center, and subsequently to a reading center where they can be evaluated. Now, some of the reasons why we're trying to do this: again, it's this problem of only forty percent of patients being evaluated today. These evaluations are being done by ophthalmologists: you have to see your primary care doctor, and then once you see your primary care doctor you get referred to see your ophthalmologist. You may not follow through and see your ophthalmologist; your ophthalmologist may not have time to see you within the timeframe that you're wanting; and there are several problems, and again, your primary care doctor may even forget to refer you to the ophthalmologist, or even if he did refer you, you may forget to make the appointment. So clearly there's a systems problem here that we're trying to solve. It turns out that ninety-six percent of patients every year do see their primary care physician, so if we can do a photograph of their eye at the time they see their primary care physician, we can try and prevent blindness: you know, basically taking that forty percent compliance rate, turning it up to be a ninety-five percent compliance rate, and trying to take diabetes from the number one cause of blindness in the U.S. to something somewhat less, so that we can work on other social issues. There's a
number of interesting logistical problems that come when you try and do this, and one thing is that a photography application that runs in the primary care physician's office is not exactly a thin client, okay? This is not something that you can run through a web browser. We have a very fat client that actually captures stereoscopic images of the eyes; we get a 50-megabyte data set that we have to collect and send over the Internet to the reading center, where they use CrystalEyes, which is a three-dimensional LCD shutter technology, for actually visualizing the retina in stereo, and we have to manage this in a very robust way. And when I joined Inoveon about four years ago, they had a two-tier client-server architecture where the photography application was a Mac OS 9 application, and Mac OS 9 not having some of the scalable server characteristics we would have liked, we had a data center that was basically running Windows NT, and it was a client-server architecture that was always breaking, because we're doing edge computing out here. You know, we're putting heavy-duty equipment out in the field, at a site where we don't have dedicated technical support staff, where the Internet connection is usually consumer grade. And consumer grade is, you know... we actually had one case where, in our evaluation center, we thought we were doing well by having redundant data connections going in, so we had the cable coming in as well as a DSL line coming in, but it turned out there was a car that hit a power pole in front of the evaluation center and it took both of them out, and we still had to be able to operate in that environment. So we felt we had to go away
from a two-tier client-server architecture and go towards more of an agent-based workflow management system, where the agents would take the data with them from location to location and be transport-layer independent, so we no longer depended on the network. When the networks are up, that's great; but if the networks are down, we can serialize the data to disk, pop them in, and use Federal Express as our transport layer, and be able to still work out on the edge with these heavy-duty clients and provide this service in the doctor's office on a daily basis. So this is just a little
bit of an illustration of how our service works. We will photograph patients at a patient center, and again, this is a primary care doctor's office. If the networks are up, we use Java RMI to take those agents and move them through different queues within the patient center: there might be a registration queue, a photography queue, a dilation queue, and an intraocular pressure queue. And then, once all of that data is collected, they go somehow to the data center (again, that could be Federal Express, or it can be TCP/IP if the network is up), go through the archival queue, then something to publish to the web site and other things, and then move on to the evaluation center, where they may be read by one reader, and if there's something that's found, they may be referred to be over-read by an ophthalmologist or by a senior reader or other things, and then that information can go back to a data center where it's archived.
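The transport-independent agent idea boils down to this: serialize the agent, data and all, to a file, and the file can travel by TCP/IP or, as described, by Federal Express, then resume at the other end. A minimal sketch; the class and method names are invented for illustration and are not Inoveon's actual API:

```java
import java.io.*;
import java.nio.file.*;

// Sketch of a workflow agent that can be shipped offline when the network
// is down: serialize it to disk, move the file by any means, deserialize.
public class AgentTransport {
    static class PatientAgent implements Serializable {
        private static final long serialVersionUID = 1L;
        final String patientId;
        final byte[] imageData;   // stands in for the ~50 MB stereo image set
        PatientAgent(String patientId, byte[] imageData) {
            this.patientId = patientId;
            this.imageData = imageData;
        }
    }

    // Serialize the agent to a file so it can travel without a network.
    static Path saveToDisk(PatientAgent a, Path dir) throws IOException {
        Path out = dir.resolve(a.patientId + ".agent");
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(Files.newOutputStream(out))) {
            oos.writeObject(a);
        }
        return out;
    }

    // At the receiving site, deserialize and resume the workflow.
    static PatientAgent load(Path file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(Files.newInputStream(file))) {
            return (PatientAgent) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("agents");
        Path f = saveToDisk(new PatientAgent("p-001", new byte[]{1, 2, 3}), dir);
        PatientAgent back = load(f);
        System.out.println(back.patientId);  // p-001
    }
}
```

When the network is up, the same serialized form can go over RMI or TCP/IP instead; the workflow code never cares which transport carried it.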
So for our diabetic retinopathy evaluations, what does Sybase provide us? Well, one of the things is that when we were looking at re-architecting, OS X had come out, and one of the questions that was raised was, well, can we use the same platform end to end? And at the time we had very limited options, and actually, when we heard the announcement that Sybase was going to come out on OS X, that really changed the way we started thinking about the infrastructure on which we would deploy, because we were able to have an enterprise-class database that would provide the reliability that we needed and allow us to focus on some of the other issues of reliability in computing out on the edge. So it's something that has basically allowed us to work on our business enterprise architecture without having to worry about the database, which was one of the things that we had had to worry about in the past. So Sybase provides us with these warm copies of data, so that we can have quick deployment of standby reporting copies and backup copies for disaster recovery. We use a hierarchical storage management system for storing these 50-megabyte digital files, and we have pointers from the database to the hierarchical storage management system for doing that. What Sybase provides us, again, is this enterprise-class scalability and reliability, support for redundant data centers, confidence in the data integrity and reliability, and also a very cost-effective and yet proven reliable solution. And finally, I'd like to introduce Bob Cusick from Servoy; he's the managing director of Servoy USA and will talk about the Servoy R2 deployment.
Thank you, guys, for braving the last presenter of the last session, woohoo! Okay, I have about 200 slides... no, I have three slides and then a real demo, all right? So hang in there, we're almost there. All right, I'm going to talk about Servoy R2. Servoy R2 is a development and deployment environment for GUI applications that can be connected to any SQL database that you want, or more than one simultaneously, and we can actually see it (this slide is kind of muted). It's very easy to use, a zero-deployment client; it's based on Java, so it will run anywhere, and you'll see it. I just wanted to talk to you about some of the things that you can do with Servoy, and in particular I wanted to talk to you about a really big win that we got with Stanford using Sybase ASE and Sybase ASA, as well as Servoy. So here was the challenge: they needed to integrate a whole bunch
of stuff, and they have legacy things inside of the university that you wouldn't believe. Everybody has their own favorite database: it could be Informix, could be Sybase, could be ASA, could be Oracle, could be anything, and they need to put it all together. So the way that they were doing it is they had knowledge workers who would get data dumps and exports and spreadsheets and FileMaker and 4D and some Access stuff, and they would just kind of mish-mosh it all together and then re-output it as spreadsheets and data dumps and put it back in. So it was very, very cumbersome; it took a lot of time; very error-prone; very human-intensive. So what we helped them to do is we helped them codify their process by using stored procedures inside of ASE to get a standardized way to do everything, and we used Servoy to build the user interface. And because Servoy is so easy to use, these non-technical people, these people who are using end-user databases like Access or FileMaker Pro, can easily create these user interfaces with no programming. All right, so we're going to actually do that right now: I'm going to build for you here a customer/invoice/invoice-detail complete solution, and I'm going to deploy it in less than five minutes.
Ready? Here we go. All right, so we're going to do a new solution, and I'm going to call it WWDC. And here I have a list of named servers; these servers can live anywhere, they can be on LAN, WAN, they can be anything that you want. I have one here that is a Sybase database called example data, and down here this shows me all of the tables that are inside of that named connection's database. So I'm going to choose to create two forms, one based on customers and one based on orders. Customers, okay, so two tables, so I click OK. Now this will go and actually interrogate the database, so these are all the columns inside of the customer table; let's pick some here, and I'm going to say OK. And these are the ones inside of the orders; let's pick some columns here, OK. Now I'm going to come out of this data design mode into the browse mode, and I now have an interface into my SQL database. I can insert, update, and delete just by simply pulling it down and saying new record or delete record, and the default mode of Servoy is auto-commit, so I can literally come in here, backspace over this character, and as soon as I leave this field it is automatically committed and written to the backend database. All right, so that's pretty cool. So I have my customers, and the orders work the same way; there are two different tables, I just come here, and I can go ahead and use my keyboard if I want to, and make these bigger or smaller by the keyboard. All right, so that's all fine and good, but now let's go ahead and do something interesting. So I have the order header, but now I want to see all the details. Just one quick question, though, first: how much SQL did we write so far? Okay, nothing. That's the fun part, right? That's where the knowledge workers can get in here and create these forms and these applications they need to get the data, and not have to call up IT and go, oh, by the way, you got that thing done yet? All right, so we want to now join things
together. All right, how many of you want to really tutor these knowledge workers in how to do SQL joins? Anybody? No, probably not. Okay, so I'll show you the Servoy way of doing things. So we're going to go back into our data designer mode here, and we're going to come up, and we have a thing called a relation; that makes sense. So I can either go ahead and hit the create button and it will read the constraints that are on the backend table for me, or I can create a new relation, which is what I'm going to do. Here's where things get really interesting in Servoy, because this is our list of named connections. All right, so we're going to go from our example data customers to our orders, and I'm going to choose the primary key; it shows me all the columns; I'm going to choose customer ID to customer ID, and in this case it's going to be an equijoin, or an equal join, but I can do greater than, less than, not equal to, and I can have multiple predicates, multiple key fields, and get delete options on the link just by clicking a check box. So I click OK. Now I'm also going to do another one: I'm going to go from order to order details. Here's where it gets interesting, right, because they're named connections, so this one on the left-hand side could be MySQL, and on the right-hand side I could join your Oracle database, or a Sybase ASE database, or Sybase ASA, to MySQL, any combination that you want. So you can actually join data across multiple vendors' databases and have it all on one screen. So we go from order to order details, based on order ID to order ID, all right, click OK. Now I'm going to click and drag, move these objects, and now we want to show all of the order details for this order below it. So we have a structure that's called a portal; that's what this object is, and it shows me the valid relations; it knows that I'm on the orders table, it knows there's only one valid relation, so it only shows me that one. Let's pick these columns, click OK.
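The relation defined here amounts to a SQL equijoin: rows pair up wherever customers.customer_id equals orders.customer_id. A minimal in-memory version of the same idea, with record types invented for illustration:

```java
import java.util.*;

// Toy equijoin: pair each order with the customer whose key column matches.
public class EquiJoin {
    record Customer(int customerId, String name) {}
    record Order(int orderId, int customerId) {}

    static List<String> join(List<Customer> cs, List<Order> os) {
        // Index customers by key, then probe with each order's foreign key.
        Map<Integer, Customer> byId = new HashMap<>();
        for (Customer c : cs) byId.put(c.customerId(), c);
        List<String> rows = new ArrayList<>();
        for (Order o : os) {
            Customer c = byId.get(o.customerId());
            if (c != null) rows.add(c.name() + " -> order " + o.orderId());
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Customer> cs = List.of(new Customer(1, "Acme"), new Customer(2, "Globex"));
        List<Order> os = List.of(new Order(10, 1), new Order(11, 1), new Order(12, 2));
        System.out.println(join(cs, os));
    }
}
```

Swapping the equality test for greater-than or not-equal gives the other relation operators mentioned in the demo; the point is that Servoy generates this kind of join for you.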
And now I have this portal structure; I can make it bigger. So now when I come out of my design mode I have all the children records, okay, for those orders. Okay, great. Now let's do something else. Let's go ahead and say that what we want to do is take all of our orders, come to our customer page here, and see all of the orders and the order details for this customer. Okay, anyone want to write that SQL statement? Probably not; me neither. All right, here's the Servoy way of doing it: we put a tab panel object on there that says, okay, here's customer to orders, and there's a form called orders that's based on that join, so that's all we need to do. We say yes, use that form; here's my tab panel, we'll pull it out here; come out of my data mode. Now I have: this customer has 66 invoices, this one has 14, this has eight, this has 19, this is 14. I can go through here and I can edit this data even though it's three tables away, and how much SQL did I write? Zero. All right, I don't have time to show you all of the really nice user interface things that you can do; this is obviously a really simple example, but you can make it look really pretty. Now let's deploy it.
Okay, so now we've built it, it's beautiful, this is our app; now I need to deploy it worldwide. Okay, with some other products, like desktop databases, right, it's a sneakernet install, or a network install, or a ghosted-image install. With Servoy, it's point a web browser at your Servoy application server and push the button. We use Java Web Start technology, so what happens is, as soon as I click, everything happens completely outside of the browser, independently of the browser, using Java Web Start. I say start; now, this Java Web Start application is self-healing: it will check with the server every time it's launched, and if there's a new version it will self-update and continue the launch.
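That self-healing launch boils down to a version check before every start. A toy sketch of the control flow; the real mechanism here is Java Web Start's, and these names are mine:

```java
// Illustration only: on every launch, compare client and server versions;
// if they differ, "update" (use the server's version) before running.
public class SelfHealingLaunch {
    static boolean needsUpdate(String clientVersion, String serverVersion) {
        return !clientVersion.equals(serverVersion);
    }

    // Returns the version that actually runs after the launch check.
    static String launch(String clientVersion, String serverVersion) {
        if (needsUpdate(clientVersion, serverVersion)) {
            // In Web Start this is where new jars would be downloaded.
            return serverVersion;
        }
        return clientVersion;
    }

    public static void main(String[] args) {
        System.out.println(launch("2.0", "2.1"));  // out of date: updates, runs 2.1
        System.out.println(launch("2.1", "2.1"));  // already current: runs 2.1
    }
}
```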
So here's my two solutions: there's a CRM, and the one that we created called WWDC, and here on my client is WWDC just like it was. Everything works just the same: I can go to my customers form, here's the embedded customers form, and it works just the same as it did inside of the developer. This can be deployed over the LAN, it can be deployed over the WAN; I built it in less than five minutes and deployed it. Now, "Bob, how about things like network latency?" There is a site that you can visit, and this is going to be demo.servoy.com:8080. Now, if this looks familiar: this is a server that's running in Amsterdam. Notice it does the same thing, it goes and gets a client, and you can run this from your own network if you want to, just to test the network latency. Is it fast over the network? Coming, coming, coming... here it is, all right.
And we're going to go ahead and load up a solution, and it's opening up right now. So once the solution loads, it will run as fast as it does on the local network. Here's the CRM demo; this is the demo that we ship as a sample application. We'll go here to the companies form, we'll click on this detail, and as you can see, all the controls are native Aqua, okay? If you were to deploy this on a Windows machine, all of the controls would be native Windows; if you deployed it on a Linux machine, all the controls would be native Linux; if you did it on a Solaris machine, all the controls would be native Solaris. Okay, no coding, no changes, ready to go. This is data coming from Holland, this is live data, so let me just scroll through this data one record at a time for you, and you can see the tab panels and all; it goes very quickly even over a network this far away. All right, and one more thing in
closing, because I know we want to get to some Q&A: I want to give you a chance to try out Servoy on your own. So after this speech, if you would like to give me your business card, I will send you a copy of the reg code for no charge, a $649 value; I'll give it to you for a business card. Here is a sample app that comes with it, but what's really neat about this is that you can use things like built-in JavaBeans. So here are some built-in JavaBeans that are doing this: every time that I click this, it's actually doing a SQL query to get me this native HTML column, passing those values to a JavaBean that then does the charting, and it's doing that twice: two different charts, two different queries, two different passes, and it's drawing this in real time as the records are changed. That's about all I've got, so I stuck to my time frame. Thank you very much; do see me afterwards and I will be happy to get you a copy. Thank you. Thank you. Okay, I
think right now what I'd like to do is just point out that we are committed to this platform; we're going to be continuing to enhance it. The value that we add to this is a rock-solid, industrial-strength database technology on a very, very high-performing and low-cost platform. You can get more information from myself, Mike Azevedo at Sybase, or Darrell Salas; you can contact any of us at any time. We have a version of Adaptive Server Enterprise available on our website at sybase.com/mac; it's a free download of our Developer's Edition for the Mac, and I think the current version that's out there is 12.5.2 (I may be wrong, it might be 12.5.1). All of our documentation is available online; you can reference the HTML pages if you have any question about the syntax of a command or some topic related to ASE, you can get that.