WWDC2004 Session 638

Transcript

Kind: captions
Language: en
Please welcome our first presenter of the morning, Director of Hardware Storage, Alex Grossman.

Good morning, everyone. Thank you very much for getting up early on a Friday morning after a really great night last night; I appreciate everybody being able to do that. We're going to start this morning. We've got a lot to cover, and hopefully it's going to be very useful for everyone. I've got a pretty full agenda, and I've got some great guest speakers to talk to us about some real-world stuff. But before we do that, I want to go through some calisthenics this morning, so we're going to use your right hand or left hand a little bit, just to help give me an idea of how many people are familiar with the terms I'm going to relay here.

First one is DAS, direct attached storage. How many people are deploying direct attached storage today? OK, not that many. How many people are familiar with the term NAS, or network attached storage? And how many people are deploying network attached storage? A lot of people. And the last one is SAN, storage area network: how many people are deploying those? OK, that's great. Now for the other hand, and we've got to use the other hand for this one: how many people are familiar with Fibre Channel networking? OK, great. And how about iSCSI? Not too many. OK, great, fantastic.

So let's get started and talk a little bit about the agenda, what we're going to cover. Oh, one last question: how many people today have Xserve RAIDs?
Cool, OK. So the first thing I'm going to do is give you a quick refresher on the Xserve RAID and talk about what's unique about it. You know, a year or so ago, Apple was probably the only tier 1 storage vendor out there with an ATA-based RAID system. We're seeing a lot more ATA out there now, but not all of them are created equal, and we think we have something unique in the Xserve RAID. Then we're going to talk just a little bit about RAID basics and where we see that whole market going, and then a lot about storage planning and what it's really like to plan a storage deployment. Then we have one of the biggest areas that we get questions on, and this is constant from people: they want to really understand Fibre Channel infrastructure. What does it really look like? What should it look like? What does the future look like? You know, the costs have been really high, and we want to take a stab at what it's really going to look like, so we're really happy to have a QLogic SAN architect, Ryan, with us; we're going to have him come up and talk about that. Then we have a customer deployment, a real-life example where we actually had Apple and some partners get together and do a great deployment for one of our customers. Then we're going to wrap it up and talk a little bit about best practices, and some Q&A. Sound like fun? Good, so let's get started.

First, let's talk about the Xserve RAID and what it really is. The Xserve RAID is really a storage building block.
It's a high-availability design in a 3U enclosure; it can scale up to three and a half terabytes, and it can scale up to nearly 400 megabytes per second of sustained throughput. Both read and write performance are very high, very competitive with systems that cost a lot more. The other thing about the Xserve RAID is that it's extremely versatile. It can be used in a DAS deployment, what we'll call our SCSI replacement strategy; it can be combined with an Xserve G5, an Xserve, or even a Power Mac and used in a NAS-type configuration; and it has capabilities that allow it to be used standalone or combined with other systems to build great SAN infrastructure.

Probably the most important thing about the Xserve RAID is the high-availability design, and what I mean by that is that everything in the box is swappable. The design really borrows from systems that have traditionally cost a lot more money, where the components are either completely redundant or easily swappable, so usually there's no downtime, and if there's ever downtime, it's usually extremely minimal. You can see that just looking at the front of the system, where you see 14 hard drives that can easily be unplugged. In the back of the system, you see a clean design, an Apple design from the ground up. We didn't start with existing designs or use pieces and parts that were out there; we actually started clean, to design a system that had high availability built in. We have redundant power, redundant cooling, hot-swappable components throughout, warm-swappable RAID controllers, and a passive midplane for the data. So if you pull all the components out, what you end up with is a metal box with a midplane, a board in the middle that does passive signaling, so it's very easy to change and update the system as you need to. This is one of the great points of the Xserve RAID.
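The capacity figures above are easy to sanity-check. A hedged sketch, assuming 250 GB per drive and treating the 14 drives as two 7-drive RAID 5 sets (one per controller); in RAID 5, each set gives up one drive's worth of capacity to distributed parity. The drive size and array layout here are illustrative assumptions, not published specs:

```python
# Hypothetical capacity math for a 14-drive array -- figures are assumptions.
DRIVE_GB = 250          # assumed per-drive capacity
DRIVES = 14             # assumed: 7 drives on each of two controllers

raw_gb = DRIVE_GB * DRIVES                # 3500 GB, i.e. the ~3.5 TB figure
# Two 7-drive RAID 5 sets: each set loses one drive's capacity to parity.
raid5_usable_gb = 2 * (7 - 1) * DRIVE_GB  # 3000 GB usable

print(raw_gb, raid5_usable_gb)
```

Under these assumptions, the "three and a half terabytes" is raw capacity; usable space after parity is somewhat less.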
Now, beyond the high-availability design, one of the things that really sets the Xserve RAID apart from a lot of the other systems out there is the management. We chose to go with a Java-based management tool for the Xserve RAID, and it's something we've updated constantly since we introduced it. In fact, one of the things I can tell you about now is that you're going to see an update to the RAID Admin utility that's actually happening next week. I was very fortunate last year to be able to show you a significant update; this one's a little smaller, but it's a performance update, so we're constantly updating the performance and scalability of the Xserve RAID. The beauty of the RAID Admin utility is that it allows you to monitor one or even hundreds of systems from a single screen, and it also allows you to manage those systems very easily and do all the management tasks remotely. That was mainly driven by people like me who are just lazy and, you know, don't want to get up early in the morning to find out what's going on with their systems; they just want to do it from home. We've also added more and more SAN capabilities and high-availability features, like LUN masking and mapping, and things like being able to rebuild parity on the fly, and we're doing a lot more of that; when the new release comes out, you're going to see a lot more of those capabilities built in. So this is really a tool that is built for every platform, yet it looks stunning on the Mac platform. It's really a phenomenal tool.
The other thing about the Xserve RAID: this was a request we had here last year from just about everybody who attended. We had people deploying the Xserve RAID on a number of platforms beyond the Mac, but we hadn't gone through the actual certification and compatibility work yet. It was kind of funny that our customers really drove us there. In fact, there was a website that went up last year, and it's still really active, called AlienRAID.org, and it's really exciting. AlienRAID.org was really the first: it was a group of people who actually said, did you know that the Xserve RAID works on Windows, and that it works on Solaris, and it works on AIX? And they really went step by step, actually showing you how you would install it. Most of those were "plug the cables in, turn it on," so it was really pretty simple overall.

What we did over the last year is we asked our customers who the infrastructure partners were that they really wanted on the Xserve RAID, and we're open: if you have suggestions as to who else you'd like to see, we're very open to that. We chose what we felt were the best of the best, people like QLogic and VERITAS and Emulex and Brocade and Candera, people that you really care about, and then also the traditional Apple vendors, like ATTO Technology, where we really felt they would add a lot to the platform. And of course we had to look at popular operating systems as well. I think everybody in here realizes that we work very well on Mac OS X, but we're also certified on Red Hat, that's their Advanced Server 2.1 and 3; also on Yellow Dog Linux, which actually runs on our platform; and Novell NetWare 5, 6, and 6.5; Windows 2000; Windows Server 2003; and Windows XP Professional. All those certifications have been done, and we're continuing to certify the Xserve RAID on more, so you have guaranteed compatibility for the system, not only in all-Mac installations but also in heterogeneous installations. I'd just like to get a show of hands: how many people have a heterogeneous installation with the Xserve RAID? Wow, that's a lot more than I've seen in a long time. That's great.
So let's just go over some quick RAID basics. Everybody in here should be familiar with multiple levels of RAID. These were the levels that were initially defined by Randy Katz at Berkeley in 1987, and these are still what I call the pure RAID levels. They can be combined with each other, and there are some fancy RAID levels that people are looking at today, but for the most part these are the RAID levels everybody uses.

What is a RAID level? Well, roughly, as you go from lowest to highest in number, you increase in redundancy, or availability, and in performance. If you start with RAID 0, it's striping: not really a true RAID level, but a lot of people have deployed, or still are deploying, striping today for speed, because it's one of the easiest ways to take a number of disks, combine them together, and get performance out of them. Then probably the most popular RAID level out there is mirroring. This is basically taking either one hard drive or a group of hard drives and mirroring them, keeping the same data set between them, so if one were to fail, you have another copy of the data. This is the photocopy way of doing things, and it's not very efficient; it's kind of wasteful, right? If I have a copy of something and I make a photocopy, I've used two pieces of paper. It's the same with mirroring: if I have one hard drive and I mirror it, I've used a hundred percent of the second hard drive's space for mirroring.

As you move up, you get to what I would consider more efficient RAID levels, and the one we focus on a lot, the one we've optimized for, is RAID level 5. The beauty of RAID level 5 is that it's a distributed parity scheme. What we mean by that is that we actually compute, with an algorithm, a piece of data on every one of the hard drives that is derived from the data on the rest of the hard drives. Essentially, that means that if you have a hard drive fail, we can instantly, virtually on the fly, recreate that data from the remaining hard drives, so you'd have to lose a large number of hard drives before you'd actually have a failure or lose data.

The problem with RAID 5 in the past has been that the performance has not been consistent. The read performance is very similar to that of RAID 0; it's like striping a bunch of drives, so the performance is better than a single hard drive. But the write performance, especially on random writes, has been slower. So people doing things like databases or online transaction processing, anybody do that type of work on their systems? If you're doing that, you knew that RAID 5 was a bad way to go in the past. And if you're doing things like video, anybody doing streaming video here? If you were streaming video, RAID 5 was also terrible; people went to things like RAID 3, or in most cases they were just doing RAID 0 striping. With the Xserve RAID we really looked at that, and we put our team together and built some really sophisticated algorithms and some caching schemes that make RAID 5 faster in most cases than any of the other RAID levels, including RAID 0. So our performance in RAID 5 is really quite good, and the protection is really good.
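The distributed-parity idea described above can be sketched in a few lines. This is a toy byte-level illustration of the XOR principle, not Xserve RAID's actual on-disk format: parity is the XOR of the data blocks, so any single lost block can be rebuilt from the surviving blocks plus parity.

```python
# Toy RAID 5 parity sketch: XOR-based reconstruction of one failed "drive".
from functools import reduce

def parity(blocks):
    """XOR corresponding bytes across all blocks to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe's worth on three data drives
p = parity(data)                      # parity stored on a fourth drive

# Simulate losing drive 1 and rebuilding it from the survivors plus parity:
survivors = [data[0], data[2], p]
rebuilt = parity(survivors)
assert rebuilt == data[1]             # the lost stripe is recovered
```

The same XOR that produces the parity also undoes it, which is why a single-drive failure is recoverable on the fly; lose two drives in one set, and the equation no longer has enough information.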
So let's talk about storage planning a little bit. This is really the most important part of a deployment, and really about a best-practices strategy. It's all common sense, but it's things we don't think about on a daily basis. When we start there, we have to talk about the three different approaches to storage, and some of you may not realize you're actually deploying a number of these within your organization.

The first one is direct attached. We talked about direct attached storage: what is that? There are a lot of different ways to look at it. A great example is the Xserve G5: the locally attached hard drives, the hard drives that are in the Xserve G5, become direct attached storage. For the most part, this is the way traditional storage was done. You bought a server, you put hard drives in the server, and it was a done deal. When you needed more performance, since the servers didn't have a lot of performance, you added servers, and therefore you added storage, and that's the way things scaled. It kind of looked like this: you have a network down at the bottom, all your clients, and obviously I could draw hundreds of clients, but I'm not that good at drawing. You have Ethernet switches, those are the gray lines, and then you have the two Xserve G5s, and in this case you could have a terabyte and a half of storage online, assuming you left it all in either JBOD, just a bunch of disks, or a RAID 0 stripe. So it's quite a bit of storage, but the problem is that it's never enough.

So you might take one of your servers, let's call it a high-use server, this could be your email server, for instance, and it could be an Xserve G5, a Mac OS X-based server, or it could be a Windows server or a Linux server, and you're going to add some external storage. Well, that works. You might even find that with the Xserve RAID, because it's a dual-ported, dual-controller design, you want to share half of the storage with one of the boxes and half of the storage with the other. A little more efficient: you get the ability to centralize your storage, so that you take advantage of the high availability of the Xserve RAID, yet you get to share it over two servers. Or you might find that that's not enough, and you just want to attach more storage to your individual servers. This is still direct attached, and this is the way things have been done; it's truly a traditional approach, and it works really easily, because today most people have LAN-based backup, so you're backing up across the LAN.

Now, this was a really good idea when the data sets were small. How many people can remember a couple of years ago, when your entire organization ran on a couple hundred gigabytes? Right. I had an experience a few months ago, when GarageBand first came out. I was actually on a plane, and I went to install GarageBand on my notebook. I put the CD in, actually it was a DVD, and I went to install it, and it said that I didn't have enough room. This is my notebook, and I thought, don't I have an 80-gig hard drive in here? Well, I had a few Keynote presentations, so it was a little hard to do. But for the most part, think about backing up this amount of data: in this case we would have almost seven terabytes of data here. Imagine backing that up across the LAN. Anybody go to the backup session earlier this week? For those of you who didn't go, and those of you who back up terabytes: in an uncompressed environment, it can really take over 24 hours to back up a terabyte of storage. The backup windows are shrinking, so this LAN-based backup doesn't seem to work very well when you have a lot of storage.
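The 24-hours-per-terabyte figure above is easy to ballpark. A rough sketch with assumed effective throughput numbers; real LAN-based backup rarely sustains anywhere near wire speed once protocol, server, and tape-drive overhead are counted, so the low end of this range is plausible for the era:

```python
# Back-of-the-envelope backup-window math. Throughput figures are
# illustrative assumptions, not measurements from the session.

def backup_hours(data_gb, mb_per_sec):
    """Hours to move data_gb gigabytes at a sustained mb_per_sec rate."""
    return data_gb * 1024 / mb_per_sec / 3600

print(round(backup_hours(1024, 10), 1))   # 1 TB at 10 MB/s: ~29 hours
print(round(backup_hours(1024, 30), 1))   # 1 TB at 30 MB/s: ~9.7 hours
print(round(backup_hours(7000, 30), 1))   # ~7 TB even at 30 MB/s: ~66 hours
```

At an effective 10 MB/s, roughly what a loaded gigabit LAN plus tape pipeline might deliver, a terabyte does indeed blow past a 24-hour window.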
So when you start looking at this, you go, well, one of the problems is I need to share the storage, but I have something like this, where I'm pretty dedicated: one Xserve RAID, let's call it three and a half terabytes, is dedicated to that server, and the other Xserve RAID is dedicated to the other server. My resource sharing is limited, so I'd better guess right as to how much storage I really need on each server. Let's say one of those servers needed six terabytes and the other one needed one terabyte: in the direct attached world, it's really hard to get that right. You just can't do what we'll call provisioning that storage between the two systems.

So when you look at this traditional approach, is it still viable in today's world? A lot of people look at it and say, sure, it's viable, because it's a lower initial investment: I don't have to do much planning, I know I can just buy one and attach it. But there are problems with scalability, and there are problems with longevity, because a lot of the time that storage is internal to your servers, and when Moore's law kicks in and it's time to replace the server, you end up not having storage that's compatible. That's the beauty of an Xserve RAID if you deploy it: because it's external and it's Fibre Channel, it's just going to plug in. But you also find out that you're pretty limited on backup and restoration, so it's not really the ideal approach.

So what else do you do? What do most people do? Most people have moved to a network attached storage model. They start out with some type of direct attached, and usually when there are many more than two servers out there, they attach some type of NAS right onto the network. So it's attaching to that gray wire that's there indicating our network, and in my case I chose to take an Xserve G5 and an Xserve RAID and use that as a NAS replacement. And that works; it gives you the ability to share that storage across the network to both servers, so you get some provisioning, because you don't have to dedicate those resources, you can leave them open, and it has expandability. But the expandability becomes limited, because what happens at this point is that the wire coming out of the Xserve G5, that single Ethernet, becomes the bottleneck. Unless you're building a 10-gigabit fiber backbone, or just a 10-gigabit Ethernet backbone, you're really pretty limited in the overall performance you're going to get. How many people have a four-gigabit Ethernet backbone out there? How about a ten? So I still have one hand. So really, you're pretty much limited. How many people have a gigabit backbone? That's just about everybody out here. So imagine that: if each Xserve G5 can saturate that gigabit backbone, how is your network attached storage going to do? And this is really what most people do.
Most people have deployed a network-attached specialty appliance, which is just an embedded NAS, a very lightweight NAS server. The reason they've done this is that there are no client access licenses, just like an Xserve G5, and they're usually inexpensive. But usually what happens is you end up with this: a lot of different little appliances out there. The problem with that is that while it gives you a heterogeneous approach, and while it has a lower investment, it is a management nightmare. It's very difficult to manage, it's very difficult to know what's failed, and you find that you start plugging them in all over, and when you have an Ethernet problem, it's really a problem, a big problem, because you lose accessibility to the storage. And whether we like it or not, anybody here ever crimp an Ethernet cable? OK, if you've crimped an Ethernet cable, you know that usually one out of three you're going to screw up, and usually it fails like six months later, when it's hanging there. This is part of the problem with hanging everything on the network: that network was made to deliver small packets. It wasn't really made to deliver the performance and the reliability to a lot of different servers; it is truly a collision-based network. But for most people that works.

In fact, most people who start there tend to move to the NAS appliances, and we know who they are. They're generally extremely expensive in per-gigabyte cost, but they have a lot of features. They do things like snapshotting, which means you can replicate the data really quickly, and they're appliance-like, so deployment is generally easy. They have a built-in file system; for the most part, these very high-end appliances have an operating system and a file system. They also have the downside of being a single-vendor lock-in: once you go with them, you're for the most part using their management tools, and you're having to buy them again and again and again, and it gets pretty expensive. You know, we all know those companies, and they make good things; companies like Network Appliance and EMC build really nice products. But they still have the issue of that single point of connectivity to Ethernet. Now, they may have multiple Ethernet ports, but generally you're not going to have hundreds of Ethernet switches; you're going to have one or two very high-end switches. And what if you don't have a high-end infrastructure? You're really funneling everything through a gigabit. Has anybody here ever seen Gigabit Ethernet actually perform at a gigabit? So that's one of the other issues that we run into all the time.
So what is the choice? The choice is to build a SAN, and it sounds like a lot of people out here have already taken that step. So what does a basic SAN look like, and where does it scale? Well, most people who build a SAN start out again with that direct attached model. They start out there and they add storage, and I threw a Fibre Channel switch in here because I knew I was building a SAN: from a best-practices standpoint, I want to start out with expandability and scalability already in there. You can take an Xserve RAID today and use a tool called LUN masking. You can map each address, almost like a MAC address, but we call it a Fibre Channel worldwide port name, and we map those worldwide port names very easily in the RAID Admin utility to each server. So what I've done is provision storage to each server: the servers can't see each other's storage, but they see the storage they have. It's a really simple implementation, and I can add more, and as I add more, I don't degrade my performance, because my back-channel network is a specialized network, in this case called Fibre Channel, and Fibre Channel is a non-blocking network infrastructure. My performance scales as my capacity scales, and we call that a SAN island. Then people will scale it out, and as you scale it out, it becomes more heterogeneous: the servers become Macs, PCs, Linux, hopefully they're all Macs. But as you scale it out, now you can start to deploy more storage on more servers, and you're not limited. You can reprovision that storage as you need to: in some cases it's a manual process, and in other cases there are partners that can help provision that storage instantaneously, without any interruption of service.
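The LUN masking described above boils down to a per-initiator visibility table: each host's Fibre Channel worldwide port name (WWPN) is mapped to the set of LUNs it may see. This is a hypothetical sketch of the idea; the WWPNs and LUN numbers are made up, and RAID Admin does this mapping through its GUI, not through any API like this.

```python
# Toy LUN-masking table: initiator WWPN -> set of visible LUNs.
# All identifiers below are invented for illustration.
lun_masks = {
    "20:00:00:e0:8b:00:00:01": {0, 1},   # e.g. a mail server sees LUNs 0 and 1
    "20:00:00:e0:8b:00:00:02": {2},      # e.g. a web server sees only LUN 2
}

def visible_luns(wwpn):
    """Return the LUNs provisioned to a given initiator; unknown hosts see nothing."""
    return lun_masks.get(wwpn, set())

assert visible_luns("20:00:00:e0:8b:00:00:01") == {0, 1}
assert visible_luns("20:00:00:e0:8b:00:00:99") == set()
```

The point of the mechanism is exactly what the table expresses: two servers on the same Fibre Channel fabric each see only their own provisioned storage.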
Now, this is really more of what a typical enterprise SAN island looks like. How many people deploy something that looks like this? Probably nobody's is as pretty as a slide. In this case you have a fair amount of storage, you have a redundant switch architecture, and you have backup that's still on your LAN, doing incremental backups. This is typically a SAN island. What do I mean by an island? Well, a lot of people say, I have deployed a SAN completely in my infrastructure, I have a complete SAN, and for the most part people haven't. What they've done is deploy dedicated islands of SAN out there and connect them together, which has usually been very expensive to do, and hopefully we're going to learn that that's getting less expensive.

Now, the next thing that you really look at is what's called a three-tier implementation, and three-tier is something we're going to talk about in a minute, called storage tiering, which really gives you the most viability out of the storage you own today and the storage you're going to buy tomorrow, because hopefully the total overall cost of your infrastructure is going to be lowered, and even though it looks more complex, it's actually going to take the complexity out. This is an example that we see over and over again, where someone has primary storage serving a number of servers. Let's face it, even in a single Xserve RAID you're getting almost three and a half terabytes of storage; it's a lot, and you can serve a lot of servers with that. But the problem is, if you're in an environment where you're turning over a terabyte of storage a day, you're not going to be able to do delta backups every day. So what do you do? You put in a second tier of storage. Now, I'm going to talk a lot about tiers, but in this case, this extra tier of storage uses a dedicated backup server and software, and if you attended the backup session, you'll know there are a number of companies on the Mac OS X platform that do this, on both a client and server basis, companies like Bakbone, Atempo, and Dantz. They're able to actually stage this from disk to disk, and that happens a lot faster using a dedicated backup server. Then they take that secondary disk-to-disk backup, schedule it so it happens often, and then they'll actually move it to tape, and when they move it to tape, the archive, they free up their secondary backup and they're able to do this again. So this architecture is something we're seeing more and more, and as storage costs come down, this becomes a really viable alternative.
So why would we want to go to storage area networking, and what are the upsides and the downsides? Well, the first thing to know is that it's a complete storage communication infrastructure. It was designed for storage: it has the performance to handle storage, and it has scalability, so it's highly scalable and high performance. It's also a mission-critical environment, so you can build it as mission-critical as you want, so that there are no single points of failure, or, at a lower cost, you still have a fairly good guarantee that you're not going to have the problems you see in other infrastructures. The other thing that's nice about it is that, because of the speed of the interface, it allows you to do that tiering I just showed you, because you can do sophisticated disk-to-disk copies, essentially. And the other thing is that today the cost can vary greatly. Whereas just a couple of years ago, SANs were dedicated really to the high end of the enterprise, today we see SANs in areas, especially in the video environment, where people have four workers and they're building a storage area network. So it scales from modest to extremely outrageous. There's one thing to always be aware of, though, and that is that interoperability is not guaranteed, so when you're planning a SAN, you should plan ahead to get certified vendors and really think about interoperability.

The other thing that we hear a lot about is remote replication. Remote replication is something everybody wants to do, and today it's usually only used in large enterprises, but there are ways that you can do remote replication in a very easy manner. It really is highly dependent on applications, though. The Xserve RAID, I should say, works with nearly every remote replication scheme; it looks like a SCSI target out there. On Mac OS X Server there are a number of different remote replication schemes: databases can easily be replicated over IP, directory information can easily be replicated, and hopefully you visited some of those sessions to learn that. There are also appliances out there that can do remote replication for you, and they're heterogeneous, so regardless of the type of infrastructure you have, you can actually see replication done with companies like FalconStor and others, where they'll do replication over long distance or short distance, using Fibre Channel and IP together. So we're really looking at that in the future. How many people have deployed remote replication? How many people are thinking about it over the next year? So there are a lot more people looking at it, so we should watch this very carefully as well.
I got the count of the audience of how many people have direct attached storage, SAN, and NAS. This is what IDC has told us across the enterprise, and this includes large enterprises and what they call small/medium business; this is the way it looks. A lot of the DAS that's on here, the direct attached, is driven by people in small business who just have servers with a direct attached SCSI or Fibre Channel device. But the question is, what's that going to look like over the next couple of years? They've looked at it, and they say that in 2007 there's going to be a dramatic switch to SAN architecture, pulling away from the rest, so you'll see this integration of NAS and SAN, the convergence word that people are using all the time. And you're going to see DAS drop off, because it's much more reliable and much easier to scale and change servers out when your storage is in the background and your servers just need a drive to boot, and sometimes they'll even boot off the SAN. So there are a lot of different ways to do this.

If you're going to deploy this, what challenges do you face? Well, we all face those same challenges. You've got to provide cost-effective storage, and anybody hear of an IT budget that's growing this year? OK, that's the problem.
So there are these other things, like compliance, that you have to worry about now. We say you need to build a smart storage strategy, and that is really to weigh data protection against cost, and to build what's considered a tiered environment. What I mean by a tiered environment is that storage can be deployed in a number of different ways. The first way is mission critical: this is the 24/7, you-can-never-be-down tier, where you just have to deliver storage. This is expensive; these systems, especially SANs that are mission critical, generally cost hundreds of thousands of dollars, though there are ways to do it in a more modest manner. Did anybody visit the Xsan deployment session this week? Great, so you get an idea that we're trying to lower the cost of that. Then there's the business critical environment, where most of the data actually lives today, things like email and web, where it's more modest, but you can actually match the cost of the storage to it. Most people are still deploying mission-critical-cost storage here, storage that can cost upwards of forty to a hundred dollars a gigabyte, and we just don't understand why they're doing that today. Then there's nearline, that second tier of storage I talked about, where you actually do a disk-to-disk backup. I won't call it a tape elimination strategy, but it's a way to put off going to tape, or to not go to tape as often, and it's getting very inexpensive: I mean, an Xserve RAID at three dollars a gigabyte. By the time you look at the maintenance of tape drives and the initial cost, you find that you want to spend more of your time in the backup strategy with disk-to-disk than you do in the archive. And then of course there's rich media, which is a whole other world. That's our video clientele, where they need absolute performance and very minimal downtime, or, as I like to say, the show must go on; they can't have downtime, and that's something really hard to architect for.
different tiers you see that as they
move up and through foot it throughput
and availability they also generally
move up and cost and there have been
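Those per-gigabyte figures make the tiering argument easy to put in numbers. Here is a rough sketch: the 1 TB deployment size and the 20/50/30 split across tiers are hypothetical assumptions for illustration, while the per-gigabyte prices are the ones quoted in this session.

```python
# Blended cost of a tiered deployment vs. putting everything on
# mission-critical storage. The tier split is a made-up example;
# the per-GB prices come from the figures quoted in the session.

def blended_cost(total_gb, tiers):
    """tiers: list of (fraction_of_data, dollars_per_gb) pairs summing to 1."""
    assert abs(sum(frac for frac, _ in tiers) - 1.0) < 1e-9
    return sum(total_gb * frac * price for frac, price in tiers)

total_gb = 1000  # hypothetical 1 TB deployment
all_mission_critical = blended_cost(total_gb, [(1.0, 60.0)])  # mid-range of $40-$100/GB
tiered = blended_cost(total_gb, [(0.2, 60.0),   # mission critical
                                 (0.5, 6.0),    # business critical (clustered figure below)
                                 (0.3, 3.0)])   # nearline disk-to-disk
print(round(all_mission_critical, 2), round(tiered, 2))  # 60000.0 15900.0
```

Even with the mission-critical tier priced at the low-middle of the quoted range, moving half the data to business-critical storage and a third to nearline cuts the acquisition cost by roughly three quarters, before counting the annual maintenance the speaker mentions later.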
And there have been some disruptive technologies, and the first one is Xserve RAID, because for the most part, with Apple and with Apple's partners, both in hardware and software today (and we're always looking for more developers to come on and help us with this), you can deploy Xserve RAID in a large variety of these areas without really spending that kind of money, and that is really a smart storage strategy. If we look at that mission-critical storage environment, I picked an interesting one here, because I said I'm not going to pick one on Mac, I'm going to pick one on Windows. So this is a Windows 2003 Advanced Server with a Microsoft clustering environment. In this case it costs about six dollars a gigabyte to deliver fully redundant mission-critical storage, and that's something that today you'd have to spend an incredible amount of money to deliver with other systems; this is really made possible by Xserve RAID. In this case you've got a lot of storage attached to two servers, and it's really simple and easy to deploy. Now, here's your typical three-tier storage infrastructure, the way it usually looks in reality: it's more than one storage device, it's more than one server, and it's heterogeneous. So you do have a storage pool that's mission critical, a business critical pool, and a nearline pool. That's really the way it looks, and today with Xsan you can build this,
or at least once Xsan is released, I should say. So it all comes down to which storage approach is best, and it really depends, and what it depends on is who's managing it. So here are some facts, and you're all going to feel really good or really bad about this, because you live in this world. This came from the Yankee Group, a recent survey: forty-eight percent of Global 2000 companies have a separate group in IT who manage storage. That's huge. How many people here are dedicated to just managing storage? Very few, but that's going to change in the future. The other thing that's interesting is that fifty percent of the managers consider heterogeneous storage to be a strategic goal for them. We talked about interoperability; this is important. If you don't have an all-Apple infrastructure, or an all-Windows or all-Linux or all-Solaris infrastructure, you really need to be heterogeneous. And I think the most important thing is that fifty-two percent of the people surveyed (and I think they surveyed about a thousand IT managers) view reduced maintenance costs as proof of a return on investment. So when you look at deploying tiered storage: in that high tier of storage, the forty-to-a-hundred-dollars-a-gigabyte tier, you pay that same amount in maintenance every year, and that's really where the cost is. So when you realize, well, I can tier this and I can reduce my costs, both in the initial cost of the system and in the maintenance costs, that's really what it's all about, and can I deliver those
same services? We think so. So if you're doing storage planning, there are a few things to look at: what do I already have, what do I really need, and how much is it going to cost? Let's take a look at some of those. The first one is existing infrastructure, and there are two things that people don't look at here. The first is how old the existing infrastructure is. I hear this term all the time, and I guarantee everyone in this audience has used it in storage at least once: it's called legacy. They all say, "I have legacy storage that I need to connect to my new storage." Anybody ever say that? Yeah, we say it a lot, right? Legacy storage. Well, what does that mean? Does anybody here realize storage wears out? I mean, that's another thing that people don't realize. "In the year 2000 I paid two million dollars for this three-letter-acronym storage, and I have to amortize it over the next ten years because I paid a lot of money for it." Well, storage wears out. It's rotating media; it's not as bad as tires on a car, but it does wear out. So you have to plan on depreciating that storage over three to five years and getting it out of there, and when you do, are you going to buy that same monolithic storage you bought before, or are you going to look at it differently? The other thing to really look at is whether you've considered a tiered approach: lowering the overall cost of storage by putting that expensive storage in your mission critical areas and putting lower-cost storage in the business critical and the nearline. It's something to really consider. And then the other one is true capacity requirements, and since most of us don't really know what our requirements are going to be, we can take a guess, but who would have thought that from 1998 to 2003 storage needs would grow 110 percent per year? Not a lot of people would have guessed that, and they would have guessed low. So deploying something like a storage area network allows you to actually grow with that storage: you get scalability up, down, and out, every way you can look at it, so you can redeploy the storage and reprovision it. So you need to look at today and look at tomorrow. I think the other important
thing is throughput. So we talked about network-attached versus storage area networks versus direct-attached. In the direct-attached world, the performance is going to be bottlenecked by the limitations of the link between you and the server or the storage. In the network-attached model, the throughput, the storage performance, is going to be limited by the network. So you have to determine what your application needs: is it megabytes per second, is it I/Os per second, and how many clients do you actually have out there? You really have to look at that.
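One way to sanity-check that throughput question is to multiply per-client demand by client count and compare it against what a single link can carry. This is a back-of-the-envelope sketch: the client count, the per-client rate, and the 80% efficiency derating are all hypothetical assumptions, not figures from this session.

```python
# Rough SAN throughput sizing sketch -- illustrative only; the numbers
# below are hypothetical assumptions, not figures from the session.

def aggregate_demand_mb_s(clients, mb_per_sec_each):
    """Total streaming bandwidth if every client reads at once."""
    return clients * mb_per_sec_each

def fits_on_link(demand_mb_s, link_gbit=2.0, efficiency=0.8):
    """Does the demand fit on one Fibre Channel link?
    A 2 Gbit/s FC link moves roughly 200 MB/s of payload; we derate
    by an assumed 80% efficiency factor."""
    usable = link_gbit * 100 * efficiency  # ~100 MB/s of payload per Gbit of FC
    return demand_mb_s <= usable

demand = aggregate_demand_mb_s(clients=8, mb_per_sec_each=25)
print(demand, fits_on_link(demand))  # 200 False
```

Eight hypothetical clients at 25 MB/s each already exceed a derated 2-gig link, which is exactly the kind of arithmetic that decides whether direct-attached, NAS, or a multi-link SAN is the right fit.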
Availability requirements: do I really need it to be up 24 hours a day, 7 days a week, with no downtime, or can I tolerate a reasonable amount of downtime? It could be five minutes, but let's just assume it's four o'clock in the morning, something happens, and it's going to be two hours of downtime: is that reasonable? Is it business critical? Does it need to be archived? How often does it need to be archived? Can I use nearline? These are all questions you have to ask, and you have to answer them yourself, because there's really no one who can tell you what your business model looks like; they vary so much. In fact, you'll find that most of the people selling very high-end storage will dictate your business model to you, and that's not necessarily the right way to go. And the other one is disaster recovery. There have been a couple of things that have really driven that. One, of course, was 9/11, and none of us wanted that to happen, and none of us wanted to have to bear what happened afterward, which was rethinking our storage strategy: let's get this off site. And there are really two ways to do it. One is to deploy remote replication, which is very expensive; generally it doubles your cost, because not only do you usually have to replicate the storage, you have to replicate servers and infrastructure and everything. The other is an off-site backup service, and you can even carry it off site yourself: in small companies the CFO carries the tapes home, and in large companies there are companies like Iron Mountain that will come pick up your tapes, and they'll even load and unload them if you need. So it's really a cost-driven thing. And when you talk about that, you really do need to talk about compliance, because that's one of the other things. Anybody here fall under Sarbanes-Oxley, or know that they do? A lot of larger companies will, and the government is taking this very seriously; it's starting to move to Europe and farther throughout the world. These kinds of compliance rules say that you have to find every email from the last seven years within 24 hours and deliver it to the Justice Department. That's a pretty huge requirement. How many of you can go through the tapes you have today and find something from yesterday? It's usually a pretty hard thing to do, so it is good practice to be ready. And budget:
this is what drives it all, right? And how can we be smart about the budget? It's not necessarily the money you spend today on the infrastructure you need; it's the money you need to spend tomorrow. And you know that old saying: there's never enough time or money to do it right the first time, but there's always enough time or money to do it four or five times. Well, I can tell you that proper planning on this is really, really important. And I think probably the most important thing about budget is a trusted vendor. You need someone you can trust, who can give you the right advice and is looking out for you, and if they're just telling you today that this is absolutely what it's going to cost and there's no other way of doing it, I think you need to think different. So with that, what I want to do is bring up Ryan Klein, who is a SAN architect at QLogic, to talk to you about basically the one area we hear about all the time, and that is Fibre Channel best practices: if I'm going to deploy a network, basically a SAN, how do I do it, and how do I lower its cost? Ryan's going to tell us about some great, exciting stuff here.
Thank you, Alex. My name is Ryan Klein, with QLogic, and before we get started with the presentation, I just wanted to get a show of hands: of the folks that do go out and deploy Xserve RAID, how many of you are using infrastructure, HBAs, or switches from QLogic? That's a nice number of people out there, so you'll definitely be able to learn a little bit from this presentation about the SANbox 5200 switch, our management software, and some of the HBA and infrastructure software that we have. But to get started, for those of you that are not familiar: QLogic is an I/O company. So what's an I/O company? Essentially, we are the plumbing in the SAN infrastructure that Alex talked about. We connect from your servers all the way through the network to your storage, and we have a pretty broad product line that consists of specialized ASICs that fit right in the server itself, Fibre Channel and iSCSI, through the network. Alex talked a lot about SANs, and that's what I'm going to talk about moving forward. We build those SAN Fibre Channel switches that do all the protocol routing, and we also provide protocol chips that fit in products like the Xserve RAID. So, looking from a server through a network to storage, we provide that I/O path from one end to the other. So we
want to talk a little bit about QLogic, the Fibre Channel switch market, and the transitions that we've seen, all the way from the high-end data center and the types of switching infrastructure that you may have seen in the past to the transitions we're seeing in that market now. We'll talk about stackable switches; this is the SANbox 5200, which I hope a lot of you are familiar with, and which you'll become more familiar with as you move forward and start to deploy more SANs, and really understand the scalability you have when building networks: starting as small as eight Fibre Channel ports, growing up to very large networks such as 64 and 128 ports, and deploying a very scalable, cost-effective architecture. And then we're going to talk a little bit about SAN interoperability. Alex touched on this as being a very important part of deploying a storage area network, and it's really key to making sure that all the componentry you have works together, is supported, and that you're not going to run into any issues.
With that, we've talked a little bit about all of these components that QLogic makes, so how does this come to you? What are the strategies that allow you to make use of these products? Well, what we're going to start to see is that we integrate our products into the SANs that you go deploy. You'll see Fibre Channel HBAs inside of the servers, and we're taking our Fibre Channel and iSCSI ASICs and putting them on the motherboard. Most people today have deployed servers with IP integrated on the motherboard, and you're probably familiar with SCSI on the motherboard; the same thing is happening here with Fibre Channel. We also integrate switches into the componentry that we have, so if you look at a lot of the bladed environments out there today, they've taken technology such as the SANbox 5200, which today is deployed as a standalone box, and integrated it right into the back end of those products. Moving forward, storage boxes like the Xserve RAID and various others have the ability to integrate switching architectures right into the box; there are a lot of things like that coming out. We're simplifying and lowering costs, and what does that really mean to you? When Alex asked how many people here are storage administrators and have dedicated people deploying storage, I think I only saw one person raise their hand. What does that really mean? Well, essentially, everybody here has a lot of different responsibilities and functions within their IT organization, and as you're developing products, you're not necessarily a SAN expert. You have storage out there, and you know a storage area network makes sense for your deployment, but you don't necessarily want to have to know every single parameter and all the implementation details to configure these types of things. So what we're doing is building intelligent software that configures these environments automatically and provides ease of use, so that you don't have to worry about all those detailed implementations. The final thing we're really doing here is delivering turnkey SAN infrastructures. What this means is that it gives you the ability, from a single place, to buy all the componentry you need to deploy a storage area network. Today you need servers, you need the interconnects, and you need your storage; well, what are all the pieces and parts you need to go deploy a SAN? If you're not very familiar with all the componentry, it can be somewhat overwhelming. So the idea is to provide a turnkey solution that allows you to purchase the storage networking switch, all of the optics that are required, all of the cabling, as well as the host bus adapters that go inside the servers, and to be able to do that in a heterogeneous environment. These types of kits allow you to cross Windows, Linux, NetWare, and Solaris, as well as OS X, to deploy heterogeneous environments and manage them from a single location. Let me talk a
little about expanded management; this is what I just mentioned. We have something called our SANsurfer management suite, and it's a device management tool. It's Java-based and really complements the Xserve RAID GUI. Alex talked about LUN mapping and LUN masking, the ability to point specific LUNs at specific servers; this software really complements that. What you're looking at here is a picture of our brand new, just-released OS X GUI for the SANbox 5200 switch. What this allows you to do is configure the specific SAN ports, switches, and all the functionality in a heterogeneous environment, crossing all the platforms that we show at the top there. It works out really well, complements all of the Apple tools, and it's Java-based, as I mentioned.
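Since LUN mapping and masking keep coming up, here is a toy illustration of the idea, and only the idea: the array keeps a table of which host, identified by its Fibre Channel worldwide name, is allowed to see which LUN. The WWNs, the table, and the function are all made up for illustration; this is not Apple's or QLogic's actual interface.

```python
# A toy LUN-masking table -- conceptual only, not a real vendor API.
# Masking decides which host (by its Fibre Channel WWN) may see which LUN.

masking = {
    "21:00:00:e0:8b:00:00:01": {0, 1},   # hypothetical mail server: LUNs 0 and 1
    "21:00:00:e0:8b:00:00:02": {2},      # hypothetical web server: LUN 2 only
}

def host_can_see(wwn, lun):
    """True if the array should expose this LUN to this host."""
    return lun in masking.get(wwn, set())

print(host_can_see("21:00:00:e0:8b:00:00:01", 1))  # True
print(host_can_see("21:00:00:e0:8b:00:00:02", 0))  # False
```

The point of tools like the GUIs being described is that an administrator edits a table like this through wizards and screenshots rather than hand-configuring each host's visibility.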
Let's talk a little bit about the switch market, what you've probably seen in the past, and where we believe switching is going. So here are the basics, and this is a four-switch mesh: you see a bunch of Xserve servers at the top, Xserve RAIDs at the bottom, some tape backup, as well as some heterogeneous environments. If you wanted to deploy this environment, or something similar, a few years ago, these are some rough numbers for what it would cost, and the switching environment really stands out here: your four switches cost roughly twenty thousand dollars apiece, coming to about $80,000 total. That's a large part of the overall SAN, and it was cost prohibitive for a lot of people to put together storage area networks. So what did we see? We saw most SANs being deployed at large enterprises, and from the show of hands earlier this morning, most people here aren't deploying at large enterprises; they're more on the small and medium business side. So SANs were really cost prohibitive. As we move forward, one of the strategies we're working on with Apple is to bring storage area networks, and the functionality the Xserve RAID brings you, down to the small and medium business, and to be able to deliver the platforms at a sub-$15,000 level for the entire solution, while still scaling all the way up to the enterprise. So the SANbox2-64 switch is a chassis-based switch that lets you be at the very high end here, and the SANbox 5200, which we're going to talk about in a few minutes, really allows us to scale all the way from eight ports, through the small to medium business, all the way up to 64 ports. So
when we look at these various areas, reading left to right on this slide, what we see is that in the past, SANs were for large enterprises, and if you wanted to deploy something in that enterprise, you really only had one or two choices. You either had a large director-class switch, something like a McDATA or a Brocade box, or you had edge devices that were fixed-port, didn't give you a lot of choice, and were really limited in scalability and functionality, and it was very expensive as well. As we move forward, we see things like Xserve RAID being announced, really starting to enable the small and medium businesses, as well as people who want to scale to the higher end, and at the same time you're starting to see products from QLogic come out: chassis-based switches, stackable switches, and embedded switches. Stackable switches are the most disruptive technology that's ever happened to the SAN market. Everybody here is probably familiar with IP environments, where you had stackable IP switches: you needed an extra switch, you scaled, you dropped it on the stack, you plugged in the interconnect, and you grew. That didn't exist in the Fibre Channel market, and it made a lot of sense, so that's what we went out and brought to market. As soon as we did that, we started working with Apple, because we realized they had the same industry strategies: they wanted to bring a scalable SAN architecture to the market, and our product strategies fit really well together. Moving forward, what we're starting to see is SANs for the small to medium business as we drive costs down; the model for something like a SANbox 5200 allows you to start out as small as eight ports and grow. At the same time, the embedded switches start to come into play, taking that switching technology and putting it directly into the storage arrays, or directly into bladed environments for servers and things like that, really reducing cost and complexity. So take the last two slides and compare them to this one and the next one. In the past, we had the chassis-based, high-availability directors with large port counts, as well as the eight and 16 port fixed switches. Of the folks in the audience that raised their hands regarding having switch infrastructure, how many people here deploy chassis-based or director-class switches? I see one or two hands, almost nobody. So everybody else here, by show of hands, has 8 and 16 port fixed switches, okay? So you're really locked into a strategy where, if you want to scale that environment, you have to take another switch, connect it in via an inter-switch link, and start using up those valuable end-user ports to do it. So where do we see this going? The stackable switch market. The stackable switch market allows you to scale an environment: you can still continue to leverage the existing 8 and 16 port switches you have, but you connect them directly into the stackable switch and scale that way. And if you need the high-availability, high-port-count switches, you start using the chassis-based switches, and you see how they complement each other, giving you a choice to scale from a fixed-port environment, through stackable, all the way up to the high-port-count chassis switches, to really be able to pick the right switch for the right application.
So here's a bit of a view of the industry, the major players that are out there, and some of the products they have. Most of you are probably familiar with Cisco, a small little company, McDATA, and Brocade, as well as QLogic, and you see that everybody out there really is offering fixed-port switches, not giving you much choice or scalability. QLogic has come along and really been disruptive, offering the SANbox 5200 as well as the blade switch in the bottom right. One thing you'll notice about stackable switches is that they offer all of the functionality you would get from a fixed-port switch, and they also offer functionality that you'd see in a director class. So, things like non-disruptive code load: everybody here applies patches all the time. Well, as we move forward, we provide updated software and functionality for switches, and for an infrastructure part you want to make sure that you're up on the latest and greatest code, but you probably don't want to bring down your environment to do that. The SANbox 5200 allows you to upgrade firmware dynamically without affecting your storage area network. The other big thing here is management software. This is really important, because it allows you to manage the environment and doesn't cost you additional money, and all of the features such as monitoring and performance are included in these environments, where you may not get those in a fixed-port switch. So now we're
going to introduce the SANbox 5200 to you. By a show of hands, how many people here are familiar with the 5200 switch? That's great, that's a great number of people. So the 5200: it has 16 2-gig ports and four 10-gig ports. The 16 2-gig ports can really be viewed as a fixed-port architecture, while the four 10-gig ports, which got cut off here on the right-hand side (I'll show them to you on a later slide), are used to interconnect those switches together. It's a 1U box, and it's managed just like an IP switch would be managed: it has an Ethernet port as well as an RS-232 port. It has something we call configuration wizards; I mentioned this a little earlier. Configuration wizards let you configure and deploy a storage area network step by step without having to know all of the details required to do so. A 5200 can be deployed in about five minutes from a configuration standpoint, and then you're ready to start plugging in Xserve RAID boxes: they're automatically discovered and configured, and you don't have to worry about what's happening and why, because it does it all for you. I/O StreamGuard, for the folks in here that do full-stream video as well as backups, is a really cool feature that I'll talk about in a minute, but it's the type of feature and functionality that QLogic works with Apple on to ensure that we're serving all of the needs of folks like yourselves. It's a 1U chassis, as I mentioned, with integrated power supplies and things like that, and you can stack up to four units. So this is a scalable architecture that allows you to license the ports in four-port increments: start with an eight port box, and license in four-port increments all the way up to 64 ports.
This is the other thing that's a pretty interesting aspect of the SAN switching market. If you look at some of the competitive products, as you grow and scale a SAN, your per-port cost really goes up. If you look at a Brocade box or a McDATA box, you're paying a thousand dollars per port to start out with, and that doesn't include any of the management and monitoring software. But if you look at a QLogic environment with the 5200, you see that we scale from 8 to 64 ports at the same per-port price. That's pretty important: that way you can buy an infrastructure piece such as the switch, start out with a low port count, and grow all the way to 64 ports without worrying about paying incrementally more money per port as you grow. 10 gig, as well, is really a disruptive technology for the industry. We're the first switch to have 10-gig functionality, and we use it, on the right-hand side there, in those copper interconnect cables. On the left-hand side here is a picture of the large 64-port mesh that you would have to create if you wanted to build that environment out of fixed-port switches. If you wanted to scale that large, that's what the environment looks like: it requires 30 cables just to connect the infrastructure together, and by the way, those ports that you had to use for the mesh you can't use for your devices anymore, your tape drives and your disk drives and your servers. In a 64-port stack using QLogic, on the right-hand side, you get to use all 64 ports; you don't lose those valuable user ports when deploying a 5200 solution, not to mention the cabling and infrastructure mess you would have to manage with 30 cables, and the redundancy problems you might have. The other thing to mention here is the 10-gig speed: you're able to connect those switches together with 10 gig of bandwidth, whereas in a fixed-port environment, even if you start out with two switches and scale to a third and a fourth, you only have 2 gig of bandwidth between those switches. So that's one major aspect of the 5200 that really brings a lot of advantage. Speaking of ease of use:
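To see why a mesh of fixed-port switches burns so many user ports, you can count the inter-switch links directly. This is a hedged sketch: the six-switch, double-linked full mesh below is an assumption about how a topology like the one on the slide might be cabled, not the slide's exact layout.

```python
# Why meshing fixed-port switches eats user ports -- a hedged sketch.
# The switch size and link counts are illustrative assumptions, not the
# exact topology from the session's slide.

def full_mesh(switches, ports_per_switch, links_per_pair=1):
    """Inter-switch cables and remaining user ports for a full mesh."""
    pairs = switches * (switches - 1) // 2
    cables = pairs * links_per_pair
    user_ports = switches * ports_per_switch - 2 * cables  # each cable burns 2 ports
    return cables, user_ports

# Six 16-port switches, double-linked between each pair for bandwidth/redundancy:
cables, user_ports = full_mesh(6, 16, links_per_pair=2)
print(cables, user_ports)  # 30 36
```

Under those assumptions you get exactly the 30 cables mentioned above, and only 36 of the 96 physical ports are left for servers, tape, and disk; a stack interconnected over dedicated 10-gig ports keeps all of its user ports free.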
QLogic really looked at what Apple had done with the Xserve RAID, the GUIs, and the tools, and the ease of use that's available today. That's one of the biggest things you hear about Apple products, how easy they are to use. We took the lead there from them, and we were able to integrate that type of technology into the software we have for the SANbox 5200, as well as the management suite that we've built, really being able to provide a stackable architecture with all the value-added software that's easy to use. We really think that's important, and we really looked at the best practice in the industry, because it provides you with ease of use. Here's I/O StreamGuard; I mentioned this before. This is a really cool feature, and it's exclusive to QLogic switches. For those of you that aren't familiar with the process: when you bring a server up or down on a SAN, or reboot a server on a SAN, something called an RSCN, a registered state change notification, goes out, and what that is, is that server telling every other server on the SAN, "hey, I'm here" or "I'm gone." When that happens, it's only a split second, but if you have server A talking to disk A, and server B reboots and comes back up, and this RSCN goes around the SAN fabric, server A momentarily pauses. If you're in an OLTP environment running a database or something like that, it's not really a big deal. But if you're streaming video, or you're streaming a backup, and all of a sudden there's a pause, what do you think happens on the screen? Not a good thing if you're doing high-definition broadcast. So this feature we have, called I/O StreamGuard, allows that switch port to not receive that RSCN, so you can have continuous streaming video or streaming backup without having that port interrupted. This is exclusive to the 5200, and it really plays well with Xserve RAID, as well as with the customers that use this type of technology.
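The RSCN behavior just described can be modeled in a few lines. This is purely a conceptual illustration of the idea (suppress state-change delivery to ports that carry streams), not QLogic's implementation or API; every name here is made up.

```python
# Conceptual illustration of RSCN suppression -- not a vendor implementation.
# A fabric broadcasts a state-change notice to every port; ports with a
# StreamGuard-like flag simply never receive it, so their I/O never pauses.

class Port:
    def __init__(self, name, stream_guard=False):
        self.name = name
        self.stream_guard = stream_guard
        self.notices = []

def broadcast_rscn(ports, notice):
    """Deliver a registered state change notification to unguarded ports."""
    for p in ports:
        if not p.stream_guard:
            p.notices.append(notice)  # this delivery is what briefly pauses I/O

ports = [Port("video-server", stream_guard=True), Port("db-server")]
broadcast_rscn(ports, "server B rejoined the fabric")
print([len(p.notices) for p in ports])  # [0, 1]
```

The guarded video port sees nothing when another server bounces, while the database port still learns about fabric changes, matching the OLTP-versus-streaming trade-off described above.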
So, best practices in SAN interoperability. SAN interoperability is very important, and it's something you need to look at when you're deploying SAN solutions. Historically, interoperability really meant connecting product A with product B, finding it doesn't work, and playing the same guessing game, because you didn't really understand how it worked or what to do. Most of that has gone away, and a lot of it has to do with things like the SAN interoperability guide that QLogic put together. This is something that's available on our website; you can go to qlogic.com and download it. What it really is, is a guide that allows you to build a SAN. It covers close to 60 different partners in the industry, from QLogic to Apple to backup companies like Veritas, all the ISVs and IHVs out there, multiple storage vendors, multiple software vendors, and it tells you what works with what and how to put it together. It's really key that you use a document like this when you're building a SAN: everything from the infrastructure components all the way up to the application layer, so you know what's out there and what works with what. It's a great document. Switch interoperability is also something very important to QLogic, and something we work on continuously. Of the folks in the room, there are a number of people that have deployed the 5200; how many of you have deployed Brocade switches? That's great, there's only a few of you, I like that. The idea here, though, is that there are going to be a lot of people that have used other vendors' switches, and sometimes you're going to hear somebody say you have to stay in a homogeneous environment: if you want to add another switch or grow two more ports, you have to buy another Brocade or another McDATA switch. That's really not the case anymore. There are a lot of advantages to using something like the SANbox 5200 as you continue to scale your environment, but at the same time you don't want to forklift out the switches you have. So this type of document, the best practices switch interoperability document, gives you step-by-step procedures on how to configure new technologies like the SANbox 5200 alongside legacy technologies from companies like Brocade, the fixed-port architectures and things like that. So keep your existing technology, continue to scale your environment, and we give you step-by-step procedures on how to do that; it's something that really gives you an advantage when you're growing those
environments. The other thing we like to do is really educate our users and developers and provide documentation. All the documents I've been talking about fall under an umbrella called QLogic Press, which is really an educational arm of QLogic designed to provide really great white papers and documentation about components and deployment. As Alex described the various areas of NAS and DAS and SAN, QLogic of course is focused on SAN, and what we like to do is provide procedures, step-by-step instructions, and deployment scenarios for SAN infrastructures. This is a series of guides that we built; this one is an Xserve RAID focused document, and it really gives you an idea of common topologies and how to configure and deploy. There were a lot of topology screens that Alex showed you, and those are in these types of guides, plus they tell you how to configure it all. This can get very complex very quickly: you've got LUNs everywhere, devices connected, multipathing, heterogeneous environments. I've got a NetWare server, I have a Linux server, how do I connect that in, how do I do LUN mapping, how do I do LUN masking? We try to take all of that complexity out of it by putting together guides like this, with step-by-step screenshots that give you the ability to go deploy effectively.
So, in summary: QLogic is an I/O technology leader. We want to provide you with the infrastructure to move your data from your server to your storage via a SAN. We make Fibre Channel switches as well as Fibre Channel HBAs that cross heterogeneous environments; you can deploy them in numerous solution environments and manage them from a central location. In addition, we have new technologies like the SANbox 5200, which you're going to see just continuously become easier to use, with more functionality, and with prices coming down to enable more people to deploy SANs and take advantage of all the aspects of shared storage: the ability to pool your storage, to back up your storage more effectively, and to use a SAN to better your business. We touched on interoperability and how important it is to your environment, making sure that when you build SAN environments you take advantage of things like the SAN interoperability guide, which gives you all the visibility into what works with what and how to deploy, as well as things like the Xserve RAID configuration guide that we put together. And finally, you're going to see Apple and QLogic working closer together to continue to bring you solutions, documents, and things like that, and to continue to build environments that take the complexity out, allow you to scale SAN environments, and give you a real competitive advantage over your competitors, as well as building solutions that let you scale from small environments all the way up to large 64 and 128 port count environments. Thank you.
That was great, Ryan. I brought one of those SAN configuration guides just to give you an idea how thick and complete this is, and I can tell you that with this guide just about anybody is able to build a SAN with the Xserve RAID. Now obviously, if you're doing something like deploying an Xserve RAID and an Xserve, the hard part really is: how do I set that switch, what do I do if I've got video, do I need to turn something on, turn something off? And it's all in here, and these are available online from QLogic, so thank you, Ryan, for doing that. What I'm going to do now is actually turn it over to Steve from Candera, who's going to talk to us about an interesting case study that we put together with Candera, where the customer is actually Chiat/Day, a large ad agency. It just happens to be Apple's ad agency; that actually had nothing to do with it. They were really facing something that everyone out there faces: a mission-critical storage environment, and an issue they'd had for a long time in a heterogeneous environment. Candera, Apple, and QLogic all got involved, and we were able to really do something incredible. So, Steve, do you want to take the stage here, please? Thank you.
So, one of the interesting things when you start to look at what Apple, QLogic, and Candera are talking about is addressing the needs of mission-critical storage, but not at the prices that the big players, the big vendors, are pushing on the Fortune 200 and Fortune 500 companies. What they tend to miss is the fact that there's a large number of companies that need mission-critical storage, that need the availability, the performance, the scalability that you could find in a monolithic storage device, but certainly not at the forty dollars per gigabyte plus that you typically see when you look at a monolithic device. So today I'm going to talk about Chiat/Day and how they built their mission-critical infrastructure on a combination of QLogic, Candera, and Apple products; then talk a little bit about this new architecture, the ability to build this intelligent ATA; and then close it up with what that means, how you can apply this to your infrastructure, whether you're an enterprise or a small-to-medium business. So, who is Chiat/Day?
Chiat/Day is one of the largest advertising firms in the world and in the U.S. They have very prestigious clients, including Apple, Adidas, and others, and the primary application for them is digital content, so graphics and photos, those types of things. For instance, you might think that's not a lot: this presentation is about a 90-minute presentation, probably 50 or 60 slides. How many people, show of hands, think this presentation is under 10 megabytes? How many think it's under 20 megabytes? Now 50? Over a hundred? It's actually over a hundred. Now, we scaled it down to about 40 by rationalizing it, but while we were developing it, it was over 100 megabytes. Now imagine 500 professionals working daily with multiple copies of presentations of this level of quality, trying to do digital content, and you can see how quickly it can grow. And this is their business: they're responding to very tight deadlines from clients that are paying a lot of money for advertising and don't want to see things missed, and they need access all the time, because creative people like that work very long, very rigorous hours. And so Chiat/Day's facility in LA needed seven terabytes of storage to handle 500 clients; their New York office needed 21 terabytes.
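As a rough sanity check on those figures, here's a back-of-envelope sizing sketch. The roughly 100 MB file size and 500-person headcount come from the talk; the files-per-user and working-copy counts are assumed for illustration:

```python
# Back-of-envelope storage sizing for the figures quoted above.
# users and avg_file_mb come from the talk; files_per_user and the
# copies multiplier are assumed here.

def required_tb(users, files_per_user, avg_file_mb, copies=3):
    """Rough capacity estimate in terabytes (decimal: 1 TB = 1,000,000 MB)."""
    total_mb = users * files_per_user * avg_file_mb * copies
    return total_mb / 1_000_000

# 500 creatives, ~50 active 100 MB documents each, ~3 working copies apiece:
estimate = required_tb(users=500, files_per_user=50, avg_file_mb=100)
```

With those assumed multipliers the estimate comes out around 7.5 TB, in the same ballpark as the 7 TB LA deployment.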
So what did they have originally? What was the before? The before was a heterogeneous server environment made up of Novell, Apple, and HP servers, all with direct attached storage, so every server had its own storage. Different departments had different projects on different servers, and sure enough, the server that had available storage belonged to a department that didn't need the storage, while the department that really needed the storage had a server with no storage on it. So you ended up with an environment that was very difficult to manage, with very poor utilization.
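The stranded-capacity problem he's describing is easy to see with a couple of numbers: with DAS, a request can only be satisfied if one server happens to have enough free space, while a SAN makes all the free space one fungible pool. The per-server free-space figures below are hypothetical:

```python
# Why DAS strands capacity: a department's request must fit on a single
# server's free space, while a SAN pools all free space together.
# The free-space numbers are hypothetical.

def can_satisfy_das(servers_free_gb, request_gb):
    """DAS: some one server must have enough free space on its own."""
    return any(free >= request_gb for free in servers_free_gb)

def can_satisfy_san(servers_free_gb, request_gb):
    """SAN: the pooled free space just has to add up."""
    return sum(servers_free_gb) >= request_gb

free = [120, 80, 150]  # GB free on three DAS servers (illustrative)
# A 300 GB project fits nowhere individually, but easily fits the pool.
```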
The other aspect to Chiat/Day: this is not a Goldman Sachs shop with a big IT department. It was a very limited IT staff and a limited budget, so they needed to focus on how to build an infrastructure they could manage most cost-effectively. The other thing is storage planning: how do I plan for the growth, how do I plan for the scaling, as they brought more and more services and more clients on? It's an unpredictable demand flow: they land a big client, they need more storage very quickly; how do they respond to that? And the other thing is that with a DAS environment, backup is difficult, and in an environment where you need high availability, they need to be able to consistently back up and restore the data; in this DAS environment you have different backup regimes.
So what we did is we recommended a SAN. We took a pair of QLogic SANbox 5200s and what we call the Candera Apple ATA appliance, which is a number of Apple Xserve RAIDs aggregated with Candera's network storage controller, and what that allowed them to do is deploy a very easy SAN: the QLogic SAN provides the connectivity, and the Candera Apple ATA appliance provided storage that could be deployed in seconds and provisioned very quickly. The other aspect to this is the fibre-channel level of reliability. When Chiat/Day went out to look and asked, what can I buy, they looked at fibre channel storage, modular and monolithic; they believed that's what they needed. When Candera and Apple came in and said, no, you can do this using Serial ATA technology, they looked at us and said, you've got to be kidding. But we were able to provide the kind of active-active high availability, the kind of tracing and diagnostics, that you typically find in a monolithic storage device, with the Candera Apple solution. It also allows you to centralize the data assets: I can have the physical storage here, on the Apple Xserve RAIDs, and then, with the virtualization and the centralized management provided by Candera, create virtual LUNs that I can very flexibly provision to the various hosts, and those hosts were heterogeneous hosts, including HP, Apple, and Novell environments.
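The virtualization step can be sketched as pooling the physical arrays and then carving virtual LUNs of arbitrary size out of the pool, one per host. This is only a conceptual sketch, not Candera's actual interface; the array sizes, LUN names, and host names are made up:

```python
# Conceptual sketch of storage virtualization: aggregate physical arrays
# into one pool, then carve virtual LUNs and assign each to a host.
# Sizes and names below are illustrative.

class StoragePool:
    def __init__(self):
        self.capacity_gb = 0
        self.allocated_gb = 0
        self.luns = {}  # lun name -> (size_gb, host)

    def add_array(self, size_gb):
        """Fold a physical array's capacity into the shared pool."""
        self.capacity_gb += size_gb

    def carve_lun(self, name, size_gb, host):
        """Provision a virtual LUN to a host, if the pool has room."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        self.luns[name] = (size_gb, host)

pool = StoragePool()
for _ in range(2):          # e.g. two 3.5 TB Xserve RAID shelves
    pool.add_array(3500)
pool.carve_lun("design", 1200, host="xserve-la")
pool.carve_lun("video", 2000, host="hp-server")
```

The point of the indirection is that hosts see LUNs, not shelves, so capacity can be re-provisioned without touching the physical layout.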
The other key aspect to that is error detection and correction. When you look at a SAN environment, when there's a problem in the environment, maybe a flaky HBA or something that goes awry on the storage, it's very difficult to diagnose, and the more complex the SAN gets, the more difficult it becomes; but with an aggregating element in there, you can do very quick error detection and diagnosis. And the other benefit of this is that by moving from a DAS environment to a SAN environment, you delink the storage from the host; consequently it's very easy to replace hosts, to add new hosts, and to buy smaller servers.
So, were they happy? This is a quote from Chiat/Day, where they said, basically: we didn't think we could do our storage infrastructure on ATA, but Candera and Apple were able to deliver it at a fraction of the cost of monolithic or modular storage. They were so happy that after the evaluation with the LA system, they bought the 21-terabyte New York system at the same time; they had planned a phased implementation, but they bought everything up front. So what is this partnership between Candera and Apple, why are we together, what are we doing? The Candera Apple ATA appliance takes the guts of what you'd find in a monolithic storage device, provides fine-grained virtualization and centralized management, and uses that to aggregate Apple's superior ATA technology. When you look at a monolithic device, what you typically find when you pull back all that sheet metal is components that handle connectivity, virtualization, and management, and components that provide RAID processing and those types of things, and they're able to scale because as you need performance and capacity you add more controllers, and as you need connectivity you add more disk adapters and channel adapters; so you can scale the performance as you need. Modular devices can't do that, but the combination of Candera plus Apple allows you to do that: it allows you to scale both the performance and the capacity together. It's also enterprise-class: the high availability, the ability to do active-active failover, the fault tolerance, those types of things are very important when you look at these environments. We talked about legacy storage and how messy it can be in there; what we allow you to do is start by building this ATA appliance and then start to bring in your legacy storage and provide one centralized approach, so interoperability becomes very easy.
And so when you look at it, this is now a new approach to storage architecture. The monolithic approach was the first approach: big, manageable, scales, but very expensive. Modular starts small, but doesn't scale. Now you have this intelligent ATA approach that allows you to start small but aggregate all those devices into one big virtual disk. In fact, the combination of Candera and Apple allows you to build an architecture that matches that monolithic architecture. What does that mean? It means you can manage everything from one centralized approach and have a management GUI that will work, allow you to scale, and work like a monolithic device. So from the standpoint of tiered storage, when you look at how you apply ATA in your environment, most of the existing players say it's only good for secondary storage. We believe the existing players are wrong; they're just protecting their fibre channel business. Today, Candera and Apple working together will allow you to address that tier 1, tier 2 environment, the mission-critical environments, with a solution like the one we put together for Chiat/Day.
In the analysis I did, when I looked at the market, talked to people about how much of their storage is over-served by fibre channel, and broke it down by application, we at Candera really believe that seventy percent of your data center can be stored on ATA storage. If you're using fibre channel modular or fibre channel monolithic storage for more than thirty percent of your data center, you're really burning money.
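A quick model of the arithmetic behind that claim: the roughly $40/GB monolithic figure was quoted earlier in the session, while the ATA price per gigabyte here is an assumed period figure, not one from the talk:

```python
# Rough cost model behind the 70/30 argument. fc_per_gb (~$40/GB) is the
# monolithic figure quoted in the session; ata_per_gb is an assumed
# period price for ATA storage.

def blended_cost(total_gb, frac_on_fc, fc_per_gb=40.0, ata_per_gb=3.0):
    """Total spend when frac_on_fc of capacity sits on fibre channel
    storage and the rest on ATA."""
    fc_gb = total_gb * frac_on_fc
    return fc_gb * fc_per_gb + (total_gb - fc_gb) * ata_per_gb

# A 7 TB (7000 GB) site: everything monolithic vs. 30% FC / 70% ATA.
all_fc = blended_cost(7000, 1.0)
tiered = blended_cost(7000, 0.3)
```

Under these assumptions the tiered layout costs roughly a third of the all-fibre-channel one, which is the "burning money" point in concrete terms.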
So where do you use intelligent ATA? This gives you an idea of the various applications you could put it in, as well as the capacity requirements: anywhere you get over a terabyte or so of capacity, this kind of solution will work very well.
I have run out of time, I'm a little bit over, but I appreciate the chance to talk to you about Chiat/Day and how the combination of the three vendors that presented today built a mission-critical infrastructure out of very cost-effective components. Fantastic, thanks, Steve, I appreciate it. You can see that today there are a lot of choices in storage, and the Candera and Apple solution is an interesting one: it's one where you can actually take high-performance storage and combine it with the feature set that you get in very, very expensive monolithic storage, from those three-letter-acronym companies, and really build a system that is very cost-effective and scales a lot better. So what I want to do really quickly is wrap up. We're probably not going to have time for Q&A today, so what we're going to end up doing is talking afterward, but just to wrap it up real quickly, I think when you look at the summary of what we've talked about: the first thing to remember is budget. Budget is going to dictate your infrastructure, and we think we have a building-block storage product that allows you to work within it. The second thing is that complexity is always going to equal cost; the more complex it gets, the more it's going to cost. And of course scalability is something you need to address up front, and when you're talking about scaling, you have to really look at what your real-world usage of the systems is. You don't want to overlook the backup needs you have, because a lot of people say, well, I'm putting in RAID storage, I no longer need to back up, and nothing could be farther from the truth: when you put in RAID storage, you need to be more concerned about backup, because you get a false sense of security. And the last thing is: consider tiered storage, because if you look at tiered storage, look at these approaches, and don't spend that money on monolithic storage for everything, you can build an infrastructure that matches your needs going forward. And I can guarantee you one thing: we're going to continue to drive the ease of use of storage up and the price down. I appreciate everyone's time today, thank you very much.