different ways to skin a cat

So I think that I have overdosed on education and training classes. I’ve
now been to database administration, database new features, database
clustering, Data Guard, grid management of databases, and last but not
least, using a database to back-end a portal server. Some of the key
things I got out of the classes bring me back to things I learned in
college: no matter how exciting the subject sounds, a bad instructor can
really screw up a class. Conversely, no matter how mundane a topic is, a
really good instructor can make the class interesting.

It is interesting how many different failures you can plan for and how
many different solutions exist to keep things working properly. Tuning
an operating system was something that I did a bit of research on,
specifically the storage subsystems. Job scheduling has also been a
long-standing research topic for operating systems, starting with
multiprocessors and expanding to networked systems. It is refreshing to
see the same problems show up in database systems, along with different
ways of solving them. One slide that I saw multiple times covered how to
protect data and the data access that a database controls.

You can protect the data using RAID or mirrored disks, either on top of
hardware or operating system RAID or without it entirely (a new feature
in 10g, ASM, handles the mirroring itself). You can protect against data
loss from a system failure (this is what Data Guard does), either by
copying the physical blocks across as they change or logically by
executing the same updates in parallel on a second database. You can
also protect against data loss by running a distributed system that
partitions the database and takes over in the event of a node failure.
This is a more complex solution because it requires network storage so
that the data can be accessed from multiple nodes. It seems like more
and more IT departments are using network storage, but it isn’t quite to
the point where many small businesses or homes can afford such a nice
feature.
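As a rough sketch of the first two ideas (not the exact commands from the class): a normal-redundancy ASM disk group mirrors the data without any hardware or OS RAID underneath it, and a log archive destination parameter points redo shipping at a Data Guard standby. The device paths and the standby_db service name below are just placeholders.

    -- 10g ASM: mirror extents across two failure groups, no hardware/OS RAID needed
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP controller1 DISK '/dev/raw/raw1'
      FAILGROUP controller2 DISK '/dev/raw/raw2';

    -- Data Guard: ship redo to a standby database as it is generated
    -- ("standby_db" is a made-up service name)
    ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=standby_db LGWR ASYNC' SCOPE=BOTH;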

The one that shocked me the most was that you can even protect your
data against the weak link in your IT staff. If someone deletes
something or wipes something out, the database has the ability to roll
back the mistake and restore the transactions up to a point in time.
This is a lot more interesting than backup and restore.
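A minimal sketch of what that looks like with flashback, assuming a table called orders (a made-up name) and enough undo retention to cover the mistake:

    -- look at what the table held half an hour ago, before the bad delete
    SELECT * FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);

    -- row movement has to be enabled before a table can be flashed back
    ALTER TABLE orders ENABLE ROW MOVEMENT;

    -- rewind the table to that point in time
    FLASHBACK TABLE orders TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);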

The other thing that surprised me was that you can split a database and
run it on different machines. There are multiple ways to do this. If you
have multiple writers, you need to cluster your database so that record
locking can happen at the row level. If you have one writer and multiple
readers, you just need to replicate your data between boxes, either
through block update synchronization or SQL command synchronization.
These updates can be synchronous, asynchronous, or delayed by minutes to
hours. One example that was given was the way Apple runs iTunes. One
writer exists to create the music index/repository. They replicate the
database across multiple systems and let large numbers of clients come
in and search for music titles. The searches go against the replicas and
not against the primary master. If the primary master fails for some
strange reason, one of the replicas becomes the primary and continues to
feed the other replicas.
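As a rough sketch of the one-writer, many-readers case using materialized view replication (the songs table and the master_db database link are made-up names):

    -- on the master: log changes so the replicas can pull just the deltas
    CREATE MATERIALIZED VIEW LOG ON songs WITH PRIMARY KEY;

    -- on each replica: pull the changes over a database link, refreshing every hour
    CREATE MATERIALIZED VIEW songs
      REFRESH FAST NEXT SYSDATE + 1/24
      AS SELECT * FROM songs@master_db;

Clients search the local copy on whichever replica they hit, and only the single writer ever touches the master.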