disaster recovery looks so simple

>>> begin cynic mode <<<

Am I missing something? If I have a production system in one city and a development team or production system in another city, why not make them disaster recovery failover sites for each other? For example, I have a system in Houston that runs my production xyz service, and a company that I acquired runs a production jkf service in another city. Why not create a new instance on each system and use Data Guard between the two? A rough sketch of the idea follows.
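Here is a minimal sketch of the primary side of that setup, assuming 10g syntax, a physical standby, and made-up DB_UNIQUE_NAMEs (hou for the Houston primary, rem for the acquired site):

    -- On the Houston primary: name both databases in the Data Guard
    -- configuration, then point a second archive destination at the
    -- remote site so redo ships over the network as it is generated.
    ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(hou,rem)';
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=rem LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=rem';
    ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';

    -- On the standby at the acquired site: start applying the redo.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Run the mirror-image configuration on the other system and the two sites become standbys for each other.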

Given that the 10g database bundles Data Guard with either a physical or a logical standby, the only cost in this configuration would be adding memory and disk to the two systems and making sure there is a good network connection between them. Is the problem that no one is running 10g? No, not really; I have seen plenty of companies running on 10g. Is the problem that no one has enough memory or CPU on these systems? Ok, I will grant that memory might be a problem. Most processors are only running at 50-60% utilization, so there is enough CPU power. Is there a problem with the network? Since most IT shops don't want to talk about what their border router traffic looks like, the most likely problem appears to be network bandwidth.
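You can at least put a number on the bandwidth question before buying anything. One rough way, assuming the database has been archiving for at least a week, is to average the redo generation rate out of v$archived_log; that rate approximates what Data Guard would have to ship to the other site:

    -- Average redo generation over the last 7 days, in KB/sec. This
    -- approximates the sustained WAN bandwidth needed to ship redo;
    -- size the link for peak periods, not just this average.
    SELECT ROUND(SUM(blocks * block_size)
                 / (7 * 24 * 60 * 60) / 1024, 1) AS avg_redo_kb_per_sec
    FROM   v$archived_log
    WHERE  completion_time > SYSDATE - 7;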

Given that this is a bundled option with 10g, I don't see why more people are not using Data Guard as a disaster recovery and business continuity option. With a logical standby, you don't even have to copy all of the tables, just some of them. It seems like it would be easy to figure out which ones are critical enough that downtime would cost the corporation money, and make sure those are replicated and recoverable quickly; skipping the rest looks something like the sketch below.
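On a logical standby the filtering is done with skip rules. A minimal sketch, run from SQL*Plus on the standby and assuming a hypothetical non-critical schema named SCRATCH:

    -- Pause SQL Apply, tell it to skip DML for every table in the
    -- non-critical SCRATCH schema, then resume applying everything else.
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'SCRATCH', object_name => '%');
    ALTER DATABASE START LOGICAL STANDBY APPLY;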

>>> cynicism off <<<