more of a good thing (a continued first look at OracleVM)

So it turns out that the error I got was not a Release 4 Update 4 issue but an issue of stupidity on my part. I did not want to use oracle as my password for all of the features, so when it came time to enter the oc4jadmin password I entered my new password, which caused the install to fail. I looked at everything again and noticed the statement “The default password for oc4jadmin is oracle”. Once I entered the right password, everything installed and worked well.

18) At this point I now have an R4U5 installation with the VM Manager running. From this I log in to the OVS web page and look at the VM Manager interface. This interface is interesting (once I logged in – the default password oracle bit me again) and I realize that I now need to read the VM Manager Users Guide to figure out what server pools are, how to assign resources, and how to correlate servers to this management interface.

19) after reading the stinkin manual I decided that I needed to import an ISO so that I could create a new image in dom1 and play with two systems running on the same machine. This turned out to be a little more difficult than expected. I put the iso files in the /OVS/iso_pool directory and tried to do an internal import and ….. nothing happened. I did not see any images and could not figure out how to import them. Naturally, I went to google and did a search for “oracle vm import iso”. This led me to a blog by Freek D’Hooge. He ran into the same problem and found a workaround. It seems that you need to put the iso files into a sub-directory and then log out of and back into the VM Manager console. This seemed a bit odd but surprisingly it worked. I took the four iso files that I had and created an R4U5 directory. I then put disks 1-4 in that directory and magically they appeared in the iso import. Needless to say, I was very excited. I now have a collection of files in /OVS/iso_pool/R4U5 that successfully imported and allowed me to try to create a new virtual image.
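The workaround can be scripted. This is only a sketch using a stand-in directory – on a real VM Server the pool root is /OVS/iso_pool, and the R4U5 subdirectory name is just the one I picked:

```shell
#!/bin/sh
# Sketch of the ISO import workaround: the VM Manager only discovers
# ISOs placed in a subdirectory of the iso_pool, not in the pool root.
# A temporary directory stands in for /OVS so the steps are safe to try.
ISO_POOL=$(mktemp -d)/iso_pool
GROUP=R4U5                        # one subdirectory per distro/release

mkdir -p "$ISO_POOL"
touch "$ISO_POOL/Enterprise-R4-U5-disc1.iso" \
      "$ISO_POOL/Enterprise-R4-U5-disc2.iso"   # stand-ins for the real ISOs

# Move the ISOs into the subdirectory the manager will actually scan
mkdir -p "$ISO_POOL/$GROUP"
mv "$ISO_POOL"/*.iso "$ISO_POOL/$GROUP"/

# After this, log out of and back into the VM Manager console and the
# files show up in the internal ISO import screen.
ls "$ISO_POOL/$GROUP"
```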

20) Feeling confident that I was ready to create my first VM, I went to the create screen; it appeared to be easy but threw an error. I now needed to figure out where errors were reported. I followed the path of creating a virtual machine from installation media, selected a server pool, selected fully virtualized, and selected an ISO file. Things looked good. The creation happened, and when I refreshed the screen it showed an error but no link to the error. I had to expand the “+” on the left of the screen only to find out that the processor in my Dell D620 does not support virtualization.

21) At this point I realized that I really need to find out what virtualization and paravirtualization are. The concept of paravirtualization scared me. I do not like the idea of having to run an application on a modified kernel. This brings up support questions and “what is really causing the problem” doubts that I wanted to avoid. I wanted hardware virtualization so I went to the Intel site on virtualization. It seems that there is a relatively new Intel product line that supports virtualization. The technology is a set of chips, bios updates, and software to make it run. The Intel processors that support this technology are the Intel Core 2 Duo, the Xeon 3000, 5000, and 7000, and the Itanium processors. With this information I then found that AMD also has similar technology available in the Opteron processor. Some laptops that support this chip architecture are the MacBook Pro, the Dell D430, D630, and D830 laptops, and the Toshiba Satellite and Tecra models. Basically it looks like any laptop built in 2007 or after contains a Core 2 Duo processor that supports virtualization. I just need to figure out a way to upgrade my 18 month old laptop to support this new technology.

22) At this point I realize that I can’t push hardware virtualized machines to the hardware that I have and start reading the VM Server Users Guide to figure out how to create a paravirtualized image. After reading for a while, I realize that the documentation is not the best in the world and I need to start looking for some tutorials and other people who have done this. It seems like I should be able to clone dom0 and make it a new image that I can play with. I should also be able to download an image from someone and run it easily. I guess I will get some real work done today then start looking more. If I were looking at purchasing this technology I would have come to the conclusion that I have spent roughly 20 hours playing with this only to find out that the hardware that I have does not fully support the software. I would give up and put this project on the shelf for a few months. It is a very interesting technology but I personally am not in a position to take advantage of it. Fortunately, I work for a big company and am given the luxury of playing with new technology so I will continue to explore.

next up…. creating a paravirtualized image, looking for other images, trying to run without a VM Master and using the command line only interfaces, and trying to dual boot with Windows on one image and VM Server on another (I hate doing this but I can’t get iTunes to work in Linux and have to download podcasts on a regular basis).

too much of a good thing (a first look at OracleVM)

When you try to pick up new technology, what is the process that you follow? Do you read the documentation first, do you read the tutorials first, do you look at the wiki, do you read a discussion forum, or do you join an online community? There is also the tried and true method of buying a book or going to the local users group.

It seems to me that this is an overload of information and I need to rethink how I learn new technologies. I recently decided to learn the OracleVM technology because I am familiar with virtual machines, domains, and high availability architecture. I am also somewhat familiar with Linux so the learning curve should not be that great. Let’s look at my learning process and figure out if it is the right way of doing things or whether there are better ways.

1) I heard about the product at a keynote address. I naturally went to the otn web site and tried to find the product information page. Unfortunately, all I can find is how to download the product. Interesting but not what I am looking for. Since I don’t want to dig around too much, I follow the download link to see if it will take me to the product page.

2) The technology/OracleVM page does give me a little information but not much. It does have a link to the product home page so I jump there. Before I go, I notice that from this page I can read the documentation, download the code, participate in a wiki, be part of a discussion forum, and watch a podcast on the product with one of the product managers.

3) Since I want to find out what this product is before I invest too much time, I go to the OracleVM product home page. At this page I can read a short paragraph on the product, look at the press release, read partner reviews, read a FAQ, and look at the product datasheet. I also notice that the product is better, faster, free, and available for download. This does not help me much but it does make me feel better about the time I have invested.

4) looking at the OracleVM datasheet, I at least get an idea of what is required to make the product run. I note that it supports Linux and Windows as guest operating systems but there is something called hardware virtualization that I need to learn about as well. I also learn that I need to have two computers to make this product work. These computers must also have static IP addresses, which scares me a little. I find out the processor, memory, and disk requirements for the VM Manager, which also must run on Linux. I also find out that the majority of Oracle products are supported in a VM with the exception of RAC. There is also a link to a metalink note (464754.1) that has more recent information on product support. After reading the document again I note that the VM Server runs on bare metal and does not need an operating system to run on. This intrigues me so I make a note to learn more about this as well.

After reading this document I realize that the two laptops that I have will need to be reconfigured. The one that I primarily use is the more powerful and should be used for the VM Server. The older one needs to be the VM Master. The first thing that I need to do is make sure that I have the Linux OS installed on the older laptop and put a new disk in my newer laptop.

5) At this point I realize that I need to know more so I read the press release to look for links to more information. Surprisingly, from the press release I learn the price of the support and that this product is based on the Xen Hypervisor.

6) At this point I realize that there is a list of things that I need to learn about so I look at the FAQ to see if there is more that I need to research. From the FAQ I learn about pricing, licensing, that Oracle On Demand is using the product, and a repeat of the details in the press release.

7) To learn a little more, I read the whitepaper to figure out if I should invest more time or not. The net that I learn from this is that OD had upgraded their dual processor single core 2U systems with dual processor dual core 1U systems. This doubles the compute capacity of the existing hardware while reducing the rack footprint, thus creating expansion room in the data center. Interesting information, but how is this relevant to VM? In reading further, I learn that one option is to run multiple web/app servers on the same hardware box or put them in different VMs isolated from each other. Interesting concept. I have run into multiple virtual apache web servers where upgrading one while not upgrading the others made the project unmanageable. All of the web servers had to be upgraded at the same time, which was a political nightmare. The OD solution also gives a new twist to HA because I can move a VM to a new box that is ready to accept the image. The requirement for the new box is to have the VM Server installed, and the product can be moved to a higher capacity system or provisioned quickly in the event of failure. Interesting features, I guess I need to learn more.

8) At this point I go to the OracleVM wiki server to learn more. From this page I get links to a podcast, documentation, a discussion forum, code to download, and an installation video. I also notice that there are two sub-pages on hard partitioning and the management API. There is also another sub page on installation.

9) Since I am now interested in moving forward, let’s see what is required to install the product. This is done on the wiki sub page which takes us to a youtube video on installation. In the relevant content at the bottom of the page I notice that there are discussions on vmware, ovs-agent, and paravirtualized drivers. I need to learn more on how this compares to vmware, what ovs-agent is, and what a paravirtualized driver is.

10) the video is 8 minutes in length. This leads me to believe that the installation of the server should only take this long. Promising. From the video I learn that it is built on the Xen 3.1.1 Hypervisor. The installer provides defaults, including disk partitioning. It looks like I could create a multi-boot installation because it uses a grub loader. It would be interesting to see if I could install Windows and VM Server on the same system and dual boot. When you install you do need to have a static IP address, gateway, and primary DNS as well as a hostname. The one used in the video was a fully qualified domain name. You will need a password for the VM Agent, which is used by the VM Manager to communicate with the agent. There is also a root password for the VM Server domain 0, which is the first guest OS running on top of the Hypervisor. This domain is used to manage the other guest operating systems. The video does go through a full installation. It contains a compact Linux installation that fits on a single CD. It installs a boot loader, kernel, and rpms on the disk. Once the install is completed, we reboot the computer and it comes up to the domain 0 instance. The boot process in the video took about a minute. The installation took about 6 minutes. The boot looks like a typical Linux boot with the appropriate messages and notifications on startup. Once the boot is complete, you need to agree to the license on first boot. When this is done you get dropped into a login prompt for the dom0 instance.

11) at this point I download the code for the VM Server and the VM Master. To do this I need to supply my name, company name, email address, and country. I also need to agree to export and legal terms. Once I do this I get to download a 304M VM Server 2.1 binary, a 534M VM Manager 2.1 binary, or a 465M VM Server 2.1 source distribution for 32 or 64 bit. Even though the download page has two options, the binaries downloaded for the two different packages are the same. I also have the choice of downloading Linux Release 5 and Linux Release 4 Update 5. I will defer downloading these till later because they are 4G in size and will take a while on my cable modem.

12) while the system is downloading I read the documentation. The Quick Start Guide is usually a good place to start. There is also an installation guide, release notes, and a users guide. The Quick Start Guide is 6 pages in length and does not supply much more information than I have learned before. It gives a good explanation of the VM Agent, which I haven’t seen before now. It gives a good picture of the product mix which also helps me understand how the product works. I watched the video on installing the Server and it looks easy. The VM Manager looks like the typical runInstaller program used for Oracle software. It looks like I will need to make sure that port 8888 is open in the firewall. I will also need to learn more about a Server Pool Master and Utility Server. It looks like the first step in getting the system running is to go to the Manager page and create a server pool master. I can also load templates and ISO files for VMs. I need to learn more about templates and how to create/use them.

13) The Quick Start Guide was interesting. Let’s look at the Manager Installation Guide to learn more. This gives yet another interesting diagram that talks about different server pool managers and server pools. The document is 12 pages long so it isn’t a difficult read. It does give a good definition of a pool master, utility server, and virtual machine server. It also talks about storage resources, which I will need to learn more about. The document also talks about tightVNC for remote console into the VMs as well as libaio and the use of ports 8888 and 8889, both TCP. Other ports that are needed are 8080 and 1521. The VM Manager has an http server, 10g Express Edition, and an OC4J server, and it needs to connect to an SMTP server for problem communications. The code on the disk also contains Chart Builder and XML-RPC 3.0. I’m not sure what these are used for but will need to learn more. The install process looks simple: you can install or uninstall the product. You will need to provide two port numbers and a password for the database. You also need to provide a hostname for the SMTP server as well as an email address for admin notifications. The VM Manager is accessed by web page at port 8888 and extension /OVS. There is an App Express management interface available at port 8080 and extension /apex. Any errors are logged into /var/log/ovm-manager. There are three log files: ovm-manager.log, xe.log, and oc4j.log. The VM Manager is started with the service oc4j start command and is configured to start automatically by an /etc/init.d script.
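Since errors end up in those log files, a small check script helps. This is a sketch – /var/log/ovm-manager and the three log file names come from the install guide, everything else here is illustrative:

```shell
#!/bin/sh
# Sketch: scan the VM Manager logs for errors. The default log
# directory is /var/log/ovm-manager; pass another path to test it.
check_ovm_logs() {
    log_dir=${1:-/var/log/ovm-manager}
    for f in ovm-manager.log xe.log oc4j.log; do
        if [ -f "$log_dir/$f" ]; then
            echo "== $f =="
            grep -Eci 'error|fail' "$log_dir/$f"   # count of error/fail lines
        fi
    done
}
check_ovm_logs "$@"

# If the console at port 8888 (/OVS) stops answering, restart the
# manager with:  service oc4j restart
```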

14) at this point the download is complete and I need to unzip and burn these images to CD. I do this on my Windows desktop and boot the VM Master system with a Linux image.

15) while the CDs are burning, I read the VM Server Install Guide. This document is 24 pages in length and basically tells me the same thing that the 8 minute video showed me. It does appear that you can configure the box to boot with DHCP, but the DHCP assignment must be static. The document goes through different installation methods (hard drive, CD, NFS, FTP, and HTTP) so the document is really about 15 useful pages that are duplicated with minor differences.

16) Ok, at this point I realize that I don’t know how to create a template, what a paravirtualized driver is, how it compares to vmware, and if anyone else has dual booted a VM Server and Windows/Linux for a dual boot installation. I typically don’t read the release notes but I will in this case just for grins because it usually bites you if you don’t. From the VM Server Release Note I learn that there might be a clock drift issue which is also an issue for vmware, there might be an IP address issue, there might be an issue with network latency, and paravirtualized guests might be needed for Enterprise R4U4. I also learn that SELinux is not supported in this release which is interesting. I wonder why and if other flavors of Linux are not supported. There is some discussion about disk partition emulation and disk parameters that need to be set. I need to learn more about this before I really screw up something by playing around too much. From the VM Manager Release Note, there isn’t much new to learn. I will probably need to download the ovm-console and install it on the VM Master to use virtual consoles. I will also need to download the tightvnc package as well.

17) At this point I am feeling brave and the CDs are ready. I first start with the VM Master. The installation is trivial as expected. I then install the VM Server. This is a little scary because I had to wipe out a disk and install it in my laptop. This is different from VMWare Workstation because Workstation runs on top of Windows/Linux and can easily be discarded or refreshed. The VM Server is a little different. I am running on the hypervisor and the dom0 Linux installation does not appear to be very functional. It does not contain an X11 installation so the normal admin tools that I am used to don’t exist. I need to relearn the command line versions of all of these. I guess I got sloppy in my admin skills and started relying upon the easy GUI interface. I knew there was a reason that I still used vi and didn’t go to emacs or a more advanced editor. I could not get my wireless interface working on the VM Server so I had to break out a 4 port network hub and network my two laptops together. It wasn’t a big issue. I just had to reconfigure my network interfaces and make sure that I didn’t become a router for the world through my two laptops. I will also need to figure out how virtual networks are configured and how they work since I will have to share a network interface between guest operating systems. I guess that I will need to read the Users Guide at some point. The first task is to play with the master and see if I can create a Linux guest and a Windows guest and push them to the pool server/vm server on my faster laptop.

As I was installing the VM Master I kept getting an error when creating the connection pool. The installer would come up with -Failed at “Could not get Deployment Manager”-. I decided that it was the computer that I was using and switched to another computer. Unfortunately, they both got the same error. It turns out I was using Enterprise Release 4 Update 4, which is not supported. I am now downloading Update 5 as well as Enterprise Release 5 to see how well they both fare…. I had to re-read the release notes to see this requirement. You need Update 5 to get it to work. It states this but does not state why. I guess I know why now.

At this point I know that I have two systems that can work as VM Servers. Both appear to work very well. One has a funky wifi card that even R4U4 did not like. The other has a slower processor but both networks were found even by the VM Server/Hypervisor. I guess now I need to start reading the VM Server Users Guide and see how to get into the hypervisor manager and see if I can create a new domain and launch it. I also need to figure out what the paravirtualized thing is and see if I need it on my machine. While I am waiting for R4U5 to download and install I can play with the hypervisor manager and read the documentation.

18) more later….. I actually need to get some work done today….

Oracle VM


The idea started with cheap commodity hardware and enterprise manager. The idea was that we could deploy the database across cheap hardware in a RAC environment. If you take low cost components you can build a large database on multiple computers. You then use enterprise manager to measure and monitor these systems.

The natural question is: what is the next step? We can move parts and pieces of a database across cheap commodity hardware. We can deploy to multiple operating systems. Taking this to the next layer, we would like to have a base hardware component that you can blast an image to and treat the hardware like a RAC service, but generalized.

We considered using VMWare. Unfortunately, we found that 30-40% of the box was consumed by the virtualization layer. This does not make sense. The second factor was that VMWare cost a lot of money. The final thing was that we wanted complete support. If we had to link into another product, we could not support this. The philosophy is that the Linux team combined with the database and middleware team can come up with a solution that is better. The measured penalty for Oracle VM is 10%. This is the typical price to pay for virtualization.

The biggest benefit for the Oracle products is that the software is free and support is substantially cheaper than purchasing VMWare and paying for their support.

Question: what is Oracle’s support policy for VMWare
Answer: we do not test our products on VMWare. If a customer calls with an issue and the issue exists on native hardware, we will support the issue. If it is not a known issue, we will ask you to call VMWare first then us.

Question: what is the 30% overhead going to
Answer: I/O and context switching are the leading issues. The Oracle VM is a little more scalable based on our tests. Most of this is being driven by requirements from OnDemand. The Austin data center is currently constrained based on power. The solution is to expand resources through the virtualization layer rather than paying the estimated $70M to expand the data center.

The operating systems that we currently certify are RHEL4, RHEL5, and Windows. Windows does not have paravirtualized drivers. It does work on Oracle VM but it is currently slow. I would not use it in production. I might use a limited functionality that is not compute bound on Windows and put the rest of it on Linux.

We will be publishing the benchmark numbers but we did not have enough time before the announcement.

Oracle VM is managed with a standalone console. In 2008 it will move into enterprise manager and the standalone console will still be shipped. The way to manage is through the management console.

Question: there are problems with RAC and virtualization. What is being done with this?
Answer: The database is certified on VM. RAC is currently not certified on VM. We are working on this but it currently is not supported. We will solve this. It is high on our list and should be out in 2008.

Wim Coekaerts – new manager of Oracle VM

The code is based on the Xen technology. We compared our deployment to VMWare ESX.

The layout of the solution is a Hypervisor running on the host hardware. The Oracle VM Agent runs on top of the hypervisor. The guest operating systems run as domains on the agent. There is a VM Manager that manages the virtual images and can ship them to the agent when requested. The Manager is an OC4J container and an Express Edition of the database. The box needed for the manager does not need to be very big. The sample that was used during Charles Phillips’ presentation was a single CPU machine with 1G of memory. OC4J is small, and Express Edition is small as well. The install asks for two passwords and some ports for communication.

This is different from what RedHat and SuSE are doing with Xen. Instead of embedding this in the operating system, we are putting it on a management server with an agent to communicate with the host. The machines are registered in the repository and can be placed into a group. It is important to make sure that the group has the same architecture so that virtual machines can easily be migrated from one machine to another. The software comes with a Fedora style license so there are no redistribution rights to worry about.

When the software is deployed there is a master server, a utility server, and a vm server. There is one master server that everyone attaches to. The utility server is typically used to clone machines and distribute images to targets, offloading that I/O from the manager and target servers.

The Linux kernel is a 2.6.18 kernel. Basically any hardware that runs RHEL4 or RHEL5 should be good enough to run this software.

The manager interface is a typical Oracle interface. When you login you have a list of virtual machines that are being managed. There is a resource tab that allows you to create templates, push vm images, manage ISO files, and manage shared virtual disks. The servers tab shows the servers that have been registered. This allows you to reboot, poweroff, and migrate services. You can also create server pools that have specific jobs. The Administration tab allows you to create userids for server pools and allow these users to manage the pool.

In the VM tab, you can look at the virtualized machines that have been created and how they are running. VNC is used as the console of the guest operating system. A standard VNC Viewer can be used to manage the guest OS. The actions button allows you to deploy, live migrate, clone, save as template, pause, unpause, suspend, and resume. For Oracle University, for example, we can clone a classroom image and push it out to all of the desktops in the classroom. This has typically been done with scripts. It can now be done by the instructor using standard templates for the class.

There is no workstation edition or server edition. The full box gets taken over by the agent. We cannot install this on top of an operating system.

One of the features that will go into OEM will be monitoring of how well a guest runs and how much it is used. There currently is no monitoring to make this happen. There currently is not a way to clone a machine and exec post-install scripts to do things like change the root password or rehost an ip address. We try not to go into the guest os to make this happen.

The network currently ties to a network connection. There is currently no VLAN support in the vm engine. The network configuration is a slave to the linux kernel that vm is running on. This is an area worth looking into more because it sounded like the management interface does not allow you to manage this, but the tools do exist on the VM Agent to make this happen.

We will be tracking the Xen development group and tracking the code changes in Xen. We will push bugs back into the Xen code base.

question: what shared storage is supported
answer: anything supported on RHEL5 will be supported.

You can select from the target hardware which vm image you want to run. This can be done using pre-installed images on the local disk or using the wget command to download the image and run it.
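What that looks like from the command line is roughly this. Everything here is a made-up sketch – the URL and directory names are placeholders; xm is the Xen management tool available in dom0:

```shell
#!/bin/sh
# Hypothetical sketch: fetch a pre-built guest image to a VM Server
# and boot it under Xen. URL and paths are placeholders, not real ones.
IMAGE_URL="http://example.com/images/el4u5_guest.tgz"
RUN_DIR="/OVS/running_pool/el4u5_guest"

# The real commands would be something like:
#   mkdir -p "$RUN_DIR"
#   wget -O - "$IMAGE_URL" | tar xz -C "$RUN_DIR"   # download and unpack
#   xm create "$RUN_DIR/vm.cfg"                     # boot the guest domain
#   xm list                                         # confirm it is running
echo "would deploy $IMAGE_URL into $RUN_DIR"
```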

The licensing model has not changed. The license is based on the number of CPUs on the system and not the number of CPUs in the virtual image. Nothing has changed from the previous model. If you can show that you are binding to a hard number of CPUs, similar to hard partitioning, LPARs, or domains, then you can be licensed on the fewer number.

If you have a Windows system you are currently better off running on ESX until we have the drivers to make Windows run faster. Long term we want to run faster in Oracle VM.

What’s New with Oracle Data Guard


Larry Carpenter – Oracle
Sreekanth Chintale – Dell

Mirrored copy is not good for
 – up to date reporting
 – testing while providing continuous protection
 – fast online backups
 – preventing mirroring of physical corruptions

Traditional DR does not provide what people need

DG 11g has evolved to be an integrated part of IT operations
 – sync reporting replicas
 – snapshot sby for testing
 – validation prior to apply
 – fast failover and switchover
 – automatic failover

improved data protection
more manageability
increased ROI

improved data protection
 – faster redo transport
     new streaming protocol, async and arch transport. eliminates internal network acks during redo transport. Result is more efficient network utilization, less frequent buffering of workload peaks at primary location, faster gap resolution. 40MB/sec throughput with less than 1ms buffer spool between primary and standby.
 – advanced compression
     dataguard automatically compresses data transmitted to resolve gaps. Reduces transmission time 15-35% and bandwidth consumption by 35%. It does require the Advanced Compression option to be enabled.

    log_archive_dest = service=dbname ASYNC
 – lost write protection
      faulty storage hardware/firmware may lead to lost writes, leading to data corruption. 11g will catch this through Data Guard: redo apply on the physical standby can verify the writes.
     alter system set db_lost_write_protection=typical;

 – faster redo apply and SQL apply
      (Redo Apply)OLTP – 95% improvement 24MB/sec vs 47MB/sec

      (Redo Apply)Batch workload – 130% improvement (48MB/sec vs 112MB/sec)

      (SQL apply)LOB inserts – 50% improvement

      (SQL apply)support for DDL in parallel on standby. This is a potentially big problem on 10g and 9i

      (SQL apply)OLTP – 19-22% improvement
 – faster failover and switchover
      automatic failover for max performance mode. default setting is 30 seconds, minimum threshold is 10 seconds. Async mode only.
       OEM will automatically restart the DG observer on a second host if the primary observer host fails.
 – enhanced fast-start failover
        immediate automatic failover for user-configurable health conditions. Condition examples – datafile offline, corrupted control file, corrupted dictionary, inaccessible log file, stuck archiver, or any specified ORA- error
         apps can request fast-start failover using an API
 – transient logical standby
         execute rolling database upgrades using a physical standby. You can temporarily convert the physical standby to a logical standby, perform the upgrade, and then revert to a physical standby. This is a big option: it allows rolling upgrades of physical replicas with the upgrade then pushed to the primary as well.
 – new grid control HA console
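The transient logical standby item above boils down to a couple of documented 11g statements. A sketch only – the upgrade itself and the switchover steps in between are elided:

```sql
-- On the physical standby: convert to a logical standby but keep its
-- identity so it can be converted back later (11g).
ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;

-- ...upgrade the (now logical) standby to the new release, let SQL Apply
-- catch up, switch over, then flash the old primary back and resume it
-- as a physical standby of the upgraded database...
```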

More Manageability
 – SQL Apply
        more data types. XMLType data type (CLOB)
       support TDE, encrypted table spaces, FGA, VPD
       role specific jobs – ie, jobs can be defined to run if in primary or if the system is a standby. This allows the same jobs to be on all systems. There is no need to have different jobs on different systems and reconfigure jobs when a system switches.
 – Better RMAN integration
        direct instantiation of a remote standby database without the need for intermediate storage. New keyword “FROM ACTIVE DATABASE” for the RMAN duplicate command.
        block change tracking on physical standby databases
        archive deletion policies are enhanced. delete when shipped, when shipped and applied
 – Better security
         SYS user and password files are no longer required for redo transmission authentication. Authentication is possible using SSL, but this requires ASO and OID, which issues the PKI certificates and ships the certs
         redo transport authentication can use a dedicated non-SYS user. The user must have SYSOPER privileges, and the password for this user must be the same at the primary and all standbys.
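A sketch of what the dedicated transport user might look like, assuming the 11g REDO_TRANSPORT_USER parameter (the user name is hypothetical, and the same user and password must exist on the primary and every standby):

```sql
-- Hypothetical non-SYS user dedicated to redo transport authentication;
-- the password must match on the primary and all standbys
CREATE USER redo_xport IDENTIFIED BY SamePwdEverywhere1;
GRANT SYSOPER TO redo_xport;

-- Point redo transport at this user instead of SYS
ALTER SYSTEM SET REDO_TRANSPORT_USER = redo_xport SCOPE = BOTH;
```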
 – Mixed Windows/Linux
          a physical standby can be on a Windows system and a Linux system in the same DG configuration. It requires the same endianness on all platforms.
 – Enhanced Data Guard Broker
       enhanced for fast start failover. No bounce required to change protection modes from Max Perf to Max avail

Increased ROI
 – snapshot standby
        preserves zero data loss. Continuous redo transport while open read-write. (See notes below from Dell.) There will be a new button on the 11g OEM console to make this happen for 11g.
 – active data guard
   – real-time query
   – fast incremental backups

Dell: 83% of the platform is Linux, the rest Windows. Most are 10g databases (1168), some 9i (333), a few 8i (57), spread across manufacturing, HR, sales, marketing, and internal apps. When a database shuts down, the factory shuts down. All of the systems are managed through OEM. Targets (8176), cluster DBs (326), DB instances (1267), hosts (1570), ASM targets (844).

snapshot standby – a fully updateable standby that still provides DR and data protection. Requires flashback database. Single command in 11gR1.

testing and updates can be done on standby then pushed to production. There is no difference between production and standby.

prepare standby, prepare primary
create restore point on standby
convert physical standby to rw
use rw standby
roll back to restore point
ship redo logs to standby (10g only)
apply redo
make standby again

In 11g the difference is that the redo logs continue to be shipped to the standby automatically; in 10gR2 you need to do this manually. There is a whitepaper on OTN that describes the 10gR2 procedure. In 11g this is a single command, but it does require the Data Guard broker.
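The whole flow above reduces in 11g to a pair of conversion commands; a sketch (flashback database must already be enabled, and the database has to be brought to the mount state for each conversion):

```sql
-- 11g sketch: make the physical standby read-write for testing
-- (an implicit guaranteed restore point is taken; redo keeps shipping)
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- ... run destructive tests against the read-write snapshot ...

-- Flash back, discard the test changes, and resume applying redo
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```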

my opinion is that this is a cool technology. This is good stuff that solves some real problems. In my opinion this is going to be something that will have people move to 11g quickly, as well as use Data Guard and OEM.

Web Service Enabling Spatial Applications

Open Standards based for Web Services and Spatial

Some of the services that have been standardized
 – location services (routing, mapping, geocoding, directory services)
 – catalogue services (discovery, browse, and query against catalog servers)
 – map services (request/provide maps and info about the content of a map)

 – access/search/update/delete geo-spatial feature
 – access/update securely
 – manage feature privs at an instance level
 – real-time transfer of feature instances using standards

 – use SOAP for request/response
 – XML over HTTP Post method for request/response
 – Spatial for feature instance storage/retrieval
 – implement OGC filter specification for feature search
 – use WSS/LDAP for auth, row level security for instance-level priv mgmt and WSS for secure transfer of feature data
 – support publishing feature types from database data sources (complex columns, nested tables, XML types)
 – support publishing feature types from external data sources
 – implement token based locking to support WFS locking protocol to support long transaction model to artificially lock rows in the database
 – implement a feature cache in the middle tier to reduce spatial data transfer from the DB to the App server
WFS Operations
 – get capabilities
 – describe feature
 – transactional – getfeaturewithlock
                        – lockfeature
                        – insert, update, delete
the transaction and locking can span sessions. The only way to unlock is through the WFS handler, which is integrated into the OC4J/J2EE container. There is an expiration time, either the default or one specified by the client, in case the client goes away for a long time. This is done with triggers on tables/views to make sure the row is properly locked and the same client comes back with the proper token.

WFS Metadata
 – feature types, type tags, type attributes
 – complex types
 – spatial operators

there are two data sources that are supported – relational data types (PL/SQL API) and a Java API done through XSD data types. The Java API is typically used to register feature types and feature type metadata.

There are two use cases – type supplier and type consumer

fully compliant with the WFS 1.0 spec

auth is done client -> oc4j using SOAP/WSS. We then use standard oc4j connections to connect to the database. This allows the use of VPD and user views on top of the database.

demo using OEM for App Server to configure the server. We define an application and deploy it using the application administration. When we define the app we also define the security service and the data sources. In this example we use a WFS connection. We also create a web service with the certificate and signature key definitions for security.

the client side is developed using JDeveloper. The feature type/app comes from an xml description of the spatial data type. We create an empty project, add a web server proxy using the WSDL file defined by the spatial service. At this point we have a non-secure connection and need to secure the connection. We go to the proxy and edit the security parameters. In this example we use a digest with no encryption (for demo purposes only) and a public keystore and private key of the client and server. At this point we have a secure proxy and just need a password for the signature key alias. To debug this exchange we run the http analyzer in JDeveloper so that we can see the request coming from the client and the answer from the server.

At this point we have some content that we wanted to publish. We define a JDeveloper client and link the security to that of the server security policies. We then ran a test to look at the connection and verify that it is working properly. At this point we can encrypt the data and turn off the html debugger.

The second demo is to use a relational connection to the data. To do this we use the map builder to load data and push our data into the spatial database. We do this by connecting to the WSL service and looking up the interfaces that we can use. We connect and look at the features, select a spatial column (GEOM) and correlate some element in the data (State Name) to the map. At this point we can create a query and return the map data with the state name placed into the state that the query finds.

OpenLS is a translation service that is done on the back end. This operates as expected according to the standard.

Catalog Services are a little more complex. The catalog request can be spatial or relational in nature. The catalog service server returns metadata associated with the item in question. To implement this it uses SOAP for request/response and XML over HTTP Post method for request/response. The difference is that ResultSet caching will be supported to retrieve records from a single query across different web locations. This allows the user to scroll through related material that is close to the search object returned. The cache returned is currently not tunable and is handled by the server. Future releases will give hints to the cache service on what to populate.

web map service supports getcapabilities, getmap, and getfeatureinfo

this was an information packed presentation with lots of technical data. I recommend that you download the presentation once it is available. Unfortunately the best part of it was the demo which was a live demo of developing with JDeveloper and MapViewer.

caching query results in 11g

Cache consistency
 – consistency maintained by receiving notifications on subsequent roundtrip to server
 – in a relatively idle client, the cache can trail the DB by no more than CACHE_LAG milliseconds
 – changes invalidate affected cached results
 – cache bypassed if the session has outstanding transactions on tables in the query
 – free application from manually having to check for changes, poll the database, refresh result-sets
 – it is very difficult to program these checks yourself because the database does not expose all the elements required.
 – you need the 11g client and the 11g server to make this work properly
 – you need to enable statement caching to make this work. It can be done in the client or on the mid-tier

OCI consistent client cache enabling
 – works with OCI-based drivers. Requires enterprise edition to make this work
 – on the server set CLIENT_RESULT_CACHE_SIZE (must be non-zero, up to 2 GB) and CLIENT_RESULT_CACHE_LAG (3000 ms default; setting the lag to zero disables it)
 – on client (set in sqlnet.ora)
    – OCI_RESULT_CACHE_MAX_SIZE (optional)
the client values override the server settings and can be done temporarily

the query requires /*+ result_cache */ hint in the code. This will be automated at a later date
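Putting the pieces above together as a sketch (sizes are example values only; the table and query are hypothetical):

```sql
-- Server side: size the client result cache (static parameter, needs restart)
ALTER SYSTEM SET CLIENT_RESULT_CACHE_SIZE = 67108864 SCOPE = SPFILE;  -- 64 MB
ALTER SYSTEM SET CLIENT_RESULT_CACHE_LAG  = 3000     SCOPE = SPFILE;  -- ms

-- Client side (sqlnet.ora, optional override of the server setting):
-- OCI_RESULT_CACHE_MAX_SIZE = 33554432

-- The hint marks a statement as client-cacheable
SELECT /*+ result_cache */ product_id, list_price
FROM   products               -- hypothetical read-mostly table
WHERE  list_price > 100;
```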

look for candidate queries in AWR
 – frequent queries in Top SQL
 – identify candidate queries on read-only/read-mostly tables
 – sprinkle the hint on queries and measure
monitor usage
 – client_result_cache_stats$
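Checking usage can then be a simple query against that view; a sketch:

```sql
-- Client result cache statistics (one row per statistic)
SELECT stat_id, name, value
FROM   client_result_cache_stats$
ORDER  BY stat_id;
```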

Tom Kyte – How do you know what you know…

Tuesday Morning Keynote….. Tom Kyte

there have been 14 production releases of the database since I have been at Oracle. When a question comes up I have to think of the answer and think of what version the answer is correct for.

New ideas almost always come from those uncommitted to the status quo – from outsiders

educated incapacity – a barrier to creative ideas can be experience

assumptions – incorrect assumptions are barriers to creativity
                   – incorrect assumptions lead down the wrong roads

things like “group by sorts the data”. This assumption is wrong: order by sorts the data; group by may return it ordered by hash values. Empirical evidence is not always the correct view.
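A quick illustration of the point (hypothetical EMP table; with hash aggregation the first query's row order is unspecified):

```sql
-- Row order here is NOT guaranteed – it may come back in hash order
SELECT deptno, COUNT(*) FROM emp GROUP BY deptno;

-- Only an explicit ORDER BY guarantees sorted output
SELECT deptno, COUNT(*) FROM emp GROUP BY deptno ORDER BY deptno;
```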

judgements – we tried that once… it never worked… they will never buy that

An exercise was done yesterday comparing a DBA with scripts against someone who is trained with the tools. The result was that the DBA with scripts could detect the problem quicker but could not fix the problem. The DBA with the tools was able to find the problem in a similar amount of time, but the tool fixed the problem quicker. The focus for the tools that are being developed is to replace the mundane tasks, not to automate the function of the DBA.

Interesting comment. What I am good at today didn’t exist when I was 12 years old. It is important to remember that everything is transient. It is important to know the how and why and not the technology. The data is the important part that lasts, the process, application, and technology are what changes.

ROI of Oracle Database Management Packs

Noel Yuhanna
Forrester Research

ROI of Oracle Database Management Packs

all enterprises should focus on database manageability and automation to lower cost and improve DBA productivity. Databases tend to get created for a variety of reasons and never get destroyed. Over the years the number of database instances tends to grow and cause performance problems or resource conflicts with other instances running on the same server. You are able to manage more databases now than ever before, but automation is needed to make this happen.

trends in database management 2007 – 2012
 – data volumes growing at a fast pace, doubles every 18-24 months
 – newer applications like unstructured data and RFID are causing information to grow at faster and faster rates
 – 30% of large enterprises (> $1B revenue) have databases larger than 1 TB, heading toward petabytes
 – tools are helping with management of larger databases.
 – trends are centralized administration, automation, and virtualization
 – automation saves more money than outsourcing/offshoring of DBAs
 – consolidation of hardware is a trend with multiple database instances onto fewer servers part of this consolidation. It also allows for cross population of applications across department boundaries. This is a way of breaking down department barriers
 – more companies are treating the database as an app that needs HA. More customers are looking at clustering/RAC/grids, automated upgrades, and automated troubleshooting. Databases are starting to be treated as 24×7 applications, so patching and upgrade plans need to be in place
 – need for higher performance and larger workloads. We are heading towards cache databases. In the next four to five years we will see databases residing in cache memory, with a connection to another system that is the cache store residing on disk. Large training, manufacturing, and insurance verticals are starting to do this. About 30% of corporations will have cache databases by 2012.
 – increased demand for information management and data sharing. There is a trend to get unstructured data into a database. Files do not give you good data management, so files will start to move into databases. 25% of companies are storing XML in databases, up from 7% five years ago. Things like videos, audio, and images will start to move into databases.

In 1997, DBAs spent most of their time on installing, upgrades, and patching (35%), performance and troubleshooting (45%), backup and recovery (7%), load/unload (5%), security (3%), and licensing and training (5%). These numbers have shifted in 2007: install, upgrade, and patching (47%), performance tuning (25%), backup/recovery (9%), load/unload (7%), security (7%), licensing and training (5%). Major database releases are coming out every 4 years.

Patching and upgrades will increase through 2009. After that it will decline as automation makes it less complex and less time-consuming. Performance and tuning requirements are declining; the database is automating most of these functions and requires less tuning. Security will continue to grow through 2012 and tail off after that; there are too many options available and they are not simple solutions. Self-securing databases and integration outside the database will help after this. High availability and disaster recovery are substantially improving; this is getting simpler and has become point and click for many instances. Installation has gotten easier and will continue to get easier; it is the least challenging to most enterprises. Backup and recovery will get more complex as database sizes grow beyond a TB. It has gotten easier for incremental backups, but complete backups will require some innovations.

The current database-to-DBA ratio is 24:1. This has grown linearly from 10:1 in 1990. It should increase to about 30:1 in 2010, but the metric will shift to data size per DBA rather than instances per DBA; the ratio will be about 2 TB per DBA in 2010. This number is increasing because newer database versions reduce the tuning requirements and automate some of these functions. Automation and self-management are the keys to these improvements. Enterprises using a higher degree of automation, tools, and Enterprise Manager can shift this ratio up by about 50%: the current ratio is 38:1 with OEM, tools, and automation, and the data ratio increases to about 4 TB per DBA as well.

The focus on database management is shifting from managing individual databases to managing pools of databases and having all of the instances look the same. Thresholds on all of the instances are typically the same. It is easier to set these parameters the same on the majority of the databases. The exception is to have instances configured and tuned differently rather than have each one unique.

Automation helps minimize human effort. Troubleshooting and diagnostics become proactive tools instead of reactive tools. Change management minimizes the steps and reduces problems or issues in production. Centralized, policy-driven databases make administration easier. On average, automation improves DBA productivity by 20% or more. It minimizes human errors by 25% or more. Application performance is improved by 10% or more. CPU usage is reduced by 15% or more, which defers hardware upgrades.

Key objectives for companies for using OEM
 – reduce cost and improve DBA productivity
 – reduce capital spending for servers and maintenance
 – make DBA perform more value added advisory and strategic services and less basic administration, integrate them more into the business and less into IT
 – wanted to centralize administration to monitor alerts so problems could be resolved quickly.
 – provide an HA platform architecture

sample company based on companies talked to
 – $1.5B in revenue
 – 120+ server running Linux, Unix and Windows
 – one major data center and two regional center
 – > 150 databases running 9i and 10g.
 – too many custom scripts written and monitored by DBAs. This consumed a large amount of the DBA’s time for development and troubleshooting
productivity savings – $509K in DBA labor over three years from using OEM with Diagnostic and Tuning pack.

Diagnostic pack gives performance analysis, historic and trend analysis, defined thresholds, and sends and receives notifications or alerts for critical issues. This helps diagnose issues quickly and focus resources quicker.

Tuning pack helps find poorly performing SQL caused by missing index, bad execution plans, poorly formed statements. It also helps to optimize indexes and materialized views.

 – increased DBA productivity
 – reduction in system downtime due to alerts
 – 20% reduction in capital spending on servers
 – 100% ROI over a 3-year period and a 16-month payback period

 – need for doing more with less
 – DBA automation can help everyone
 – new DBMS versions can improve overall manageability. Should consider upgrade 12-18 months after a new release
 – OEM can help and focus on centralized administration

Managing Identities with Server Chaining and SmartBadges

discussion from Chevron at OpenWorld

Roger Raj – Oracle
Roy DeRenzi – Chevron

project connects a badge with a PIN code into the database and identity server. The technology behind this is how to integrate the AD server, Application Server, JD Edwards, and other Oracle app technology.

Most applications are web based. Global IDs and Desktop IDs.
SmartBadge is a common access card. It contains a Java Chip that has the user credentials. It authenticates you locally without passing passwords over the network. It also has the ability to lock/destroy itself if multiple passwords fail. The idea is to authenticate against the card and the card connects to Kerberos without passing the password across the network.

Server chaining  has identities stored in one server and referred to by another server. User entries are not replicated. The request is redirected to the other auth server. Oracle currently supports redirection to LDAP and Active Directory. It is available in the OID 10.1.4 and higher versions

Chevron uses AD for user provisioning and auth/authz. Smart badges presents a user certificate and passes the cert across the network. Kerberos is used for all IT services and tickets are passed between applications.

The unique part of this solution is that the Oracle 10gAS SSO server is linked into the Kerberos server and the ticket exchange is done with the AD server. The user is logged in to the web env without having to authenticate into the service.

Four major OID servers. One in US is master, others are primary for the site but replicas of the master. Each of the primary OID servers locally sync with the local AD server.

The mapping of an AD entry to an OID entry was relatively easy. There were some challenges, and it did require some sophisticated search filters to restrict the search and ensure that not all of the information was brought across with a user search.

Issues with sync – network bandwidth caused backlogs in updates. Group entry changes caused a flurry of transactions in a short window. The capacity of the AD server lookups was not adequate at peak loads.

Server chaining allowed the directory to be virtualized and entries did not need to be replicated between AD and OID. The OID was populated with a reference. This did put more of a load on the AD server for simple queries but not for large search queries.

When a change occurs in AD, a reference is pushed out to the OID server and not the actual data. This was challenging at first but worked better once it was deployed. By default chaining is disabled. When you enable it it allows for bind, compare, modify, and search LDAP functions. Nothing needs to be changed in AD to make this happen.

Summary, two different options, Server chaining or Directory Integration Process. One or both can be used. Chevron used server chaining to reduce replication and delays involved in walking through the AD change logs (which include login events).

A good example of where the DIP component is needed is where a portal function is required that needs more attributes in the LDAP server where AD can not or does not want to store these attributes. In this example, DIP is required and chaining can not be used.

This solution does require alternate solutions for lost badges, badges left at home and temporary access for the day, and charge backs to the departments for help desk support.