upgrade reasons

The big discussion that I get drawn into on a regular basis is “why upgrade”. Why should someone go from something that is working fine to months of effort and potential cost? What is the value in change? Why do something different? In this blog entry, I will try to address these questions.

Let’s start with a baseline. Steven Chang does a good job in his blog describing Premier Support, Extended Support, and Sustaining Support. To summarize here: Premier Support is what you want, Extended Support will cost you 20% more in support cost, and Sustaining Support is what you want to avoid. Not that Sustaining Support is that bad; it just costs more and locks you into a really old system with little or no changes. If Congress changes something like the time zones again, Sustaining Support might or might not provide a patch to fix the issue. Given that countries are now looking at changing time zones on a regular basis to save taxpayers money, Sustaining Support might cause problems if you are operating in other countries. No one really expects a patch set for 11.5.8 given how long ago it was put into Sustaining Support. No one really expects Microsoft to patch Windows 3.1 or the original version of Word.

The current releases that are supported are EBS 11.5.10, in Premier Support until Nov 2011, and EBS 12.1, until May 2014. Given that EBS 11.5.9 was released 7 years ago in Jun 2003 and left Premier Support in Jun 2008, Extended Support will not be available for this release. If you choose to stay on 11.5.10 after Nov 2010, you have until Nov 2013, when Extended Support ends. Note that all releases remain covered under Sustaining Support, but it quickly loses value the longer you use it.

The database releases that are supported for EBS are 10.2 through Nov 2011, 11.1 until Aug 2012, and 11.2 until Jan 2015. Extended Support for 10.2 ends in Jul 2013, 11.1 in Aug 2015, and 11.2 in Jan 2018. Note that the database support timelines do not sync up with the EBS release dates.

Basically what it comes down to is that the 11.5.10 version of EBS is about 6 years old and 10.2 of the database is 5 years old. Technologies have significantly changed in 5 years. Laws have changed. Processes and procedures have changed. The way that businesses do business has changed. To compensate for these changes, you have typically customized your instance and made changes to your policies and procedures. The way you get information out of these systems has also aged. Reading data directly from the database is not necessarily the right thing to do now. Many vendors have built reporting tools for EBS 11.5.9 and 11.5.10 that are also aging and do not integrate well with other software. New trends like integrating manufacturing, inventory, and cost of labor to get cost of goods sold are not possible.

There are a variety of reasons to upgrade to EBS 12.1. We will not go into them all here because they vary based on what package you are using. The biggest value that I hear is multinational general ledger support. Instead of manually converting currencies into a common currency and converting the general ledger of one country into a line item of a central general ledger, each business unit or operating company can become a part of the general ledger with drill down capabilities that bubble up into a master ledger. I realize that there are many other advantages. You can find them in the reasons to upgrade link.

The major components that are under EBS are the technology components. These components are:

  • Application Server
  • Database Server
  • Java Virtual Machine
  • Operating System

The processors that were available in 2005 were single core 3 GHz Pentium processors. These chips had 300M transistors at 65nm. Dual core systems had been introduced but were not widely used at the time because most operating systems could not take advantage of multiple cores. Quad core was introduced in 2007 with a transistor count of 800M on 45nm line widths. The newest Intel processors operate at 3.6 GHz and have double the amount of cache with more threads and parallel processing.

According to the upgrade guide, the first things that you need to do are stabilize your hardware, operating system, and database. Since the Oracle database runs on a variety of hardware and operating systems, you should select what your staff is comfortable with. It is important to realize that although Oracle does not “favor” one release over another, the release schedules are different for each operating system.

The minimum database release that is recommended for EBS 12.1 is 10.2. If you are not on this version, you should upgrade to it first. This will give you until Jul 2013 before you have to upgrade the database again. You can also upgrade to 11.1 or 11.2. This will give you five or eight years before you have to upgrade again.

The difficult questions that need to be asked are about the application server. Do you upgrade to the latest version or migrate to WebLogic Server? Given that iAS 10gR2 was initially released in 2005 and goes into extended support in 2011, you should upgrade to 10gR3 or 11gR1. The 11gR1 release came out in 2009 and goes into extended support in 2014. The safe bet would be to migrate to WebLogic, but this will require retraining your staff and changing the way that you deploy applications and test instances.

Next entry, looking at upgrading the database and different ways of getting to a newer version.

upgrade strategies

It seems that there is a bunch of IT shops that are suddenly motivated to upgrade their EBS 11i environment to R12. Interesting trend, but unusual that it is happening all at once. In the next few weeks I will be blogging on upgrade strategies, best practices, recommendations, what to avoid, and what to look for. This will not be the definitive location for everything but a reference point. Having been an admin in a past life, I realize that finding the right information is 80% of the solution. My hope is to provide a launching point to find other information.

If you do a Google search on EBS R12 upgrade, you find the following references:

What surprises me is that these are relatively interesting links but not the ones that I would want if doing an upgrade. I personally think that the following links are more relevant:

Surprisingly, Amazon does not have many books on upgrades. If you search for Oracle EBS Upgrades, it returns database upgrades for 8i to 9i. Not very useful or even relevant. You also get back a bunch of links to 11i EBS features like General Ledger, Self Service, or Financials. That tells me that either there are not many books out there or people are interested in parts and pieces of EBS and are not looking at it as a whole package.

Next post: a look at the recommended best practices, the first steps in performing an upgrade, and additional requirements for R12.

software mapping between Oracle and Sun

Now that the news has been digested that Oracle really did purchase Sun Microsystems, let’s take a qualitative look at the software available from Sun. Starting with the software index page, we see a list of major components.

1) Operating Systems. This is Solaris for Sparc and Solaris for Intel. It also includes OpenSolaris. All of these are really one and the same but different ports. OpenSolaris is the source code distribution, similar to the Linux distribution model. Solaris for Sparc is a port to the Sun hardware platform and comes in 32 and 64 bit flavors. This is the tried and true operating system of the data center. This does change the relationship with Microsoft since Oracle runs on Windows. It also changes the relationship with companies like SAP and IBM since a large part of their software suites also run on Solaris and Java.

2) Virtualization. This is the Sun xVM Ops Center, the Sun xVM Server, and Sun VirtualBox. This is a little different from the Oracle VM solution in that it is positioned primarily for Linux environments, whereas Oracle VM supports Windows as well. The Sun solution does support Windows, but they do not actively advertise or promote the fact. Both are based on the Xen hypervisor, and it will be interesting to see how the two products merge into one moving forward. The Sun xVM Ops Center, which is used to manage Containers and virtual domains, will probably be standalone for a while and then merged with Enterprise Manager, just like the Oracle VM console is being integrated.

3) Java. There is very little overlap in these technologies. Oracle does own the JRockit technology, which is a highly optimized virtual machine. Given that the middleware layer and application suites are all built on Java, I see this moving forward as a central part of the Oracle strategy. It will be interesting to see how the OpenJDK community and the Java community change moving forward. Microsoft has been in and out of lawsuits with Sun over this issue. It will be interesting to see if it will be any different with Oracle.

4) Mobile Solutions. This is mainly Java ME and JavaFX Mobile, which are directly targeted at the telco industry and portable devices. It will be interesting to see if Oracle develops an end-to-end solution for the utility and cell phone industries. It is in a unique position to take advantage of things like smart metering and smart cars with the technology that it now owns.

5) Infrastructure. This includes GlassFish, Identity Management, SOA, and software development tools. GlassFish is an interesting platform that provides a different look and feel for user interaction with applications. This falls in line with the enterprise 2.0 initiative that Oracle has. It will be interesting to see how this integrates with the existing WebLogic platform as well as the existing SOA and BPM products. The identity products will have to be merged at some point since they play in the same space and compete head to head on a regular basis.

6) Database. MySQL will be just another database that falls in line with BerkeleyDB, TimesTen, and Essbase. There will be places for it to be used. The JavaDB is also an interesting technology that has application in the embedded space.

7) StarOffice. It will be interesting to see if Oracle does anything with StarOffice. It opens a new possibility of integration of the Crystal Ball technology with access to the spreadsheet source code. The merger of these two technologies could be very interesting and could integrate well with enterprise offerings and analytic tools like Hyperion and Siebel.

8) System Management. The Sun Management Center, Sun Connection, and Sun N1 Service Provisioning Systems are all very interesting. They are enterprise class dashboards for operating a data center. It will be interesting to see how Enterprise Manager is merged with this product and how pricing will change based on the packaging of the two suites of products.

9) Developer Tools. Sun Studio 12 and NetBeans are very interesting technologies. Given that Oracle has JDeveloper and that there is truly no revenue stream for any of these products, it will be interesting to see how things move forward with either merged products or separate for different applications.

10) High Performance Computing. Given that Oracle has nothing in this space, it will be interesting to see how this migrates and develops moving forward. These were the crown jewels at Sun ten years ago, used to create some of the largest computers in the world. It will be interesting to see if this will remain a focus at Oracle. It is a slight divergence from what Oracle currently is, but it makes sense to merge high performance processing with high performance database and storage technologies.

11) Collaboration. Given that very few people use the Sun calendar and messaging servers and Oracle just released Beehive, I can see these technologies being merged quickly and easily.

It will be interesting moving forward.

Cloud computing

Great, just what we need: another term that is a marketing buzz saw. Unfortunately, this one makes sense. Cloud Computing. First there was parallel processing. Then there was grid computing, which is a very confusing term.

Oracle and Amazon made a joint announcement yesterday.

What does this really mean? It means that you can have a static IP address allocated to some processors and memory hosted by Amazon, using preconfigured images created by Oracle. It also means that you can request a higher level of service as needed: more processors, more storage, and more memory by requesting a new or expanded service. Oracle has agreed to shorter terms for licensing if you are testing or trying out services. Licensing can go from one year to five years if you choose to lease the service, with a one year price coming in at 20% of the list price.

The cost is the interesting part. The cost of a processing unit ranges from $0.10 per hour for a small instance up to $0.80 per hour for a large instance. This works out to $72/month ($864/year) up to $576/month ($6912/year) to cover the cost of the processor and memory. If you want to back up your database into persistent storage, you will need to pay about $253/month ($3036/year). What this means is that for a small startup, the cost of deploying an Oracle database works out to be:

– $864/year hardware cost
– $1500/year storage cost (800G of storage for OS, Database, and data)
– $3500/year for the database (one year lease Standard Edition)

This is pretty amazing considering the cost of configuring the same system to run Windows and SQL Server or Linux and MySQL. You truly don’t need a DBA or Unix admin full time on these systems, so you can assume that you outsource both at $1K/day for roughly one day a month each. This brings the grand total to run a system to about $30K, with $24K of that being people to support the system. At this point the labor cost is roughly 4x the hardware and software acquisition and management cost.
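The arithmetic above can be sanity-checked with a few lines of shell. The line items are the ones quoted in the list; the $24K labor figure assumes the two outsourced days a month described above:

```shell
# yearly cost sketch using the figures quoted in the post
hw=864          # small EC2 instance, $72/month
storage=1500    # 800G of storage for OS, database, and data
db=3500         # one year Standard Edition lease
labor=24000     # outsourced DBA and Unix admin days at $1K/day
infra=$((hw + storage + db))
total=$((infra + labor))
echo "infrastructure: \$${infra}/year"   # $5864/year
echo "total:          \$${total}/year"   # roughly $30K/year
```

The infrastructure line comes out just under $6K, which is where the "labor is roughly 4x" observation comes from.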

For a small company this is significant. This is roughly what many spend on advertising, and it can provide an alternative way of communicating with customers. Think of a doctor's office or a small law firm. Think of a car dealership or a small chain of restaurants. It allows these businesses to create a one-on-one relationship with their customers at all times of the day.

Interesting announcement and great possibilities.

more later…..

pidgin – part of beehive

Internal to Oracle we are in the process of updating the way that we collaborate. For years we have used Jabber internally and even got a mandate that we had to use Jabber for internal chats. I completely understand this. Google Talk and AIM are insecure protocols that can be snooped once traffic leaves your firewall, and both leave the firewall and come back in even if you are chatting with someone in the next cube. It makes sense. This week we were all given the chance to start using the IM component of the corporate Beehive rollout. We will get the mail part of it later this year because the executives and corp users are going to it now. They want to make sure it works before they roll it out to the masses. This is different from most development strategies: have the people who can direct you to fix an issue test it for you, rather than letting the masses of typical end users help debug the issues.

The newest component that we are testing is the IM client and server. The client is called Pidgin and information can be found at http://download.oracle.com/docs/cd/E10534_01/bh.100/e12034/toc.htm. Beehive information is at http://download.oracle.com/docs/cd/E10534_01/bh.100/e05393/toc.htm. The client allows you to connect to multiple servers and unify the chat window, similar to what Trillian did years ago. The nice thing about this is that it allows me to list people in different groups and see them from different perspectives. The bad thing is that I see the same person multiple ways. I might see a co-worker on the Jabber server, on the new Beehive server, and on Google Talk. I can hide these or alias these if I want. When we shut off the internal Jabber server, it should solve a lot of these issues. I don’t mind having someone show up both in my Houston list of people that I share office space with and in my Buddies list (with their AIM account).

One of the features that I really like is the ability to annotate or add notes to a contact. If someone comes up as IheartLyleLovett, I can change the label to tell me who this person really is. I can also add notes to the person as well: things like email address, phone numbers, kids' names, and what we talked about last time.

The note in the document http://download.oracle.com/docs/cd/E10534_01/bh.100/e12034/mobileclient.htm#BABIDECI that talks about an IM client on your phone is a little misleading. The previous section is talking about iPhone integration with Beehive. The next section talks about an IM client for your device. When you follow the link the IM client software is not available for the iPhone.

more later….

build vs buy

The question of building an application vs buying an application does not come up very often, but when it does it is a difficult conversation to have. I would understand if the discussion were public domain vs commercial products. That is a discussion that I have on a regular basis. I got some training on Oracle WebCenter the other day and kept comparing it to uPortal. Yes, there are differences. Yes, one is a commercial product and the other is public domain. Yes, there is value in both. The argument then comes down to dollars and training. When I was at Texas A&M I deployed a prototype of uPortal. It was more of a political fight than it was a technical challenge. We also deployed the Yale CAS server for single sign on. It was a relatively easy solution and required very few political battles. It mainly required a mandate from the university that we would no longer ship out password files and would restrict who could connect to the LDAP and Kerberos servers. This was an easy one: it increased the security of all services on campus, existing services included.

I was at a customer the other day and they were talking about writing their own hardening solution for identity. They wanted to write a solution that presents a custom image or challenge word embedded in the HTML to prevent a man-in-the-middle attack. This technology is used by many of the larger banks because it has been mandated for financial data. They want to use the technology for human resources data. It makes sense because they need to protect social security numbers.

What didn’t make sense was that they wanted to build their own solution rather than purchase one that already exists. The technology isn’t complex. It requires some Java or ASP code, a database, and a way of injecting the image into the authentication screen. This is effectively what CAS does, without the custom images. It would be a simple step to change CAS to support changing images or pass phrases, but challenging to present a floating keypad or keyboard. Oracle provides this with Oracle Adaptive Access Manager. This product provides the floating keyboard, challenge questions, and custom images as well as a risk analysis tool. I don’t want to get into the details of the product because you can find them yourself.

My question is: how do you justify building something vs buying something? If the product will cost you on the order of $100K (and I have no clue how much it actually costs), how many programmers does that translate to, and how much support cost is required to reproduce something like it? If we look at a parallel: if a car costs $50K, how many mechanics would it take to get a car from the junk yard and build you a new one, or build one from scratch from a kit? When was the last time you saw a kit car or kit airplane? I see a bunch of custom homes and spec homes being built, but the vast majority are as-is with customizations. I think software is similar to this.

In doing some research on the cost of software and how much a developer can produce on a daily basis, the numbers are difficult to pin down. They range from $20-$100 per line of code and 15-40 lines of code generated per day. If we look at the CAS code, it has about 50K lines of code. This suggests that developing this software would conservatively cost $1M and take about a thousand days. You can parallelize this and assign three or four people to it, reducing it to roughly 250 days. This says that in a year you could re-write the CAS code from scratch and come out with a production quality supported package. Alternatively, you could spend $50K and assign a full time staff person for a year to test, implement, integrate, and deploy this system into your production environment.
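The estimate above can be sketched in a few lines of shell. I am taking the low end of the cost range, a round productivity figure just above the quoted 15-40 lines a day (which is what it takes to land on the ~1000-day number), and a hypothetical four-person team:

```shell
# back-of-the-envelope build cost for a ~50K-line code base like CAS
loc=50000
cost_per_line=20      # low end of the $20-$100 range
lines_per_day=50      # round figure just above the quoted 15-40 range
team=4                # hypothetical team size
cost=$((loc * cost_per_line))
days=$((loc / lines_per_day))
calendar=$((days / team))
echo "cost:          \$${cost}"        # $1,000,000
echo "person-days:   ${days}"          # 1000
echo "calendar days: ${calendar} with a team of ${team}"   # 250
```

Doubling the cost-per-line or halving the daily output pushes the build option even further behind the buy option, which is the point of the comparison.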

It makes sense to me that buying is the way to go. Unfortunately, I am on the vendor side and am having trouble seeing the value in building my own software, or car, or computer, or phone system, or bicycle from scrap parts. I guess I have been away from the university too long…..

New VM Hardware and renewed effort

After weeks of waiting and the pain of backing up my laptop, I got a new Dell Latitude D630. This system has the new virtualization hardware so that I can do an HVM (hardware virtualized machine). I tried doing this with a Dell D620 but the hardware did not support virtualization.

1) backup the existing laptop. I wanted to make sure I didn’t lose the Windows XP image since they are getting harder and harder to find these days.

2) remove the disk that is in the laptop and put in another disk. I really believe in backing up an image that works. I didn’t want to scratch anything that works because getting it to work on the corp network is a real pain.

3) boot the laptop into the BIOS and turn on virtualization. By default it is turned off. It was a little difficult to find but it was under the system settings and was clearly labeled virtualization. I had to walk through all options to find it.

4) boot the laptop from the VM Server cd. This allows me to configure and start the VM server on the laptop. The system comes up in a Linux kernel with no X Window System. The command line login: is the only friendly prompt presented.

5) configure the VM Master to recognize the VM Server as a system that I can manipulate. I also made this system the pool master and utility server. I typically would have made another machine serve these roles, but given that I only have two laptops I did what I could. In retrospect, it would have been nicer to make the VM Master the pool master and utility server, but that isn’t possible since the pool master has to be part of the pool.

6) download the HVM and PVM templates from the otn.oracle.com site. I downloaded all of them so that I could play with them.

7) download the Linux iso files so that I can play with installing and configuring virtual machines from scratch if needed and create my own templates

8) convert a Windows installation CD into an iso so that I can boot from it as well. I did this on the VM Server by using the following command
$ dd if=/dev/cdrom of=WinXP.iso
When the command finishes I have a bootable copy of the Windows XP CD and just need to keep the product key handy for the installation.
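One sanity check worth doing after the dd finishes is comparing a checksum of the device against the image. The sketch below uses a scratch file in place of /dev/cdrom so the commands can be run anywhere; on the real system you would checksum the device itself:

```shell
# verify that a dd copy matches its source byte for byte
# (a scratch file stands in for /dev/cdrom in this sketch)
src=/tmp/fake_cdrom.img
dd if=/dev/zero of="$src" bs=1024 count=64 2>/dev/null
dd if="$src" of=/tmp/WinXP.iso 2>/dev/null
md5sum "$src" /tmp/WinXP.iso    # the two checksums should be identical
```

If the checksums differ, the read hit a bad sector or stopped early, and the iso will not boot reliably.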

9) Just for grins I also downloaded the Solaris 10 x86 media and the Knoppix 5.1.1 media so that I can test different operating systems.

10) Once I had all the software downloaded, I had to copy the templates and iso images to my VM Master. The configuration that I have is as follows:

home computer ----------------> hardware router ------> internet connection
VM Master -----(wireless)-----> wireless hub ---------> hardware router
VM Master <----(hardwired)----> VM Server               (private, static addresses)
The VM Master is configured to be dhcp on the wireless hub. I wanted the VM Server to also work on the wireless hub, but the installation did not see the wireless network when I installed the vm software. I configured the VM Master and the VM Server with static addresses on the hardwired connection, because I wanted to see if I could use both the hardwired and wireless interfaces on two different networks. The benefit of this is that I should be able to use the hardwired network for private communications and the wireless network for the internet connection. The drawback to this configuration is that the VM Server did not recognize the wireless network, so any communications to this system had to happen through the VM Master. I think that this is a problem that I can solve, but I have not spent the required time to get it working. I have gotten spoiled with the GUI interfaces in Linux and forgot how to plumb a network interface and enable it from the command line. I think that I will try this after I get a few operating systems installed and working.
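For reference, plumbing an interface by hand on a Linux build of this vintage is only a couple of commands. This is a sketch with a placeholder interface name and addresses, not the ones on my network:

```shell
# bring up a second interface with a static address (run as root)
# eth1 and 192.168.50.2 are placeholders, not real values from my setup
ifconfig eth1 192.168.50.2 netmask 255.255.255.0 up
# or, with the newer iproute2 tools
ip addr add 192.168.50.2/24 dev eth1
ip link set eth1 up
route -n    # verify the new network shows up in the routing table
```

Either style works; the ifconfig form is what the older installers expect, while the ip commands are what newer distributions document.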

11) now that I have the iso images and templates on the VM Master, I need to configure the images to work on the VM Server. To do this I go into the console (http://localhost:8888/OVS) and login as admin. I created an account for myself and gave the account admin privs, but I do not see the resources or approvals listings for imported images. I need to research this more because I do not want to have everything go through the admin account for iso and template approvals. To get the images to the VM Server I either need to ftp the images there, configure the VM Master to be an http server, or configure the VM Master to be an ftp server.

I copied some of the images to the /OVS/iso_pool and /OVS/seed_pool directories so that I could import a local ISO or local template. This is done by going to the Resources tab, selecting the ISO Files sub-tab, and clicking the import button. You get the option of External ISO or Internal ISO. If you pick internal, it looks in the directory structure of /OVS/iso_pool and picks a directory there to import. If there are no new directories that have not already been imported, nothing will be shown to you.

When you select internal iso you get a screen that shows Server Pool Name, ISO Group, ISO Label, and Description. It is important to note that the order of selection is critical. You cannot select an ISO label before you have selected a pool name and ISO group. You need to select the Pool Name first. When you pull down this menu, it shows all of the pool servers that you have configured. In my case, there is one option, VMServer-1. Once I select this, the ISO Group pull down gets populated. If there are no new directories or iso files, you are not presented with an option and need to logout and log back in to clear the logic. This is a little crazy but it works. If you cancel, the cookies and scripting mechanism do not get properly cleared.
If there are new iso directories and iso files in the iso_pool directory, the ISO Group label will reflect the directory name. You can easily test this by typing the following commands
$ cd /OVS/iso_pool
$ mkdir test
$ cd test
$ touch test.iso
When you go into the ISO Internal import, you should see VMServer-1 as the server pool name, test as the iso group, and test.iso as the iso label. The directory that we created, test, is effectively the ISO group. The file that we created, test.iso, is the ISO label. We can create more files in this directory and associate them with the iso group. This is typically the case when you download the four or five discs that make up a Linux distribution. If you can download a DVD image of the distribution, you will typically have only one file associated with an iso group. If there are multiple discs, you need to associate multiple iso labels with the iso group.
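The multi-disc case follows the same directory-equals-group convention. A sketch, using a hypothetical distribution name and a /tmp stand-in for /OVS/iso_pool so the commands can be run anywhere:

```shell
# one ISO group ("mydistro") containing several ISO labels, one per disc
pool=/tmp/iso_pool                       # stand-in for /OVS/iso_pool
mkdir -p "$pool/mydistro"                # directory = ISO group
touch "$pool/mydistro/mydistro-disc1.iso"   # each file = one ISO label
touch "$pool/mydistro/mydistro-disc2.iso"
ls "$pool/mydistro"
```

In the console you would then see mydistro offered as the ISO group with both disc images selectable as labels under it.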

12) Now that we have a successful iso defined with a local import, let’s look at importing from an external iso. To test this, I configured the vmmaster to be an http server by using the apache server that is installed with a typical Linux installation. I had to rename the /etc/httpd/conf.d/ssl.conf file to /etc/httpd/conf.d/ssl.conf.orig to disable the ssl connection. I didn’t want to hassle with certificate configuration and just wanted to get a simple http server up and running. I then created a link in /var/www/html to point to the directory that has all of the iso images. This was done with ln -s /oracle/vm /var/www/html/vm. It is important to note that this worked because the /var and /oracle directories are on the same disk. If they were not, I would have to copy the files to the /var disk. Once I had this configured I was able to start the http server by executing the commands
$ mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.orig
$ ln -s /oracle/vm /var/www/html/vm
$ sudo /etc/rc3.d/K15httpd start
The start responds with an error or an OK. The OK shows that the apache server is up and running. At this point I can go to a web browser on my home computer or on the vmmaster, go to http://vmmaster/vm, and see a directory listing of the iso images.

13) Once I have the iso images available from http, I then go back to the external iso import. This menu option presents a server pool name, an iso group, an iso label, a url, and a description. The pool name is a pull down and only gives us the option of VMServer-1 since we have only one pool server. The ISO Group, ISO Label, and URL are text fields that we need to enter. I executed the following commands on the vm master to configure a dummy iso file as we did earlier on the vm server.
$ cd /oracle/vm
$ mkdir junk
$ touch junk/junk.iso
At this point we should be able to see the iso file by going to http://vmmaster/vm/junk and seeing the file of zero length. Just as a test, let’s enter test2 as the ISO group and test2 as the ISO label, using http://vmmaster/vm/junk/junk.iso as the URL. When I do this I get a confirmation screen and get routed back to the list of iso files. In this list we see an ISO Label of test2, ISO Group of test2, File Name of test2.iso, and Status of Importing. When we refresh we see a status of Pending. We can approve this by clicking on the radio button to the left of the ISO label and selecting the Approve button above it. Once we approve it we see a Status of Active. The directory structure created on the vm server is /OVS/iso_pool/test2 with a file named test2.iso. It is important to note that even though we labeled the directory junk and the file junk.iso on the vm master, the file comes across as test2 and test2.iso on the vm server. I was a little surprised by the external import function. I thought it would just register the location of the iso file and allow me to use it across the network. This is not the case. It actually copies the file from the URL location into the iso_pool directory. I would have preferred that it left the file on the other server and left room for the vm images on the vm server.

14) Now that I have a proper iso image, I can create a virtual machine. This is done by going to the Virtual Machines tab and clicking on the Create Virtual Machine button at the top right. You are given the choice of installing from a template or from media. We will choose to install via the media. The machine will go into the VMServer-1 server pool (since this is the only one that we have). We will select an iso file to boot from. In this instance I select the WinXP.iso file just to see if the Windows media installs properly. I also have the option of installing a fully virtualized vm or a paravirtualized machine. For Windows I have to pick fully virtualized. The next screen allows us to name our vm, allocate memory, cpu, and disk, as well as set our console password. For this example I will create a 1 CPU system with 1G of memory and 8G of disk. Before I create the image, I make sure that I have 8G of disk space in the /OVS directory on the vm server. The file will be created in /OVS/running_pool with the name I specify prefixed with a number. This number is generated by the vm master to uniquely identify an image just in case someone picks a previously used name. In this example I pick the Virtual Machine Name of test01, Operating System of Other, 1 CPU, 1024 of memory, 8096 of disk, and set the password to something I can remember. I also check the network interface card presented. In my case there is one and only one choice, so it does not matter. Once I confirm this I get dropped back into the Virtual Machines tab with a status of Creating for the new machine. This creates activity on the vmserver, constructing the /OVS/running_pool/84_test01/System.img and /OVS/running_pool/84_test01/vm.cfg files. The System.img file is the disk for the virtual machine. Everything is kept in this file, which can be moved from system to system as needed to run the vm on another hardware box.
The vm.cfg file defines the operating parameters for the virtual machine including the device model, disk location, kernel loader, and memory to allocate. It also associates a network interface and vnc parameters for the remote console. The vnc password is stored in clear text in this file, so it is important to make sure that your root password is not the same as or a derivative of this password. The generation takes a while and when it is finished some audit logs are displayed on the vm server console and the status changes from Creating to Running.
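To make this concrete, here is a sketch of what a generated vm.cfg might look like for the test01 example. The exact values and device names are illustrative, not copied from my file; Xen vm.cfg files use Python assignment syntax.

```python
# Sketch of a generated vm.cfg for the test01 example (values illustrative)
name = '84_test01'                  # vm master prefixes the name with a unique number
builder = 'hvm'                     # fully virtualized, as required for Windows
memory = 1024                       # MB of memory allocated to the guest
vcpus = 1
disk = ['file:/OVS/running_pool/84_test01/System.img,hda,w']
vif = ['bridge=xenbr0']             # the single network interface presented
vnc = 1
vncpasswd = 'changeme'              # stored in clear text -- keep it distinct from root's password
on_reboot = 'restart'
on_crash = 'restart'
```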

15) Now that I have a running virtual machine, I need to connect to the console. To do this I use the browser on the vmmaster. In the Virtual Machines tab I select the test01 vm and click on the Console button. When I first tried this on the VM Master box, I got a plug-in required message. I tried to download the plugin but did not find one. From the documentation I realized that I needed to configure the Firefox browser with the proper plugin and restart the browser:
# cp /opt/ovm-console/etc/mozpluggerrc /etc/
# cp /opt/ovm-console/bin/* /usr/bin
# cp /opt/ovm-console/lib/mozilla/plugins/ovm-console-mozplugger.so /home/oracle/firefox/plugins/
I had to use the /home/oracle/firefox directory because I linked this installation to the browser launch button. I did not want to use the default browser that comes with Linux and preferred to use Firefox, which I downloaded and installed separately. I tried to do the same thing from my home desktop, which is running WinXP, but the connection failed because I did not have a good java installation. I decided to set this aside and work on it later.

16) Once I got the console up and running (and the right password entered), I saw the boot screen for Windows XP. Everything worked fine until it came time to reboot the first time. The system went through all of the questions and selections and started to reboot. When this happened, something went wrong with the console connection. The vm never rebooted properly and I had to halt the vm, disconnect from the console, and restart the vm. I would have liked to keep the console up and watch the vm restart, but I could not. If I have the console up and shut down the virtual machine, the console keeps showing what it was showing before the shutdown. I would have expected to see the shutdown procedure or a message that the virtual machine was down, but it remained a static image of what happened before the shutdown. To fix this I had to exit the console, restart the virtual machine, and restart the console. This isn’t a big issue, just something that I didn’t expect.

17) Once I got everything running the way I wanted after a reboot, I was able to shut the Windows image down into a hibernation state as well as a power off state. The resume from hibernation took me to the Windows login as expected. If I select Turn Off Computer from Windows, I see the Windows is shutting down note, but from the vm master it appears that the vm is still running. I never see the shutdown complete or see the virtual machine go to a power off condition. This is a little disconcerting because I don’t know when it is ok to power off the machine. If Windows is still saving state, I might lose something with a power off. It would be nice if the operating system power off would signal the vm master and put the machine in a power off state. This might be caused by the vm.cfg file parameters. I currently have on_crash and on_reboot set to restart. I might need to set them, or another parameter, to something different to power off the vm.
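If I dig into this later, the lifecycle settings I would experiment with look something like the following. These are the standard Xen lifecycle parameters; whether they fix the behavior here is my speculation, not something I have verified.

```python
# Hypothetical vm.cfg lifecycle settings to make a guest shutdown power the domain off
on_poweroff = 'destroy'   # release the domain when the guest OS shuts itself down
on_reboot = 'restart'     # still restart on a guest-initiated reboot
on_crash = 'restart'      # restart automatically if the guest crashes
```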

18) I was also able to delete the vm image once it was powered down. If the image is running (even though the OS is shut down) I am not given the option to delete. I can only power off or launch the console. If I shut down Windows, exit the console, and try to relaunch the console, I get a Connection to vnc server failed: Connection refused. This is a good indication that Windows has finished, but something more elegant would be appreciated. At this point I can power on the vm, delete it, clone it, or save it as a template. If I try to do something like pause a powered off vm, I get an error message that the Virtual machine is not ready to be paused. I can clone the powered off vm. I just have to enter a new name, the number of copies, the pool server, and the group name that I want to copy it to. I see this being very useful for development. I can work on something and create a golden master of a release. I can then clone the os and application installation and branch the image for other people. That way we don’t corrupt each other’s test environments. In my instance I choose to create a virtual machine name of test020 with one copy into group name test. When I get back to the Virtual Machines page I see that winxp_test is in a state of Cloning and test020 is in a state of Creating. If I look on the vm server I see /OVS/running_pool/86_test020 being populated with a System.img and vm.cfg file. Once the audit record is displayed on the screen I can see that both images are in a powered off status. This process takes a while because the 8G System.img file is copied to the new directory.

more later…..

what the heck is a data warehouse

Gartner says that data warehousing is the next big thing. Oracle is big in this space. It is something that many customers are interested in. This raises the questions: what the heck is a data warehouse and why should you care?

The key words that are usually associated with data warehousing are consolidation, business intelligence, single server, data mining, and analytics. What I can gather is that data warehousing is a mechanism for gathering key metrics about the business on one server so that you can analyze trends, changes, or requirements of your business. Isn’t this what your database does for you anyway? Well, not really. You use the database to store HR data for payroll and benefits. You use a different instance to store inventory and accounts receivable. Yet another for manufacturing or shipping. Some of these are transactional in nature with entries happening every minute. Others are batch operations that generate paychecks every week, two weeks, or once a month.

Unfortunately, most companies have different departments that manage this data. Accounting and finance deal with things money related. Manufacturing deals with inventory, catalogs, and part lists. HR deals with people in the company. Sales deals with customer information and retail web sites. Data warehousing tries to integrate these departments and aggregate much of the data so that an executive can look at things like manufacturing defects and customer satisfaction issues as well as inventory turns and the cost of shipping and storing parts. If our company, for example, generates plastic parts for our product and that product is starting to have a higher rate of returns and customer complaints, it might be of benefit to look at our suppliers of plastic as well as the die machine that molds the plastic parts. If the die machine is old or does not work properly with a new additive from our supplier, we need to find this out. The inventory and parts list repository will not show this.

My understanding is that you can currently create a star schema that crosses many tables in one database instance. This allows you to create materialized views that cross department boundaries. Most modern databases can cache and optimize these views. When data is updated in your inventory system, it is theoretically copied to your data warehouse and the materialized view in your data warehouse is updated. This does not negatively impact your inventory system other than copying the data to a remote system. I don’t truly understand the need for data cubes and how they apply to this technology. It is something that I need to do more research on.
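To make the star schema idea concrete, here is a minimal sketch in Python with invented tables and figures: one fact table of sales rows keyed to two small dimension tables, rolled up the way a materialized view in the warehouse would precompute it.

```python
# Minimal star-schema sketch: a fact table keyed to two dimension tables.
# All names and figures are invented for illustration.
dim_product = {1: 'plastic widget', 2: 'metal widget'}
dim_region = {10: 'US', 20: 'Europe'}

# fact rows: (product_key, region_key, units_sold, returns)
fact_sales = [
    (1, 10, 500, 40),
    (1, 20, 300, 35),
    (2, 10, 450, 2),
]

# Roll up return rates by product -- the kind of cross-department aggregate
# a materialized view in the warehouse would precompute.
rollup = {}
for product_key, region_key, units, returns in fact_sales:
    sold, ret = rollup.get(product_key, (0, 0))
    rollup[product_key] = (sold + units, ret + returns)

for key, (sold, ret) in rollup.items():
    print(f"{dim_product[key]}: {ret / sold:.1%} return rate")
```

This is the kind of query that would surface the plastic-parts problem described above: a return rate rising for one product line across every region at once.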

Data mining is another concept that comes up as part of a warehouse. It is typically said in conjunction with modeling and predictive analytics. Regression analysis and linear models appear to be important, as well as deviation from predictions. I remember some of this from control systems, but I am sure that the technology for keeping a pendulum swinging upside down is different from that of keeping inventory minimized and product quality maximized.

ETL is another feature that seems to be important. This piece typically consists of a point to point connection to a data stream and data translation in some way. If something changes in your mainframe, it is copied into your warehouse and converted from the format in which it is stored into a format that works in a database. It also has the ability to convert data from one representation to another. For example, sales in Europe might be stored in Euros. If we want to aggregate data into a warehouse we might want to standardize on US Dollars as it is copied.
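A toy sketch of the transform step, with an invented record layout and exchange rate:

```python
# Toy ETL transform: normalize everything to USD as rows are copied to the warehouse.
# The record layout and exchange rate are invented for illustration.
EUR_TO_USD = 1.30  # in practice this would come from a rates feed at load time

source_rows = [
    {'region': 'Europe', 'amount': 1000.0, 'currency': 'EUR'},
    {'region': 'US',     'amount': 2500.0, 'currency': 'USD'},
]

warehouse = []
for row in source_rows:
    amount = row['amount'] * EUR_TO_USD if row['currency'] == 'EUR' else row['amount']
    warehouse.append({'region': row['region'], 'amount_usd': round(amount, 2)})
```

The time-basis question raised below (converting at the rate in effect when money was actually exchanged with the bank) would make the rate a lookup keyed by transaction date rather than a single constant.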

The biggest problems that I have seen for building a data warehouse are not technical but political. The key issues are around funding, data ownership, and data access. No one wants to fund something for the whole company out of their budget. Since IT is typically a cost center and not a revenue center, it is difficult for IT to begin this type of initiative. What usually happens is that you get a mini-warehouse between two departments or data shared in a spreadsheet that is not available to anyone else. Sometimes you do see an analytics tool that runs on top of a manufacturing system but only runs at night, since the system is tuned for a high rate of transactions during the day and not batch operations. Mixing the two has traditionally caused problems with one or both functions.

What I need to figure out moving forward with this is:
1) how do cubes relate to data warehousing and what value do they bring?
2) what impact is there on copying data to a data warehouse from my existing systems?
3) is there a good example that shows a tangible benefit to building a warehouse over building integration between two or three systems?
4) who is the end user and what is a typical use case for a warehouse? Is it just used by higher level managers or by mid tier managers to run their operations?

why use sharepoint

I was watching a Phil Lee podcast available through PodTechNet. The discussion was about Microsoft SharePoint Collaboration Server. The discussion made me start thinking.

The legacy behind the SharePoint product is that it started as Site Server and migrated to SharePoint 1.0. Microsoft then merged FrontPage into Windows SharePoint Services, which morphed into SharePoint Portal Server 2003. The idea is to take the typical tools that everyone uses (Word, Excel, and PowerPoint) and enable users to share these applications as a collaborative service. This isn’t quite Documentum, FileNet, or the other ECM products. It is more of a collaboration tool and not a document control mechanism. It does not have the business process integration of a traditional ECM product.

SharePoint Server 2007 does incorporate the Content Management Server 2002 and Office 2007. This is considered to be the money maker for Microsoft. The key to the code is splitting the services into different components. It consists of Collaboration, a Portal, a Search Engine, Content Management Services, Business Analytics, and Business Process Automation. The service layer under these components provides things like information lifecycle management of documents, security and access controls, and other features to address government compliance and data retention regulations.

The concept here is that if a user is a historical Office Suite user, there is an opportunity to upgrade everything to collaborative services and corporate standards with the new Suite and SharePoint combined. These services are also starting to include some basic templates for back office applications like inventory, basic accounting, and HR. These templates are not robust and will probably only work for mom and pop shops.

The demo that was given showed a contracts folder that allows a group of people to create documents with Excel, share these documents as files, or make them available offline through Outlook. The tool also has a discussion group mechanism that allows for a newsgroup style discussion. This service is available offline but also works like an instant messenger chat session if you are connected.

The demo showed an example of filing an expense report through a template. This integrates a Forms server on the SharePoint server, similar to the way the Oracle Forms server integrates with an Application Server. The expense tool showed basic forms.

The way that file sharing is done has changed slightly. Rather than looking at a collection of PowerPoints dropped into a directory, users can subscribe to a shared channel to view presentations generated by a group. This is a little different from a web page service because it allows people to be notified when something drops into a channel.

SharePoint is becoming pervasive because many companies own Office 2007 and other related technologies. They typically have at least one SQL Server license, and there isn’t really a good collaboration tool other than email and IM. SharePoint gives you a tool to easily expand use of other tools that users are familiar with.

Some of the drawbacks are that it is easy to create islands of information, and problems arise when different SharePoint instances need to merge. The storage structure, topology, and data organization become issues. There really isn’t a sync or offline way of attaching to this. It is also a Windows only solution. There is also a big step in going to Office 2007. Once you go, everyone will have to be able to read the new file storage format. This is an all or nothing step.

This product does have good momentum. It does integrate a variety of tools that people are familiar with.

– end of training video –

In my opinion, this looks like a good tool. I am not a fan of Microsoft products because they typically only work on Windows, but that does not seem to be a problem for most corporations. I don’t think it integrates very well with documents that need to be scanned or documents that are not in Windows formats. The search engine and security mechanisms are not as granular or robust as they need to be. Fortunately for Oracle, we do have some good extension products that we got with the Stellent acquisition that allow us to integrate SharePoint data and other products like Documentum together. It seems like every customer that we talk to has an ECM solution and it needs to integrate with SharePoint. Many companies are treating it like a web version of Outlook that does collaboration and joint scheduling. Not a bad concept, but many of these companies are moving forward with initiatives in different business units that isolate sharing between departments. This is causing integration and standards problems within their company when they try to aggregate into one SharePoint site.


distributed and high availability

I have had a discussion at a number of customers about how to expand their existing footprint. They are regional companies that have merged with or acquired another company. The new organization is typically an international company. The question comes up of how to let the users from Singapore, Sydney, or Scotland see the financials, inventory, and HR data without standard operations taking forever.

The first step that I always recommend is to take inventory of what they have and how it is configured and accessed. If you have central servers in a major metropolitan area in the US, make sure that you have high speed access to your intranet. This means something larger than a T1 connection for most companies. Once you have the high speed connection, make sure that you have two data centers. It does not matter where the remote data center is, just make sure that it isn’t in the same building or server room. If you could have a site failure because the power company goes down or a storm floods the city, put your alternate data center in another city. If you are looking at supporting operations in Europe, Asia, or China, why not put an alternate data center there? Typical reasons that stop people from putting data centers there are legal, logistical, and political. Some countries do have restrictions on what data can be exported from the country. France, for example, does not let HR data leave the country. Employee data for French citizens must stay in France. If you have a large presence in Asia, you might want to put your data center in a city like Singapore because it has advanced and reliable network infrastructure. Putting it in China using the publicly available network might lead to problems or random outages. The same is true for many countries in Africa.

Once you have alternate sites available, make sure that you have two network connections into and out of each site. If one network goes down, the second network connection provides backup. Many companies in the US only have one internet connection because maintaining two high speed connections can be expensive. Many of the services allow you to pay as you go, where you only pay for the bandwidth that you use. This is an excellent idea but it does not allow for predictable cost. If you generate an event that causes a significant amount of traffic, like a new product release or a new marketing event, your network cost can go through the roof. Many companies pay for a high speed fast pipe like an OC3 or OC12 and supplement this with a variable network connection. The variable network connection is the standby failover line and is only used in the event of failure or substantial congestion on the high speed fast pipe.

Once you have the network plumbing working, you need to look at latency between the sites. The latency needs to be guaranteed at less than 300ms, typically less than 150ms, for the Oracle suite of applications to work properly. Typical latency around the world on an uncongested network is on the order of 85ms. Network latency inside a data center is typically on the order of 10ms. If you can fall into the 30 to 50ms range, things will work properly and you typically will not notice that you are at a different data center. When latency jumps to 100ms plus, users will start to complain of slow applications. Realize that for each web page interaction there are four to five round trips on the network to transfer the web page. This means half a second per web page before it starts to paint. This delay is noticeable to the user.
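The round-trip arithmetic is easy to check:

```python
# Back-of-the-envelope check: at 100 ms latency and 5 round trips per page,
# each page waits half a second in network latency alone before it paints.
latency_ms = 100
round_trips = 5
delay_s = latency_ms * round_trips / 1000
print(delay_s)  # 0.5
```

At the 30 to 50ms range the same five round trips cost only 0.15 to 0.25 seconds, which is why users at that latency rarely notice the remote data center.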

The discussions that I have had recently are about how to split financials and HR data between two sites. If I have operations in Europe and operations in the US, can I have the European data in London and my US data in Houston? The answer is yes and no. Yes, I can have my Oracle Apps running in both locations. Yes, I can have a database that stores the European data on a database server in London. Yes, I can have a database that stores US data in Houston. The big problem is how do I run two financial systems and merge these repositories? How do I convert between Euros and Dollars? How do I report expenses in multiple currencies and convert this data on a time basis based on when I exchange money with my bank? How do I adhere to the tax laws in the US and the tax laws in the UK? There is more to consider than just whether I can store my data in multiple locations.

You can have Oracle Apps running in two different sites and have two different databases in the two sites as well. The number two is not special, this could be twenty or twelve just as easily. The difficulty comes in merging this data into one corporate view. If, for example, I want to look for a part in our inventory, I want to look first locally then internationally. The search can be done through Oracle Apps (PeopleSoft, Siebel, JD Edwards, Hyperion, etc.) with a regional restriction or with a global search. The regional restriction will look locally. The global search will look in multiple tables or data partitions. You can replicate the European data to the US using DataGuard and the US data to Europe the same way. This will give you two copies of the data at each site, with one being the primary and the second being a logical or physical standby. The standby data can be accessed in read-only mode and changes can be pushed across to the primary using a Streams interface. The drawback to this is that 9i and 10g restrict availability for standby databases that are opened read-only. The redo apply stalls until the database is closed. The 11g database allows you to have the standby open and have the redo applied to it at the same time.

Distributing applications internationally is a complex problem. There are many alternatives and options to make it work. There are many problems and issues that you need to think about beyond simple data synchronization. This is a topic that I will probably visit a few more times.