more on OracleVM

Virtualization technologies

two different approaches

1) server virtualization – you allow multiple operating systems to run on one server (OracleVM, VMWare, IBM LPAR, Sun Domains)

2) server pooling – you aggregate many physical servers into a single logical server (RAC, Grid/PVM/Linda)

both work but they address two different problems.

server virtualization

– hardware partitions (physical, dynamic, or shared partitions)

– virtual machines (binary translation, paravirtualization, hardware assisted)

– os level (os partitioning, resource managers)

For the hardware assisted approach, Intel and AMD are working on solving the problem in hardware. We are currently at release 1A of this effort, with about 3-4 years to get to the optimum solution. The hardware assist is still not the fastest in the world. Things like memory swaps between os images are not very efficient. The hardware wants to swap out a block the size of the memory management unit controller. Most software partitions are much smaller than this, so the end result is that the hardware swaps out to access a few blocks and then has to swap out again.

customer value for looking at vm

1) it saves money. server consolidation and data center space/power requirements

2) server rebuild and application load went from 20-40 hours to 15-30 minutes

3) availability – live migration and virtual machine failover

For live migration, you must have shared storage. It needs to be NAS or SAN with a good network connection. You need the ability to transfer the memory of server A to server B across the network. To do a live migration, I copy all of the memory from box A to box B while recording the memory updates on box A. I then repeat this process, copying only the bytes that changed between rounds. At some point I move the operation from box A to box B. There are some states where the set of changed bytes never converges: the number of bytes changing per round can at times exceed the bandwidth of the network.
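That pre-copy loop can be sketched with a toy model. The page counts and per-round link bandwidth below are made-up numbers, and a real hypervisor tracks dirty pages through shadow page tables rather than a constant dirty rate, so treat this as an illustration of the convergence argument only:

```python
# Toy model of the pre-copy live-migration loop described above.
# Assumed numbers only: a real hypervisor tracks dirty pages with
# shadow page tables; here the dirty rate is a simple constant.

def precopy_rounds(total_pages, dirty_per_round, link_pages_per_round, max_rounds=30):
    """Rounds until the dirty set is small enough for a final
    stop-and-copy, or None if the dirty rate beats the network."""
    remaining = total_pages                      # round 1 copies all memory
    for rnd in range(1, max_rounds + 1):
        if remaining <= link_pages_per_round // 10:
            return rnd                           # small enough: pause and finish
        # while `remaining` pages cross the wire, the guest keeps dirtying pages
        copy_time = remaining / link_pages_per_round
        remaining = min(total_pages, int(dirty_per_round * copy_time))
    return None                                  # never converged

print(precopy_rounds(262144, 1024, 4096))   # write-light guest converges quickly
print(precopy_rounds(262144, 8192, 4096))   # dirty rate beats bandwidth: None
```

The second call shows the non-converging case from the text: when the guest dirties memory faster than the link can carry it, the loop never shrinks the changed set and the migration has to fall back to a longer stop-and-copy pause.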

For the VMWare product there are severe restrictions on the hardware requirements when you look at live migration. You cannot live migrate from a Xeon to an Opteron system. It must be between the same chip set with similar configurations. With Xen, this is not the case. You can live migrate between boxes because we do not insert hypercode into the instruction stream but rely upon the paravirtualized operating system or hardware assist for ring 0 management issues.

What does it mean for an application to support virtualization?

1) static configuration support – it runs in a vm and expects nothing to change

2) dynamic reconfiguration support – the program does not fail if resources change

3) dynamic reconfiguration aware – the program is designed to recognize and dynamically adapt to resource changes (for example, changing the number of parallel threads depending upon the number of cpu resources available)

4) optimized and integrated – the program has been modified to deliver new or optimized capabilities in a vm.  
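As an illustration of level 3, here is a sketch of a worker that re-reads the available CPU count on every batch and sizes its thread pool to match; the `process_batch` function and its squaring workload are hypothetical examples, not anything from a real product:

```python
# Sketch of a "dynamic reconfiguration aware" worker: it re-reads the
# CPU count for every batch and sizes its thread pool to match, so it
# adapts if the vm gains or loses virtual CPUs between batches.
# process_batch and the squaring workload are hypothetical examples.
import os
from concurrent.futures import ThreadPoolExecutor

def process_batch(items):
    workers = os.cpu_count() or 1        # re-check resources on each batch
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda x: x * x, items))

print(process_batch([1, 2, 3, 4]))
```

A level 1 (static) program would read the CPU count once at startup and fail to exploit CPUs added to the vm later; re-checking per batch is the simplest form of adaptation.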

some questions to ask are: how frequently do changes occur? how radical are they? do they fluctuate frequently or gradually over time?

We currently support the Xen port on the Intel architecture. Xen supports the same software on the Power series chips. Oracle could support this, but there has not been sufficient demand for it to happen. There is no technical reason why Oracle does not support the Power series chips.

One change that has occurred is that OracleVM supports hardware partitioning for systems. In a configuration file you can restrict the number of processors that a system can use. This has huge implications for the Oracle database license cost. Suppose you purchase an 8 CPU system and create three hard partitions of 2 CPU, 2 CPU, and 4 CPU. If you run the Oracle database on one of the 2 CPU partitions, you only need a 2 CPU license for the database and not the 8 CPU license required before. This is a change in policy that should go up on the web site this month.
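As a sketch of what such a restriction looks like, a Xen-style guest configuration can cap and pin the virtual CPUs. The parameter names below follow Xen 3.x vm.cfg syntax; the exact form OracleVM requires for licensing purposes should be confirmed against Oracle's partitioning policy document:

```
# illustrative vm.cfg fragment: guest hard-limited to 2 CPUs
vcpus = 2          # guest sees only two virtual CPUs
cpus  = "0,1"      # pin those VCPUs to physical cores 0 and 1
memory = 2048
```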

We have seen a slowdown for VMWare running Oracle benchmarks. Unfortunately, we cannot state how much, because the VMWare license prohibits us from stating the size of the slowdown and what benchmarks have been run. We have seen a slowdown of 10% for a Linux paravirtualized kernel on the same benchmarks running in OracleVM. We cannot publish these benchmarks because of the VMWare licensing restrictions but are looking at publishing tpc-like benchmark numbers for OracleVM.

There are a variety of ways to share disk on a virtual server: Raw Disk, VMFS, OCFS. It is recommended to go with Raw Disk, because a database does not run well on a clustered file system; it does not like waiting for locks or for synchronization.
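For reference, the choice shows up in one line of the guest configuration. In Xen vm.cfg syntax, a file-backed image versus a raw device looks roughly like this (the device paths are hypothetical):

```
# file-backed virtual disk (lives on whatever file system holds /OVS)
disk = ['file:/OVS/running_pool/84_test01/System.img,hda,w']

# raw physical device handed straight through to the guest
disk = ['phy:/dev/sdb1,hda,w']
```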

2007 year in review

Picking up the idea to present “my blog-year in review”, by taking just the first complete sentence of each month’s post. 37 posts for a year isn’t much. I promise to do better this year. It also appears that the first post of every month is not my most interesting topic. My first sentence of every month is not very topical either. I once heard that if you lose someone in the first five minutes, you will never get them back. I need to work on better opening sentences (make them like a newspaper lead) and be more consistent with posting stuff.

January – 4 posts – I was reading a blog earlier today and the author wrote about how many machines that she used today.
February – 9 posts – for the next few weeks I am going to look at two topics
March – 3 posts – in playing with our new library system, I got distracted by questions from customers.
April – 2 posts – so for the past few days I have been playing with contentDB installation.
May – 3 posts – It is amazing how friendships help business relations.
June – 1 post – My wife and I agree that there are two things that we need to stay married.
July – 0 posts
August – 4 posts – I have had a discussion at a number of customers of how to expand their existing footprint.
September – 0 posts
October – 0 posts
November – 6 posts – Some things that I found interesting (at OpenWorld)…..
December – 5 posts – Ok, time to get back to an old topic that I need to finish. My kids school needs a library system.

New VM Hardware – part 2

Now that I have a system that will support HVM modules, I can create vm images using an iso. If I want to create a vm from a template, I can also do that.

1) download the templates from otn.oracle.com. Let’s start with the HVM Small x86. This is a 4G installation with 1 CPU and 1G of memory.

2) import the template into the vm master admin page. This is done by logging in as admin and going to Resources -> Virtual Machine Templates. If we click on the Import button on the top right, we can import from a URL or from a seed_pool directory on the vm server. We will first do an internal virtual machine template. This is very similar to importing an iso. You select the Server Pool Name, then the Virtual Machine Template Name, then the Operating System. Along with this you enter a vm username and password for managing this instance. If you create a directory in the /OVS/seed_pool directory called test01, you get that option as the vm template name in the pull down menu. The directory is expected to include a vm.cfg and system.img file, where the img file is the disk for the template. If a vm.cfg file does not exist, you will get a Cannot obtain memory size from vm.cfg error and be stopped from continuing. If you have a vm.cfg file but no system.img file, you will get a Cannot obtain disk name from vm.cfg error. If you have a system.img file, it allows you to create a template with a 1MB disk file. You can approve and use this test01 template if desired, even though it will not do anything. If we use the small HVM template, it generates an OVM_EL4U5_X86_HVM_4GB template with a 6145 MB disk. When I imported it, it was configured to be Red Hat Enterprise Linux 4 with a memory size of 1024 MB. Once this is imported and activated, it is ready to run.

3) We can import from an external URL, and it operates the same way as the iso import: the files are copied to the /OVS/seed_pool directory on the vmserver and made ready for use, just as was done with the iso files.

4) at this point we are ready to create a virtual machine from a template. We will create a vm with the 4GB HVM template. To do this we go to the Virtual Machines tab and click on the Create Virtual Machine tab. We create a virtual machine based on a virtual machine template. Since we only have one pool, we select the vmserver-1 pool name. We select the OVM_EL4U5_X86_HVM_4GB template. We can expand the + next to the show link to see the os and memory allocation as well as the number of CPUs. Once we select this template we give it a machine name and console password. Once we confirm our creation with the name 4GB_HGM, we see a Creating status for the virtual machine. If we go to the vm server we see a directory /OVS/running_pool/90_4GB_HGM being populated. The two files that are created are the system.img and vm.cfg files. This does take a while, because the system consumes 6GB of disk space according to the Size parameter that we see in the virtual machine screen. When the virtual machine is created it is placed in a Powered off state.

5) At this point we have two virtual images, one we created with an iso and one that we created with a template. If we run the template based vm we can manage it from the console. When we start and initialize the console we see the Linux boot sequence as expected. The system boots up to the localhost login: prompt with the banner that it is running Enterprise Linux AS release 4 (October Update 5) with Kernel 2.6.9-55.0.12.100.1.ELsmp. Note that this can be, and typically is, different from the kernel running in dom0.

6) If we want to look at the domains that are running on our vm server we can execute the xm list command. This returns two domains, 90_4GB_HGM with id of 9, mem of 1024, VCPUs of one and state of b as well as Domain-0 with id of 0, Mem of 512, VCPUs of 2 and State of r.
7) from the console we can login to the Linux installation with username oracle and password oracle. This takes us into a command line interface. We can do things like df -k to see that there is a 4GB disk (1.8 GB used and 2GB available) along with a 512 MB swap area. If we look at the eth0 interface we can see that it got assigned one of the dhcp addresses available from the dhcp server. For our instance we got assigned 192.168.1.114. We can ping and ssh into this box from our windows desktop and from the vm master. This os image also has X windows installed on it. We can start the windowing system by typing startx.

8) we can monitor the virtual machines that are running on this vm server using the
$ xm top
command. This lists the domains that are running as well as the one that is consuming the most resources. If we want to look at the console for this new machine we can do this by typing
$ xm console 90_4GB_HGM
This brings us to the console for the Linux command line interface. I was able to get X windows running in the console from the vm manager console. When I tried to start X windows from the laptop screen of the vm server I could not, because X11 was already running on this box. When I shut down the Linux operating system, the laptop display brought me back to the dom-0 console. When I disconnected the old console from the vm image, I was able to reconnect to the console through the VM Manager console and watch the system boot up. I could not see the console using the xm console 90_4GB_HGM command until the system finished booting. When I tried to get the X11 system up and running on the laptop I got the error message “PAM authentication failed, cannot start X server. Perhaps you do not have console ownership?”. I will track this down later, but it appears that I can start the X windowing system from the VM Master console but not the laptop console. When I look at the users logged in with the “w” command, I see that the vm manager console appears on tty1 and the laptop console appears on ttyS0. It looks like I might be able to start the X windowing system, but a trick might be involved.

9) I tried a similar thing with the Windows installation. I started the WinXP instance and could attach to the console through the VM Master console. I could verify that the network got assigned to 192.168.1.113 as expected. I could activate and register Windows appropriately. Everything looks good, but the concept of starting a second console on Windows did not seem to work. When I typed xm console 82_winxp_test it failed the connection. Actually, it just sat there and I did not know how to get back to the Domain-0 console without halting the WindowsXP instance. I’m sure that there is a way to get back, but it appears that there is no way to open the Windows desktop through anything other than vnc. I guess that I will have to play with getting X11 installed on domain-0 or trying to see if I can get the console re-routed to the laptop screen.

New VM Hardware and renewed effort

After weeks of waiting and pain in backing up my laptop, I got a new Dell Latitude D630. This system has the new virtualization hardware so that I can do a HVM (hardware virtualized machine). I tried doing this with a Dell D620 but the hardware did not support virtualization.

1) backup the existing laptop. I wanted to make sure I didn’t lose the WindowsXP image since they are getting harder and harder to find these days.

2) remove the disk that is in the laptop and put in another disk. I really believe in backing up an image that works. I didn’t want to scratch anything that works because getting it to work on the corp network is a real pain.

3) boot the laptop into the BIOS and turn on virtualization. By default it is turned off. It was a little difficult to find, but it was under the system settings and was clearly labeled virtualization. I had to walk through all options to find it.

4) boot the laptop from the VM Server cd. This allows me to configure and start the VM server on the laptop. The system comes up in a Linux kernel with no X Window operating system. The command line login: is the only friendly prompt presented.

5) configure the VM Master to recognize the VM Server as a system that I can manipulate. I also made this system the pool master and utility server. I typically would have made another machine serve these roles, but given that I only have two laptops I did what I could. In retrospect, it would have been nicer to have the VM Master be the utility and pool master, but that isn’t possible since the pool master has to be part of the pool; making the VM Master just the utility server would still have been nice.

6) download the HVM and PVM templates from the otn.oracle.com site. I downloaded all of them so that I could play with them.

7) download the Linux iso files so that I can play with installing and configuring virtual machines from scratch if needed and create my own templates.

8) convert a Windows installation CD into an iso so that I can boot from it as well. I did this on the VM Server by using the following command
$ dd if=/dev/cdrom of=WinXP.iso
When the command finishes I have a bootable copy of the Windows XP CD and just need to keep the product key handy for the installation.

9) Just for grins I also downloaded the Solaris 10 x86 media and the Knoppix 5.1.1 media so that I can test different operating systems.

10) Once I had all the software downloaded, I had to copy the templates and iso images to my VM Master. The configuration that I have is as follows

home computer ------> hardware router ------> internet connection
                            ^
                            |
      VM Master ------> wireless hub
          |
          | (wired network)
          |
      VM Server

The ip address used by the home computer is 192.168.1.100. The VM Master is configured to be dhcp on the wireless hub and appears as 192.168.1.103. I wanted the VM Server to also work on the wireless hub but the installation did not see the wireless network when I installed the vm software. I configured the VM Master on the hardware connection to be at 192.168.2.222 and the VM Server to be at 192.168.2.200. These addresses need to be static and I wanted to see if I could use both the static and wireless interfaces on two different networks. The benefit of this is that I should be able to use the hardwired network for private communications and the wireless network for internet connection. The drawback to this configuration is that the VM Server did not recognize the wireless network so any communications to this system had to happen through the VM Master. I think that this is a problem that I can solve but I have not spent the required time to get it working. I have gotten spoiled with the GUI interfaces in Linux and forgot how to plumb a network interface and enable it from the command line. I think that I will try this after I get a few operating systems installed and working.

11) now that I have the iso images and templates on the VM Master, I need to configure the images to work on the VM Server. To do this I go into the console (http://localhost:8888/OVS) and login as admin. I created an account for myself and gave the account admin privs, but I do not see the resources or approvals listings for images imported. I need to research this more, because I do not want to have everything go through the admin account for iso and template approvals. To get the images to the VM Server I either need to ftp the images there, configure the VM Master to be an http server, or configure the VM Master to be an ftp server. I copied some of the images to the /OVS/iso_pool and /OVS/seed_pool directories so that I could import a local ISO or local template. This is done by going to the Resources tab, selecting the ISO Files sub-tab and clicking the import button. You get the option of External ISO or Internal ISO. If you pick internal, it looks in the directory structure of /OVS/iso_pool and picks a directory there to import. If there are no new directories that have not already been imported, nothing will be shown to you. When you select internal iso you get a screen that shows Server Pool Name, ISO Group, ISO Label, and Description. It is important to note that the order of selection is critical. You cannot select an ISO label before you have selected a pool name and ISO group. You need to select the Pool Name first. When you pull down this menu, it shows all of the pool servers that you have configured. In my case, there is one option, VMServer-1. Once I select this, the ISO Group pull down gets populated. If there are no new directories or iso files, you are not presented with an option and need to logout and log back in to clear the logic. This is a little crazy but it works. If you cancel, the cookies and scripting mechanism do not get properly cleared.
If there are new iso directories and iso files in the iso_pool directory, the ISO Group label will reflect the directory name. You can easily test this by typing the following commands
$ cd /OVS/iso_pool
$ mkdir test
$ cd test
$ touch test.iso
When you go into the ISO Internal import you should see VMServer-1 as the server pool name, test as the iso group, and test.iso as the iso label. The directory that we created, test, is effectively the ISO group. The file that we created, test.iso, is the ISO label. We can create more files in this directory and associate them with the iso group. This is typically the case when you download the four or five disks that make up a Linux distribution. If you can download a DVD image of the distribution, you will typically have only one file associated with an iso group. If there are multiples, you need to associate multiple iso labels with the iso group.

12) Now that we have a successful iso defined with a local import, let’s look at importing from an external iso. To test this, I configure the vmmaster to be an http server. To do this I configure the apache server that is installed with a typical Linux installation. I had to rename the /etc/httpd/conf.d/ssl.conf file to /etc/httpd/conf.d/ssl.conf.orig to disable the ssl connection. I didn’t want to hassle with certificate configuration and just wanted to get a simple http server up and running. I then created a link in the /var/www/html directory to point to the directory that has all of the iso images. This was done with ln -s /oracle/vm /var/www/html/vm. It is important to note that this worked because the /var and /oracle directories are on the same disk. If they were not, I would have to create a hard link or copy the files to the /var disk. Once I got this configured I was able to start the http server by executing the commands
$ mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.orig
$ ln -s /oracle/vm /var/www/html/vm
$ sudo /etc/rc3.d/K15httpd start
The start responds with an error or an OK. The OK shows that the apache server is up and running. At this point I can go to a web browser on my home computer or on the vmmaster, go to http://vmmaster/vm, and see a directory listing of the iso images.

13) Once I have the iso images available from http, I then go back to the external iso import. This menu option presents a server pool name, an iso group, an iso label, a url, and a description. The pool name is a pull down and only gives us the option of VMServer-1 since we have only one pool server. The ISO Group, ISO Label, and URL are text fields that we need to enter. I executed the following commands on the vm master to configure a dummy iso file as we did earlier on the vm server.
$ cd /oracle/vm
$ mkdir junk
$ touch junk/junk.iso
At this point we should be able to see the iso file by going to http://vmmaster/vm/junk and see the file of zero length. Just as a test, let’s enter test2 as the ISO group and test2 as the ISO label, using http://vmmaster/vm/junk/junk.iso as the URL. When I do this I get a confirmation screen and get routed back to the list of iso files. In this we see the ISO Label of test2, ISO group of test2, File Name of test2.iso, and Status of Importing. When we refresh we see a status of Pending. We can approve this by clicking on the radio button to the left of the ISO label and selecting the Approve button above it. Once we approve it we see a Status of Active. The directory structure created on the vm server is /OVS/iso_pool/test2 with a file named test2.iso. It is important to note that even though we labeled the directory junk and the file junk.iso on the vm master, the file comes across as test2 and test2.iso on the vm server. I was a little surprised by the external import function. I thought it would just register the location of the iso file and allow me to use it across the network. This is not the case. It actually copies the file from the URL location into the iso_pool directory. I would have preferred that it leave the file on the other server and save room for the vm images on the vm server.

14) Now that I have a proper iso image, I can create a virtual machine. This is done by going to the Virtual Machines tab and clicking on the Create Virtual Machine button at the top right. You are given the choice of installing from a template or media. We will choose to install via the media. The machine will go into the VMServer-1 server pool (since this is the only one that we have). We will select an iso file to boot from. In this instance I select the WinXP.iso file just to see if the Windows media installs properly. I also have the option of installing a fully virtualized vm or a paravirtualized machine. For Windows I have to pick fully virtualized. The next screen allows us to name our vm, allocate memory, cpu, and disk, as well as set our console password. For this example I will create a 1 CPU system with 1G of memory and 8G of disk. Before I create the image, I make sure that I have 8G of disk space in the /OVS directory on the vm server. The file will be created in the /OVS/running_pool with the name I specify prefixed with a number. This number is generated by the vm master to uniquely identify an image just in case someone picks a previously used name. In this example I pick the Virtual Machine Name of test01, Operating System of Other, 1 CPU, 1024 of memory, 8096 of disk, and set the password to something I can remember. I also check the network interface card presented. In my case there is one and only one choice, so it does not matter. Once I confirm this I get dropped back in the Virtual Machines tab and have a status of Creating for the new machine. This creates activity on the vmserver constructing the /OVS/running_pool/84_test01/System.img and /OVS/running_pool/84_test01/vm.cfg files. The System.img file is the disk for the virtual machine. Everything is kept in this file, and it can be moved from system to system as needed to run the vm on another hardware box.
The vm.cfg file defines the operating parameters for the virtual machine, including the device model, disk location, kernel loader, and memory to allocate. It also associates a network interface and vnc parameters for the remote console. The vncpassword is stored in clear text in this file, so it is important to make sure that your root password is not the same as or a derivative of this password. The generation takes a while, and when it is finished some audit logs are displayed on the vm server console and the status changes from Creating to Running.
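A minimal sketch of what such a generated vm.cfg might contain, following Xen 3.x HVM syntax; the paths and values here are illustrative, not copied from an actual generated file:

```
# illustrative vm.cfg for an HVM guest (Xen 3.x field names; values hypothetical)
name = 'test01'
builder = 'hvm'
kernel = '/usr/lib/xen/boot/hvmloader'
device_model = '/usr/lib/xen/bin/qemu-dm'
memory = 1024
vcpus = 1
disk = ['file:/OVS/running_pool/84_test01/System.img,hda,w']
vif = ['type=ioemu, bridge=xenbr0']
vnc = 1
vncpasswd = 'secret'   # stored in clear text -- keep it distinct from root's password
```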

15) Now that I have a running virtual machine, I need to connect to the console. To do this I use the browser on the vmmaster. In the Virtual Machines tab I select the test01 vm and click on the Console button. When I first tried this on the VM Master box, I got a plug-in required message. I tried to download the plugin but did not find one. From the documentation I realized that I needed to configure the Firefox browser with the proper plugin and restart the browser
# cp /opt/ovm-console/etc/mozpluggerrc /etc/
# cp /opt/ovm-console/bin/* /usr/bin
# cp /opt/ovm-console/lib/mozilla/plugins/ovm-console-mozplugger.so /home/oracle/firefox/plugins
I had to use the /home/oracle/firefox directory because I linked this installation to the browser launch button. I did not want to use the default browser that comes with Linux and preferred to use Firefox, which I downloaded and installed separately. I tried to do the same thing from my home desktop, which is running WinXP, but the connection failed because I did not have a good java installation. I decided to work on this later.

16) once I got the console up and running (and the right password entered), I saw the boot screen for Windows XP. Everything worked fine until it came time to reboot the first time. The system went through all of the questions and selections and started to reboot. When this happened, something went wrong with the console connection. The vm never rebooted properly and I had to halt the vm, disconnect from the console, and restart the vm. I would have liked to keep the console up and running and watch the vm restart, but I could not. If I have the console up and shut down the virtual machine, the console keeps showing what it was showing before the shutdown. I would have expected to see the shutdown procedure or a message that the virtual machine was down, but it remained a static image of what happened before the shutdown. To fix this I had to exit the console, restart the virtual machine, and restart the console. This isn’t a big issue, just something that I didn’t expect.

17) Once I got everything running the way I wanted after a reboot, I was able to shut down the windows image into hibernation state as well as power off state. The resume from hibernation took me to the Windows login as expected. If I select Turn Off Computer from Windows, I see the Windows is shutting down note, and from the vm master it appears that it is still running properly. I never see the shutdown or see the virtual machine go to a power off condition. This is a little disconcerting because I don’t know when it is ok to power off the machine. If Windows is still saving state, I might lose something with a power off. It would be nice if the operating system power off would signal something to the vm master and put the machine in a power off state. This might be caused by the vm.cfg file parameters. I currently have on_crash and on_reboot set to restart. I might need to set them to something else or set another parameter to something different to power off the vm.
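For reference, the relevant vm.cfg knobs are the lifecycle event handlers. In standard Xen semantics, setting the power-off handler to destroy should tear the domain down when the guest completes a clean ACPI shutdown; whether the vm master then reflects this as a Powered Off state is an assumption worth testing:

```
# illustrative vm.cfg lifecycle settings (standard Xen semantics)
on_poweroff = 'destroy'   # tear the domain down on a clean guest shutdown
on_reboot   = 'restart'
on_crash    = 'restart'
```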

18) I was also able to delete the vm image once it was powered down. If the image is running (even though the OS is shut down) I am not given the option of delete. I can only power off or launch the console. If I shutdown Windows, exit the console, and try to relaunch the console I get a Connection to vnc server failed: Connection refused. This is a good indication that Windows has finished, but something more elegant would be appreciated. At this point I can power on the vm, delete it, clone it, or save it as a template. If I try to do something like pause a powered off vm, I get an error message that the Virtual machine is not ready to be paused. I can clone the powered off vm. I just have to enter a new name, the number of copies, the pool server, and the group name that I want to copy it to. I see this being something very useful for development. I can work on something and create a golden master of a release. I can then clone the os and application installation and branch the image for other people. That way we don’t corrupt each other’s test environments. In my instance I chose to create a virtual machine name of test020 with one copy into group name test. When I get back to the Virtual Machine page I see that winxp_test is in a state of cloning and test020 is in a state of creating. If I look on the vm server I see /OVS/running_pool/86_test020 being populated with a System.img and vm.cfg file. Once the audit record is displayed on the screen I can see that both images are in a powered off status. This process takes a while because the 8G System.img file is copied to the new directory.

more later…..

Career Advancement

I have recently found linkedin.com, a site that allows you to create a list of friends, associates, and former co-workers. It is an interesting site. I have been exchanging emails with people that I haven’t talked to in years. I have a handful of people that I talk to on a regular basis either through google talk or email but that number has been shrinking over the years. There are some people that I talked to on a daily basis when we worked together but once the physical proximity stopped, we stopped talking as much. Now that we are moving to a hotel type office where you check out a cubicle and store everything in a drawer, I don’t see myself going to the office as much. That plus the $10 per day parking fee limits my desire to drive downtown. This made me think about the people I work with differently. The casual meetings to talk about a football team or a tv show just won’t happen as much. When I go into the office it will be for a purpose and not for social reasons.

Linkedin got me thinking about how I communicate with people and what I typically say. It also got me thinking about how people’s careers have changed or not changed over the years. A few of the people that were my peers are now a CTO, CEO, or owner of their own company. This isn’t true for everyone. I was reading Adrian Cockroft’s blog and I think that success can be directly associated with the amount of sharing that you do.

If you collect information and share it freely, you will become an expert. You become a magnet for questions, issues, and information. Some people tightly control their expertise; this is a huge mistake, as it gives others incentives to look for alternative experts or to become experts themselves. If you give out your expertise freely, you become the go-to person in your field and gather many more recommendations from the people you help. Try it!

I have tried this and it is true. If people come to you for answers and you share the questions with others, more people will come to you because they know that you researched this before and probably know the answer. When I was at Sun, my manager challenged me to become an expert on any topic of my choosing. I did this and was voted the Solaris expert by my peers because I asked and answered more questions than anyone else. It didn’t mean that I knew more than everyone, or even anyone; it just meant that I was willing to risk it and share. The people who I respected for their ability to share are the ones that are now the executives or own their own businesses. True, some people changed careers completely, but the majority are doing something similar to what they did 10 years ago. Some have become managers; some are doing exactly the same thing, just for another company. I personally don’t think that doing the same thing is a bad thing; it is what I am doing. I have been given the chance to start a new company or start a company of my own. I always hesitated when it came to taking the plunge because I understood the price my family would pay for that level of effort. I read somewhere that one of the astronauts who walked on the moon is still married and the rest are divorced. My theory is that they gave their all to reaching the moon and coming back safely at the expense of their family life. I am not willing to make that sacrifice for the eventual reward of a position.

Enough waxing and waning. I recommend that everyone look at the network of friends and co-workers that you have and ask two questions. First, am I open with information and ideas, or do I hold information back and leave procedures and programs undocumented so that I can be the expert? Think of it differently: if you share all of this information, many more people will come to you and give you more knowledge than you could find on your own. Second, are the people you work with people that you want to keep in touch with ten years from now? What is it that keeps you together? Is it really just the Monday morning meeting where you dive for your office quicker than the coffee gets cold? Is it working on a project or idea? Is it that your kids are in school together? How will you keep this conversation going ten years from now? Are you building a good foundation and a relationship that provides value to both of you?

book review – Virtualization with Xen by David E. Williams

As part of my learning process, I ordered “Virtualization with Xen” by David E. Williams and Juan Garcia. The book focuses on virtual machines based on the Xen technology.

An outline of the book is:

1. Introduction to Virtualization

 – what is virtualization, why virtualize, how does virtualization work, types of virtualization, common use cases for virtualization

2. Introducing Xen

 – What is Xen, Xen’s Virtualization model, CPU Virtualization, Memory Virtualization, I/O Virtualization

3. Deploying Xen

 – Installing Xen on Free Linux, Installing the XenServer product family, other Xen installation methods, Configuring Xen

4. The Admin Console

 – Native Xen command line tools, XenServer Admin Console, Using the admin console

5. Managing Xen with Third Party Management Tools

 – Qlusters openQRM, installing openQRM, Enomalism, Project ConVirt and XenMan

6. Deploying a Virtual Machine in Xen

 – workload planning and VM Placement, installing modified guests, installing unmodified guests, installing windows guests, physical to virtual migrations of existing systems, P2V migration, importing and exporting existing VMs

7. Advanced Xen Concepts

 – advanced storage concepts, advanced networking concepts, building a Xen cluster, XenVM migration, XenVM backup and recovery, full virtualization in Xen

8. Future of Virtualization

 – unofficial Xen road map, virtual infrastructure in tomorrow’s datacenter

Appendix A – glossary of terms

Appendix B – other virtualization technologies and how they compare

In my opinion, this is a good background book. It does help sort out some of the technology behind the OracleVM, but much of the book does not apply. The cost justification argument is an interesting discussion but difficult to apply. The claim that you can save in power and cooling cost per year by using two servers instead of five servers that are 15% utilized is a difficult argument to follow. In most data centers, the servers will not be turned off for a long period but retasked to another application. And if an application server or web server were 15% utilized, clustering at the software layer would typically be done instead. This is probably the weakest part of the discussion, but overall the book is a very good technical book. The discussion on consolidation, reliability, and security is a good one; few people talk about security when they talk about virtualization. The description of memory and IO management is also very good. Inside the Xen kernel is a way of partitioning IO so that one domain does not dominate all of the IO, and the same is true for memory. The installation and configuration chapter is a good discussion that applies to the OracleVM, but the chapters on the Admin Console and other tools do not really apply. The chapter on third-party management tools also does not apply to the OracleVM installation. The chapter on deploying a virtual machine in Xen assumes that you are using the Xen console and shows procedures and processes specific to this type of installation; it does not map directly to the OracleVM installation process and procedures. The advanced Xen concepts chapter is probably the gem of this book.
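The power-and-cooling math itself is easy to sanity-check. A back-of-the-envelope sketch of the two-servers-versus-five argument, where the wattage, electricity price, and cooling overhead are illustrative assumptions of mine rather than figures from the book:

```python
def annual_power_cost(servers: int, watts_per_server: float,
                      price_per_kwh: float,
                      cooling_overhead: float = 0.5) -> float:
    """Yearly electricity cost: power draw * hours per year * price,
    plus a cooling cost proportional to the power drawn."""
    hours_per_year = 24 * 365
    kwh = servers * watts_per_server * hours_per_year / 1000.0
    return kwh * price_per_kwh * (1 + cooling_overhead)

# Illustrative assumptions: 400 W per server, $0.10/kWh, 50% cooling overhead.
before = annual_power_cost(5, 400, 0.10)   # five lightly loaded servers
after = annual_power_cost(2, 400, 0.10)    # two consolidated servers
savings = before - after
```

Even when the arithmetic works out, the objection above stands: the savings only materialize if the three freed servers are actually powered off rather than retasked to another application.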

Overall, the $60 for this book was money well spent. 200 of the 350 pages are relevant and well written. The two CDs that are included with the book contain a bootable image to install Xen on your computer as well as documentation and Linux packages for the management console. The distribution comes with Xen 3.2.0 and the Linux tools discussed in the book.