next generation of compute services

Years ago I was a systems administrator at a couple of universities and struggled to keep systems operational and supportable. The one thing that frustrated me more than anything else was how long it took to figure out how something was configured. We had over 100 servers in the data center, and on each of these servers we had departmental web servers, mail servers, and various other servers for the student and faculty users. We standardized on the Apache web server, but there were different versions, different configurations, and different additions to each one. This was before virtualization and golden masters became a trendy topic, and things were built from scratch. We would put together Linux servers with Apache web servers, PHP, and MySQL. These combinations later became known as LAMP servers. Again, one frustration was the differences between the versions, how they were compiled, and how they were customized for a department. It was bad enough that we had different Linux versions, but we also had different versions of every other software component. Debugging became a huge issue because you first had to figure out how things were configured, then you had to figure out where the logs were stored, and only then could you start looking at what the issue was.

We have been talking about cloud compute services. In past blogs we have talked about how to deploy an Oracle Linux 6.4 server onto compute clouds in Amazon, Azure, and Oracle. All three look relatively simple. All three are relatively robust. All three have advantages and disadvantages. In this blog we are going to look at using public domain pre-compiled bundles to deploy our LAMP server. Note that we could download all of these modules into our Linux compute service using a yum install command. We could figure out how to do this or look at web sites like digitalocean.com that provide tutorials on how to do this. It is interesting, but I have to ask why. It took about 15 minutes to provision our Linux server. Doing a yum update takes anywhere from 2-20 minutes based on how old your installation is and how many patches have been released. We then take an additional 10-20 minutes to download all of the other modules, edit the configuration files, open up the security ports, and get everything started. We are close to 60 minutes into something that should take 10-15 minutes.
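For reference, here is a minimal sketch of the manual path on Oracle Linux 6, assuming the stock yum repositories (package and service names are the usual EL6 defaults):

```bash
# Manual LAMP setup on Oracle Linux 6 -- the steps Bitnami automates.
sudo yum update -y                                   # 2-20 minutes
sudo yum install -y httpd php php-mysql mysql-server

# Start the services and enable them at boot
sudo service httpd start && sudo chkconfig httpd on
sudo service mysqld start && sudo chkconfig mysqld on

# Open the web port (EL6 uses iptables rather than firewalld)
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo service iptables save
```

Every one of those steps is a chance for two servers to drift apart, which is exactly the old LAMP problem.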

Enter stage left, bitnami.com. This company does exactly what we are talking about. They take public domain code and common configurations that go a step beyond your basic compute server and provision these configurations into cloud accounts. In this blog we will look at provisioning a LAMP server. We could just as easily have configured a wiki server, a tomcat server, a distance education moodle server, or any other of the 100+ public domain configurations that bitnami supports.

The first complexity is linking your cloud accounts into the bitnami service. Unfortunately, the service is split into three different sites: oracle.bitnami.com, aws.bitnami.com, and azure.bitnami.com. The Oracle and Azure account linkages are simple. For Oracle you need to look up the REST endpoint for the cloud service. First, you go to the top right and click the drop down for account management.

From this you need to look up the REST endpoint in the Oracle Cloud console by clicking on the Details link from the main cloud portal.

Finally, you enter the identity domain, username, password, and endpoint. With this you have linked the Oracle Compute Cloud Service to Bitnami.
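If you want to sanity-check those values before handing them to bitnami, the Compute Classic REST API has an authenticate call you can hit directly. A sketch with a hypothetical endpoint and identity domain; substitute the values from your Details page:

```bash
# Hypothetical endpoint and identity domain -- use your own values.
ENDPOINT=https://api-z999.compute.us0.oraclecloud.com
DOMAIN=myIdentityDomain

curl -i -X POST "$ENDPOINT/authenticate/" \
  -H "Content-Type: application/oracle-compute-v3+json" \
  -d "{\"user\": \"/Compute-$DOMAIN/user@example.com\", \"password\": \"MyPassword\"}"
# An HTTP 204 with a Set-Cookie header means the credentials check out.
```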

Adding the Azure account is a little simpler. You go to the Account – Subscriptions pull-down and add an account.

To add the account, you download a certificate from the Azure portal, as described on the bitnami.com site, and import it into the azure.bitnami.com site.
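As background, classic Azure account linkages ride on self-signed management certificates. The bitnami flow hands you the certificate, but if you ever need to generate one yourself the usual openssl pattern looks like this (file names are my own placeholders):

```bash
# Self-signed management certificate, classic Azure pattern.
# The .pem (which includes the private key) stays with the client;
# the .cer is what gets uploaded under Management Certificates.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout azure-mgmt.pem -out azure-mgmt.pem -subj "/CN=bitnami-link"
openssl x509 -inform pem -in azure-mgmt.pem -outform der -out azure-mgmt.cer
```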

The Amazon linkage is a little more difficult. To start with, you have to change your Amazon account according to the Bitnami instructions. You need to add a custom policy that allows bitnami to create new EC2 instances. This is a little difficult to understand initially, but once you create the custom policy it becomes easy.
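A sketch of what such a policy might look like, created with the aws CLI. The broad ec2:* grant and the policy name are my own stand-ins; follow the Bitnami instructions for the exact action list:

```bash
# Illustrative custom policy -- the exact actions Bitnami requires
# are listed in their instructions; ec2:* is a broad stand-in.
cat > bitnami-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["ec2:*"], "Resource": "*" }
  ]
}
EOF
aws iam create-policy --policy-name bitnami-ec2 \
  --policy-document file://bitnami-policy.json
```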

Again, you click on Account – Cloud Accounts to create a new AWS linkage.

When you click on create new account you get the option to enter an account name plus the access key and secret key for your AWS account.
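Rather than handing over the keys on your primary account, one option is a dedicated IAM user that carries only the custom policy and can be revoked on its own. A sketch, assuming the bitnami-ec2 policy from above (the user name and account number are placeholders):

```bash
# Hypothetical dedicated user so the keys given to bitnami are
# scoped to the custom policy and independently revocable.
aws iam create-user --user-name bitnami-link
aws iam attach-user-policy --user-name bitnami-link \
  --policy-arn arn:aws:iam::123456789012:policy/bitnami-ec2
aws iam create-access-key --user-name bitnami-link
# The output includes the AccessKeyId and SecretAccessKey to paste
# into the bitnami form.
```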

I personally am a little uncomfortable providing my secret key to a third party because it opens up access to my data. I understand the need to do this, but I prefer using a public/private ssh key pair to access services and data rather than a vendor-provided key, and handing that key to a third party seems even stranger.

We are going to use AWS as the example for provisioning our LAMP server. To start, we go to http://aws.bitnami.com and click on the Library link at the top right. We could just as easily have selected azure.bitnami.com or oracle.bitnami.com and followed this exact same path. The library list is the same, and our search for a LAMP server returns the same image.

Note that we can select the processor core count, disk size, and data center that we will provision into. We don't get much else to choose from, but the service does the configuration for us and provisions the server in 10-15 minutes. When you click the create button you get an updated screen that shows progress on what is being done to create the VM.

When the creation is complete you get a status listing as well as the password for the application's web interface, if it has one (in this case Apache/PHP), and an ssh key for authenticating as the bitnami user.

If you click on the ppk link at the bottom right you download the private ssh key that bitnami generates for you. Unfortunately, there is no way to upload your own key, but you can change the keys after the fact for the users that you will log in as.
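Swapping in your own key after the first login is a one-liner on the server. A sketch, assuming the usual OpenSSH defaults (the key text is a placeholder):

```bash
# Append your own public key so the bitnami-generated key can be retired.
echo "ssh-rsa AAAA...your-public-key... you@laptop" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# To retire the generated key, delete its line from the same file.
```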

Once you have the private key, you get the ip address of the service and enter it into PuTTY on Windows or ssh on Linux/Mac. We will be logging in as the user bitnami. In PuTTY, we load the ssh key into the SSH – Auth option at the bottom right of the menu system.

When we connect we initially get a host key warning, but we can accept it and execute common commands like uname and df to see how the system is configured.
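On Linux/Mac the downloaded .ppk needs to be converted to OpenSSH format first. A sketch, assuming the putty-tools package and a made-up file name:

```bash
# Convert the downloaded .ppk to OpenSSH format (puttygen comes
# with the putty-tools package on most Linux distributions).
puttygen bitnami-lamp.ppk -O private-openssh -o bitnami-lamp.pem
chmod 600 bitnami-lamp.pem

# Connect as the bitnami user, then poke around.
ssh -i bitnami-lamp.pem bitnami@<ip-address>
uname -a        # kernel version and architecture
df -h           # attached disks and mount points
```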

The only difference between the three interfaces is the shapes that you can choose from. The Azure interface looks similar. Azure has fewer options for processor configuration, so it is shown as a list rather than a sliding scale that changes the processor options and price.

The oracle.bitnami.com create virtual machine interface does not look much different. The server selection is a set of checkboxes rather than radio buttons or a sliding bar. You don't get to choose which data center you are deployed into because this is tied to your account. You can select a different identity domain, which will list a different data center, but you don't get a choice of data centers as you do with the other services. You are also not shown how much the service will cost through Oracle. The account might be tied to an un-metered service, which comes in at $75/OCPU/month, or might be tied to a metered service, which comes in at $0.10/OCPU/hour. It is difficult to show this from the bitnami provisioning interface, so I think they decided not to show the cost as they do with the other services.
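For a rough comparison of the two rates: a 31-day month is 744 hours, so the metered $0.10/OCPU/hour works out to about $74/OCPU/month for an instance that runs around the clock, essentially the same as the $75/OCPU/month un-metered rate.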

In summary, using a service like bitnami for pre-configured and pre-compiled software packages is the future because it has time and cost advantages. All three cloud providers have marketplace vendors that allow you to purchase commercial packages or deploy commercial configurations where you bring your own license for the software. More on that later. Up next, we will move up the stack and look at what it takes to deploy the Oracle database on all three of these cloud services.