Storage in the Oracle Cloud

This week we are going to focus on storage. Storage is a slippery topic and a difficult conversation to have. Are we talking about file synchronization services like dropbox.com, google.com/docs, or box.com? Are we talking about raw block storage or long term archive storage? There are many services available from many vendors. We are going to focus on block storage in the cloud that can be used for files if desired or for backups of databases and virtual machines. Some of the cloud vendors have more specialized storage offerings, like Azure tables that offer a noSQL type of storage or Amazon S3, which lets you run a website without a web server. Today we will look at the Oracle IaaS Storage set of products. This is different from the Oracle PaaS Documents option, which is more of a Google Docs like solution. The IaaS Storage is a block of storage that you pay for on either a metered or non-metered usage basis.


Starting from the cloud.oracle.com web page, we click on Infrastructure and follow the Storage route. We see that we get raw block storage or archive storage as options. We also have the option of an on-site cache front end that reduces latency and offers an NFS front end to users, providing more of a document management strategy rather than a raw block option.

Before we dive a little deeper into the options and differences between the storage appliance, spinning disk, and spinning tape in the cloud, we need to have a discussion about pricing and usage models. If you click on the Pricing tab at the top of the screen you see the screens below.


Metered pricing consists of three parts: 1) how much storage are you going to start with, 2) how much storage are you going to grow to, and 3) how much are you going to read back? Metering is difficult to guesstimate and unfortunately it has a significant cost associated with being wrong. Many long term customers of AWS S3 understand this and have gotten sticker shock when the first bill comes in. The basic cost for outbound transfer is measured on a per GB basis. The more that you read across the internet, the more you pay. You can circumvent this by reading into a compute server in the Oracle cloud and not have to pay for the outbound transfer. If, for example, you are backing up video surveillance data and uploading 24 hours of video at the end of the day, you can read the 24 hour bundle into a compute server, extract the 10-15 minutes that you are interested in, and pay the outbound charges only on the smaller video file.

Non-metered pricing consists of one part: how much storage are you going to use over the year. Oracle does not charge for the amount of data transferred inbound or outbound with this storage. You can read and write as much as you want and there is no charge for data transfer across the internet. In the previous example you could read the 24 hours of video from the cloud storage, throw away 90% of it from a server in your data center, and not incur any charges for the volume of transfer.

Given that pricing is difficult to calculate, we created our own spreadsheet to estimate pricing as well as part numbers that should be ordered when consuming Oracle cloud resources. The images below show the cost of 120 TB of archive storage, metered block storage, and non-metered block storage.



Note that the data transfer price is non-trivial. Reading the data back from the cloud can get significantly more expensive than the cost of the storage itself. A good rule of thumb is that the cost of spinning disk in the cloud should not exceed $30/TB/month or $400/TB/year. If you look at the cost of a NetApp or EMC storage system, you are looking at a $3K-$4K/TB purchase price with 10% annual maintenance ($300-$400/TB/year). If you are currently running out of storage and your NFS filer is filling up, you can purchase cloud resources for a few months and see if it works. The cost is roughly what you would pay in annual support, and you can grow your cloud storage as needed rather than buying three years ahead as you would with a filer in your data center. The key issue with cloud storage is latency and access time. Access to a filer in your data center is typically 10ms while access time to cloud storage is typically 80+ms. All cloud storage vendors have on-site appliance solutions that act as cache front ends to address this latency problem. Oracle has one that talks NFS. Amazon has one that talks iSCSI. Microsoft has one that talks SMB. There truly is no single vendor with a generic solution that addresses all problems.

Enough with the business side of storage. Unfortunately, storage is a commodity, so the key conversation is economics, reliability, and security. We have already addressed economics. When it comes to reliability, the three cloud vendors address data replication and availability in different ways. Oracle triple mirrors the data and provides public-private key encryption of all data uploaded to the cloud. Data can be mirrored to another data center in the same geography but cannot be mirrored across an ocean. This selection is done post configuration and is tied to your account as a storage configuration.

Now to the ugly part of block storage. Traditionally, block storage has been addressed through an operating system as a logical unit or an aggregation of blocks on a disk drive. Terms like tracks and sectors bleed into the conversation. With cloud storage, none of that is part of the discussion. Storage in the cloud is just storage. It is accessed through an interface called a REST api. The data can be created, read, updated, and deleted using HTTP calls. All of this is documented in the Oracle Documents – Cloud Storage web site.

The first step is to authenticate to the cloud site with an instance name, username, and password. What is passed back is an authentication token. Fortunately, there are a ton of tools to help create and send HTTP requests, specifically tuned to build headers and JSON structured data packets for REST api interfaces. The screen below shows the Postman interface available through Chrome. A similar extension exists for Firefox called RESTClient. Unfortunately, there is no equivalent extension for Internet Explorer.

We get the auth header by typing the username and password into the Basic Authentication screen.

Once we are authorized, we connect to the service by going to https://storage.us2.oraclecloud.com/v1/Storage-(identity domain) where identity domain is the cloud provider account that we have been assigned. In our example we are connecting to metcsgse00029 as our identity domain and logging in as the user cloud.admin. We can see what “containers” are available by sending a GET call, or create a new container by sending a PUT call with the new container name at the end of the URL. I use the word container because the top level of storage consists of different areas. These areas are not directories. They are not file systems. They are containers that hold special properties. We can create a container that is standard storage, which represents spinning disk in the cloud, or we can create a container that is archive storage, which represents a tape unit in the cloud. This is done by sending the X-Storage-Class header. If there is no header, the default is block storage and spinning disk. If the X-Storage-Class is set to Archive, it is tape in the cloud. Some examples of creating a container are shown below. We can do this via Postman inside Chrome or from the command line.

From the command line this would look like

export OUID=cloud.admin
export OPASS=mypassword
export ODOMAIN=metcsgse00029
curl -is -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" \
     -H "X-Storage-Pass: $OPASS" \
     https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

This should return an HTTP header with HTTP 200 OK and an embedded header of X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706. Note that the value after the X-Auth-Token is what we will pass into all other requests. This token changes with each authentication request and is good for 30 minutes from first execution. Once we have the authentication finished, we change the request type from a GET to a PUT and append the container name to the end. The screen above shows how to do this with Postman. The results should look like the screen below. We can do this from the command line as shown below as well.

curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/new_area

In this example we create a new container from the command line called new_area. We can verify it by listing the cloud storage, changing the PUT to a GET.

c url -is -X GET -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706"
https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN


Both of these methods allow us to see the storage that we created. I personally do not like this interface. It is not intended to be human consumable. Uploading and downloading a file is difficult at best. A user interface that supports dragging and dropping files is desirable. This is where dropbox and google docs shine. They allow you to drag and drop as well as synchronize directories to cloud storage. The Oracle Storage Cloud is not intended to be this solution. It is designed so that you can drop a new library into your RMAN backup and back up straight from your database to the cloud. You can point your CommVault or Legato backup software at a cloud instance and replicate your data to the cloud. If you want a human readable interface you need to purchase something like Cloudberry Explorer from Cloudberry. This gives you a Windows Explorer like interface and allows you to drag and drop files, create containers and directories, and schedule archives or backups as desired.



Note that the way you create a block storage container vs an archive container in Cloudberry is a simple menu selection. Retrieving archive storage is a little more complex because the tape unit must stage the file from tape to disk and notify you that the restoration has completed; we will defer that discussion to a later blog.
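From the command line the same choice is just the X-Storage-Class header described earlier. A sketch, reusing the authentication token from the calls above (the container name here is arbitrary):

curl -is -X PUT -H "X-Auth-Token: AUTH_tk578061b9ae7f864ae9cde3cfdd75d706" \
     -H "X-Storage-Class: Archive" \
     https://storage.us2.oraclecloud.com/v1/Storage-$ODOMAIN/archive_area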

Copying files is little more than dragging and dropping a file between sections of a window in Cloudberry.

For completeness, I have included the command line screen shots so that you can see the request/response of a command line interaction.



It is important to remember our objective. We can use the cloud block storage as a repository for things like databases and as a holding point for our backups. When we configure a database in the cloud, we back up and restore from this storage. This is configured in the database provisioning screen. The Storage-metcsgse00029/backup container is the location of RMAN backups and restores. The backup container is created through the REST api or the Cloudberry interface. We can also attach to the cloud storage through the cloud storage appliance software, which runs inside a virtual machine, listens for NFS requests, and translates them into REST api calls. A small disk is attached to the virtual machine and acts as a cache front end to the cloud storage. As files are written via NFS they are copied to the cloud storage. As the cache fills up, file contents are dropped from local storage and the metadata pointing to where the files are located is updated, relocating the storage to the cloud rather than the cache disk. If a file is retrieved via NFS, the file is read from cache or retrieved from the cloud and inserted into the cache as it is written to the client that requested it.



In summary, we covered the economics behind why you would select cloud storage over on-site storage. We talked about how to access the storage from a REST client in the browser, a desktop tool like Cloudberry, or the command line. We talked about improving latency and security. Overall, cloud based storage is something that everyone is familiar with. Products like Facebook, Picasa, or Instagram do little more than store photos in cloud storage for you to retrieve when you want; you pay for these services through advertisements injected into the web page. Corporations are turning more and more towards cloud storage as a way to consume long term storage at a much lower price. The Oracle Storage Cloud service is the first of three that we will evaluate this week.

installing Tomcat on Docker

A different way of looking at running Tomcat is to ignore the cloud platform and install and configure everything inside a Docker instance. Rather than picking a cloud instance we are going to run this inside VirtualBox and assume that all of the cloud vendors will allow you to run Docker or configure a Docker instance on a random operating system. We initially installed and configured Oracle Enterprise Linux 7.0 from an iso into VirtualBox. We then installed Docker and started the service with the commands

sudo yum install docker
sudo systemctl start docker

We can search for a Tomcat installation and pull it down to run. We find a Tomcat 7.0 version from the search and pull down the image.

docker search tomcat
docker pull consol/tomcat-7.0


We can run the new image that we pulled down with the commands

docker run consol/tomcat-7.0
docker ps

The docker ps command allows us to look at the container id that is needed to find the ip address of the instance that is running in docker. In our example we see the container id is 1e381042bdd2. To pull the ip address we execute

docker inspect -f '{{.NetworkSettings.IPAddress}}' 1e381042bdd2

This returns the ip address of 172.17.0.2 so we can open this ip address and port 8080 to see the Tomcat installation.
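If we want the container to come back after a failure and to be reachable on the host rather than only by its internal address, a minimal sketch (assuming the image exposes port 8080, which Tomcat images generally do) is to run it detached with a restart policy and a published port:

# run detached, map host port 8080 into the container, restart automatically on failure
docker run -d --restart=always -p 8080:8080 --name tomcat7 consol/tomcat-7.0

# confirm the port mapping
docker port tomcat7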



In summary, this was not much different than going through Bitnami. If you have access to Docker containers in a cloud service then this might be an alternative. All three vendors not only support Docker instances but have announced or already have Docker services available through IaaS. Time-wise it did take a little longer because we had to download an operating system as well as Java and Tomcat. The key benefit is that we can create a master instance and build our own Docker image to launch. We can script Docker to restart if things fail and take more advanced actions if we run out of resources. Overall, this might be worth researching as an alternative to provisioning and running services.

technical diversion – DBaaS Rest APIs

We are going to take a side trip today. I was at Collaborate 2016 and one of the questions that came up was how you provision 40 database instances for a lab. I really did not want to sit and click through 40 screens and log into 40 accounts, so I did a little research. It turns out that there is a relatively robust REST api that allows you to create, read, update, and delete database instances. The DBaaS REST API documentation is a good place to start to figure out how this works.

To list the instances that are running in the database service, use the following command. To make things easier and to allow us to script creation, we define three variables on the command line. I did most of this testing on a Mac so it should translate to Linux and Cygwin. The three variables that we need to create are

  • ODOMAIN – instance domain that we are using in the Oracle cloud
  • OUID – username that we log in as
  • OPASS – password for this instance domain/username

export ODOMAIN=mydomain
export OUID=cloud.admin
export OPASS=mypassword
curl -i -X GET -u $OUID:$OPASS \
     -H "X-ID-TENANT-NAME: $ODOMAIN" \
     -H "Content-Type: application/json" \
     https://dbaas.oraclecloud.com/jaas/db/api/v1.1/instances/$ODOMAIN

What should come back is

HTTP/1.1 200 OK
Date: Sun, 10 Apr 2016 18:42:42 GMT
Server: Oracle-Application-Server-11g
Content-Length: 1023
X-ORACLE-DMS-ECID: 005C2NB3ot26uHFpR05Eid0005mk0001dW
X-ORACLE-DMS-ECID: 005C2NB3ot26uHFpR05Eid0005mk0001dW
X-Frame-Options: DENY
X-Frame-Options: DENY
Vary: Accept-Encoding,User-Agent
Content-Language: en
Content-Type: application/json
{"uri":"https:\/\/dbaas.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/metcsgse00027","service_type":"dbaas","implementation_version":"1.0","services":[{"service_name":"test-hp","version":"12.1.0.2","status":"Running","description":"Example service instance","identity_domain":"metcsgse00027","creation_time":"Sun Apr 10 18:5:26 UTC 2016","last_modified_time":"Sun Apr 10 18:5:26 UTC 2016","created_by":"cloud.admin","sm_plugin_version":"16.2.1.1","service_uri":"https:\/\/dbaas.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/metcsgse00027\/test-hp"},{"service_name":"db12c-hp","version":"12.1.0.2","status":"Running","description":"Example service instance","identity_domain":"metcsgse00027","creation_time":"Sun Apr 10 18:1:21 UTC 2016","last_modified_time":"Sun Apr 10 18:1:21 UTC 2016","created_by":"cloud.admin","sm_plugin_version":"16.2.1.1","service_uri":"https:\/\/dbaas.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/metcsgse00027\/db12c-hp"}],"subscriptions":[]}

If you get back anything other than a 200 it means that you have the identity domain, username, or password incorrect. Note that we get back a json structure that contains two database instances that were previously created, test-hp and db12c-hp. Both are up and running. Both are 12.1.0.2 instances. We don’t know much more than that, but we can dive a little deeper by including the service name as part of the request. A screen shot of the deeper detail is shown below.
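The deeper request is the same GET with the service name appended; for example, using db12c-hp, one of the services returned above:

curl -i -X GET -u $OUID:$OPASS \
     -H "X-ID-TENANT-NAME: $ODOMAIN" \
     https://dbaas.oraclecloud.com/jaas/db/api/v1.1/instances/$ODOMAIN/db12c-hp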

A list of the most common commands is shown in the screen shot below.

The key options to remember are:

  • list: -X GET
  • stop: -X POST --data '{ "lifecycleState" : "Stop" }'
  • restart: -X POST --data '{ "lifecycleState" : "Restart" }'
  • delete: -X DELETE (add the instance name to the end of the URL, for example db12c-hp in the request above)
  • create: -X POST --data @createDB.json

In the create option we include a json file that defines everything for the database instance.

{
  "serviceName": "test-hp",
  "version": "12.1.0.2",
  "level": "PAAS",
  "edition": "EE_HP",
  "subscriptionType": "HOURLY",
  "description": "Example service instance",
  "shape": "oc3",
  "vmPublicKeyText": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAnrfxP1Tn50Rvuy3zgsdZ3ghooCclOiEoAyIl81Da0gzd9ozVgFn5uuSM77AhCPaoDUnWTnMS2vQ4JRDIdW52DckayHfo4q5Z4N9dhyf9n66xWZM6qyqlzRKMLB0oYaF7MQQ6QaGB89055q23Vp+Pk5Eo+XPUxnfDR6frOYZYnpONyZ5+Qv6pmYKyxAyH+eObZkxFMAVx67VSPzStimNjnjiLrWxluh4g3XiZ1KEhmTQEFaLKlH2qdxKaSmhVg7EA88n9tQDWDwonw49VXUn/TaDgVBG7vsWzGWkRkyEN57AhUhRazs0tEPuGI2jXY3V8Q00w3wW38S/dgDcPFdQF0Q== rsa-key-20160107",
  "parameters": [
    {
      "type": "db",
      "usableStorage": "20",
      "adminPassword": "Test123_",
      "sid": "ORCL",
      "pdb": "PDB1",
      "failoverDatabase": "no",
      "backupDestination": "none"
    }
  ]
}

The vmPublicKeyText is the contents of our id_rsa.pub file that we use to connect to the service. I did not include a backup destination, although I could have; it requires embedding a password in the file and I did not want to show a service definition with a username and password in it.
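Putting the pieces together, a sketch of the create call, assuming the JSON above is saved as createDB.json:

curl -i -X POST -u $OUID:$OPASS \
     -H "X-ID-TENANT-NAME: $ODOMAIN" \
     -H "Content-Type: application/json" \
     --data @createDB.json \
     https://dbaas.oraclecloud.com/jaas/db/api/v1.1/instances/$ODOMAIN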

Overall, I prefer scripting everything and running it from a command line. Call me old school, but scripting the work and getting a notification when it is done appeals to me far more than sitting for hours clicking through screens.
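As an example of what that scripting could look like, here is a hypothetical loop that stamps out the 40 lab instances that started this whole exercise by substituting a unique service name into the template file for each one (the sed pattern and file names are assumptions, not something from the documentation):

for i in $(seq -w 1 40); do
  # clone the template with a unique serviceName, then submit the create request
  sed "s/test-hp/lab$i/" createDB.json > /tmp/lab$i.json
  curl -i -X POST -u $OUID:$OPASS \
       -H "X-ID-TENANT-NAME: $ODOMAIN" \
       -H "Content-Type: application/json" \
       --data @/tmp/lab$i.json \
       https://dbaas.oraclecloud.com/jaas/db/api/v1.1/instances/$ODOMAIN
done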

Tomcat on Azure

Today we are going to install Tomcat on Microsoft Azure. In the past three days we have installed Tomcat on Oracle Linux using Bitnami and onto a raw virtual image, as well as on Amazon AWS using a raw virtual image. Microsoft does not really have a notion of a MarketPlace like the AWS commercial or public domain AMI markets. It does have Bitnami and we could go through the installation on Azure just like we did on the Oracle Compute Cloud. Rather than repeating the exercise on yet another platform, and since the Linux installation would be no different than the Oracle Linux raw virtual machine install, let’s do something different and look at how we would install Tomcat on Windows on Azure. You can find Tomcat on Linux Instructions or Tomcat on Windows Instructions. To be honest we won’t deviate much from the second one, so follow this post or follow the instructions from Microsoft; they are basically the same.

The steps that we need to follow are

  • Create a virtual machine with Windows and Java enabled
  • Download and install Tomcat
  • Open the ports on the Azure portal
  • Open the ports on Windows

We start by loading a virtual machine in the Azure portal. Doing a search for Tomcat returns the Bitnami image as well as a Docker Tomcat container. This might work but it does not achieve what we want for this exercise. We might want to look at a container later, but for our future needs we need to be able to connect to a database and upload jar and war files, and I am not sure that a container will do this.

We search for a JDK and find three different versions. We select the JDK 7 and click Create.


In creating the virtual machine, we define a name for our system, a default login, a password (I prefer a confirmation on the password rather than just entering it once), our default way of paying, where to place its storage, and which data center to use based on the storage we select. We go with the default East configuration and click OK.

Since we are cheap and this is only for demo purposes, we will select A0 Standard. The recommended shape is A1 Standard but it is $50 more per month. Having played with the A0 Standard, though, we might be better off going with the A1 Standard; yes, it is more expensive, but the A0 shape is so painfully slow that it is almost unusable.

We will want to open up ports 80, 8080, and 443. These will all be used for Tomcat. This can be done by creating a new security rule and adding port exceptions when we create the virtual machine. We can see this in the installation menu.

We add these ports and click Create to provision the virtual machine.



One of the things that I don’t like about this configuration is that we have three additional ports to add, and when we add them we don’t see the last two rules. It would be nice if we could see all of the ports that we define. We also need to make sure that each port definition has a different priority; the installation will fail if we assign priority 1000 to all of the ports.

Connection to the virtual machine is done through remote desktop. If you go to the portal and click on the virtual machine you will be able to connect to the console. I personally don’t like connecting to a gui interface but prefer a command line interface. You must connect with a username and password rather than a digital certificate.





The first thing that comes up with Windows 2012 Server is the server management screen. You can use this to configure the operating system firewall and allow ports 80, 8080, and 443 out to the internet. This also requires going to the portal to enable these ports as network rules. There are two configurations that you need to make to enable port 8080 to go from your desktop, through the internet, get routed to your virtual machine, and then into your Tomcat application.
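If you prefer to script the Windows side instead of clicking through the Control Panel, the same firewall openings can be made from an administrative command prompt. This is a sketch rather than the exact steps from the Microsoft instructions, and the rule names are arbitrary:

netsh advfirewall firewall add rule name="HTTP 80" dir=in action=allow protocol=TCP localport=80
netsh advfirewall firewall add rule name="Tomcat 8080" dir=in action=allow protocol=TCP localport=8080
netsh advfirewall firewall add rule name="HTTPS 443" dir=in action=allow protocol=TCP localport=443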

For those of you that are Linux and Mac snobs, getting Windows to work in Azure was a little challenging. Even simple things like opening a browser were not obvious. This is more a lack of Windows understanding on my part. To get Internet Explorer to come up you first have to move your mouse to the far right of the screen.


At first it did not work for me because the Windows screen was a little larger than my desktop and I had to scroll all the way to the bottom and all the way to the right before the pop up navigation window came up. When the window does come up you see three icons. The bottom icon is the configuration option that gets you to the Control Panel to configure the firewall. The icon above it is the Microsoft Windows icon, which gives you an option to launch IE. Yes, I use Windows on one of my desktops. Yes, I do have an engineering degree. No, I don’t get this user interface. Hovering over an empty spot on the screen (which is behind a scroll bar) makes no sense to me.

From this point forward I was able to easily follow the Microsoft Tomcat installation instructions. If you don’t select the JDK 7 virtual machine you can download the JDK from java.com. You then download the Tomcat app server. We selected Tomcat 7 for the download and followed the defaults. We do need to configure the firewall on the Windows server to enable ports 80, 8080, and 443 so we can see everything from our desktop browser. We can first verify that Tomcat is properly installed by going to http://localhost:8080 from Internet Explorer in the virtual image. We can then get the ip address of our virtual machine and test the network connections from our desktop by replacing localhost with the ip address. Below are the screen shots from the install. I am not going to go through the instructions on installing Tomcat because it is relatively simple with few options, but I included the screen shots for completeness.











In summary, we could have followed the instructions from Microsoft to configure Tomcat. We could pre-configure the ports as suggested in this blog. We could pre-load the JDK with a virtual machine rather than manually downloading it. It took about 10-15 minutes to provision the virtual machine, another 5-10 minutes to download the JDK and Tomcat components, and another 5-10 minutes to configure the firewall on Windows and the port access through the Azure portal. My suggestion is to use a service like Bitnami to get a preconfigured system because it takes about half the time and enables all of the ports and services automatically.

installing Tomcat on AWS

In our last two entries we installed Tomcat on the Oracle Compute Cloud. In the first we installed the application using oracle.bitnami.com; in the second we installed Linux using the Oracle Compute Cloud, then downloaded Tomcat from tomcat.apache.org and configured the network and startup scripts. In this blog we will do the same thing for Amazon AWS. Note that there are a few other blogs that walk through the same thing.

We are going to cheat a little bit with AWS. Rather than configuring Linux, downloading Java, and downloading Tomcat, we are going to go to the Amazon Marketplace and download an image that is already configured. This is similar to going through Bitnami, but I thought it would be interesting to look at a different pre-configured instance and see how it differs from Bitnami. When we go to the marketplace we get the option of a community AMI pool or a commercial AMI pool. The selection is very diverse. I could not find anyone who pre-configured Tomcat on Oracle Enterprise Linux, but I did find Red Hat and Amazon Linux, which are from the same codebase.

It is important to note that the commercial version does come with supplemental pricing on an hourly basis. This typically prices AWS as an option out of the running when compared to other cloud services.

We select an instance (the smallest since this is a demo of functionality) and go through the launch screens.

By default, the network configuration only opens up ssh and potentially port 80. We will need to add ports 8080 and 443. In hindsight we really don’t need to add port 8080 because the commercial version remaps the catalina configuration file to port 80 but we did anyway for completeness.

Adding the new ports looks like

Note that this is different from the Oracle Compute network setup. Amazon sets this up during the instance configuration while Oracle allows you to add it after the instance is created. Neither is good or bad, just different. You do need to scroll to the far right to see the Security Group definition and follow the links to modify the rules to allow another port. My first assumption was to go to the instance configuration menu at the top, but all the network options were greyed out; the ports are changed through the security group link at the far right. I initially did not see this because my fonts were too large and I did not realize that I had to scroll over to find it.





Once we have the network configured, we can review and launch the instance. Note that we can use our own ssh keys to attach to the instance.


When we finish and confirm everything we should get an initialization screen. If the startup takes too long we will get a waiting screen. Once the instance is created we should see that it is running in the EC2 console.


Once the instance is started we can connect to it. We do this by looking up the ip address and connecting with ssh.


It is important to note that Tomcat is installed in /opt/tomcat7 and the startup script in /etc/rc3.d/S80tomcat7_1 is already set up.

We restart the service just to test the startup script, test the instance locally by fetching the html from the command line, and confirm that everything works from our desktop browser.
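A rough sketch of that local check from the ssh session (the service name below is a guess based on the init script name and may differ on this AMI):

sudo service tomcat7_1 restart
curl -s http://localhost:8080 | head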

In summary, we were able to install and configure everything using the marketplace in less than 15 minutes. The configuration was similar to the Bitnami instance, but it is important to note that there is an extra cost associated with this instance on an hourly basis. The Bitnami economics are done on a per instance charge; I, for example, pay $30/month to allow me to deploy three instances across multiple cloud vendors. Note that the model is per instance and not per hour. We could have gone through the exact same configuration that we did with the Oracle Compute Cloud instance by installing Linux and then using the Tomcat website to download the binaries and install. The same websites, same tar files, and same configurations work since both are Linux based installs.

installing Tomcat on Oracle

In our last entry we installed Tomcat onto the Oracle Compute Cloud using Bitnami. Just as a reminder, it was easy, it was simple, and it took 15 minutes. In this entry we are going to go through the manual process and show what has to happen on the server side and what has to happen on the cloud side. The steps that we will follow are

  • Install Oracle Enterprise Linux on the Oracle Compute Cloud Service
  • ssh into the box and install/update
    • Java 7
    • Tomcat
    • iptables to open port 80 and 443
  • Start the service and verify localhost:8080 works
  • Configure the ports on the cloud side so that 80 and 443 pass through

Installing Oracle Enterprise Linux has been done in a previous blog. We won’t go through the screen shots for this other than to say that we called the box prsTomcat (as we did in the previous example) and requested OEL 6.6 with a 60 GB hard drive because this was the default installation and configuration. We selected the 60 GB hard drive because we had one preconfigured and it would reduce the creation time by not having to create and populate a new hard drive.

To ssh into the instance we need to go to the compute page and find the ip address. We need to login as opc so that we can execute sudo and install packages.


Once we have logged in we first need to verify that java is installed and configured. We do this with

java -version

This command verifies that Java is properly installed. The next step is to download Tomcat. To get the correct version and the location to download it from, we go to tomcat.apache.org and figure out what version to install. This is a little confusing because there are numerous versions and numerous dot releases. We are looking for Tomcat 7, so we scroll down and download the tar.gz binary distribution.

We look for Tomcat 7 and follow the download link.


Once we have downloaded the binary bundle we need to untar it into the location that we want to run it from. In this example we are going to install it in /usr/local/apache-tomcat-version_number. This is done with

"w g e t " http://www-us.apache.org/dist/tomcat/tomcat-7/v7.0.68/bin/apache-tomcat-7.0.68.tar.gz
cd /usr/local
sudo tar xvzf /home/opc/apache-tomcat-7.0.68.tar.gz

Now that we have the binary downloaded, we have to start the server. This is done with the bin/startup.sh command. It is important to note that we will still need to install and configure an init script in /etc/rc3.d/S99tomcat to start and stop the service at boot; this requires hand editing the file to run the startup.sh script. Once we have Tomcat installed and running we can use wget to verify that the server is running.

sudo /usr/local/apache-tomcat-7.0.68/bin/startup.sh
wget http://localhost:8080


This should return the html page served by the Tomcat server. If it does not, we have an issue with the server starting or running.
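As noted earlier, the init script has to be created by hand. A minimal sketch of what /etc/init.d/tomcat could look like, linked as /etc/rc3.d/S99tomcat (the path, runlevels, and priorities here are assumptions based on the install above):

#!/bin/sh
# minimal Tomcat init script sketch
# chkconfig: 345 99 10
# description: start and stop the Tomcat server installed in /usr/local
CATALINA_HOME=/usr/local/apache-tomcat-7.0.68
case "$1" in
  start)   $CATALINA_HOME/bin/startup.sh ;;
  stop)    $CATALINA_HOME/bin/shutdown.sh ;;
  restart) $CATALINA_HOME/bin/shutdown.sh; sleep 5; $CATALINA_HOME/bin/startup.sh ;;
  *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac
exit 0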

Now that we have the server up and running, we need to update the iptables to add ports 80, 8080, and 443 as pass through ports. This is done by

# insert the rules ahead of the default REJECT rule, then persist them
sudo iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
sudo service iptables save

Once we have the ports properly running at the operating system layer we need to go back to our compute cloud console and create the security rules for these ports. This is done by going into the instance and clicking on the network tab at the top of the screen.

We open up port 80 first from the public internet to the instance.

We then open up port 443 similarly

The final configuration should look like

We also need to add port 8080 which is by default not installed and configured. We do this by defining a new Security Application from the Networking tab. We add port 8080 then we have to add it as a security rule.



From this point we should be able to look from our desktop to the cloud instance and see port 80, 8080, and 443. We can test port 443 by logging into the management console as we did with the bitnami configuration. We can test port 8080 by going to the default link for our server ip address.

In summary, it took 5-6 minutes to get Linux installed. It took 5-10 minutes to do the yum install and updates, depending on how many packages were out of sync and needed updating. It took 4-5 minutes to open up the ports and reconfigure the network access. To get the same configuration as Bitnami we would also have to edit the Connector port in conf/server.xml to redirect the browser from port 8080 to port 80, as well as create a startup script to initialize the server at boot time. Overall this method took us about 50% longer to install and configure the exact same thing that we got with Bitnami. The benefit of doing the configuration ourselves is that we could script it with tools like Puppet and Chef. We could automate this easily and make sure it is done the same way every time. Doing it by hand, creating the instance, logging in, and using a graphic interface to configure everything leads to errors and divergence as time goes on.
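For reference, the port change mentioned above is a one-line edit if the default Connector is still on 8080. A sketch, assuming the install path used earlier; Tomcat has to be restarted (and run with privileges to bind to port 80) for the change to take effect:

sudo sed -i 's/port="8080"/port="80"/' /usr/local/apache-tomcat-7.0.68/conf/server.xml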


installing Tomcat through bitnami

This week we are going to focus on installing Tomcat onto cloud servers. Today we are going to take the easy route: we are going to use bitnami and look at how long everything takes as well as the automatic configuration that it sets up. In previous lessons we talked about linking your cloud accounts to bitnami and will not repeat those instructions. For those that are new to public domain software, Tomcat is a public domain software package that allows you to host java applications, similar to WebLogic. I won’t get into a debate over which is better because we will be covering how to install WebLogic in a later blog. I will give you all the information that you need to decide which is best for your company and implementation.

We login to our oracle.bitnami.com web site and verify our account credentials.

We want to launch a Tomcat server so we search for Tomcat and hover over the icon. When we hover over the icon the word Launch appears and we click on this button.

Once we click Launch we get the virtual machine configuration screen.

Things to note on this screen are

  • the name is what you want to use to identify the virtual machine
  • the cloud account identifies which data center, whether it is metered or un-metered, and what shapes will be available to this virtual machine
  • the network is automatically configured for ports 80 and 443, enabled not only in the cloud network security configuration but in the operating system as well
  • the operating system gives you options but we default to OEL 6.7
  • we could increase the disk size and select the memory/cpu option but it does not show us the cost, because bitnami does not know if your account is metered or un-metered, which have different costing models

After we click the create button we get an update that shows us how the installation is progressing. The installation took just under 15 minutes to finish everything, launch the instance, and show us the configuration.

Once everything finishes we get the ip address, the passwords, and the ssh keys that were used to create this virtual machine.

We are able to open the link to the Tomcat server by clicking on Go To The Application at the top right of the screen. This allows us to see the splash screen as well as access the management console.

When you click on the Access my Application you get the detailed information about the Tomcat server. We can go to the management console and look at the configuration as well as bring the server up and down.



At this point we have a valid configuration that we can see across the internet. The whole process took 15 minutes and did not require any editing or configuration other than selecting the shape and giving the virtual machine a name.

security diversion before going up the stack

This entry is going to be more of a Linux tutorial than a cloud discussion, but it is relevant. One of the questions and issues that admins are faced with is the creation and deletion of accounts. With cloud access being something relatively new, the last thing that you want is to generate a password with telnet access to a server in the cloud. Telnet is inherently insecure and any script kiddy with a desire to break into accounts can run ettercap and look for clear text passwords flying across an open wifi or wired internet connection. What you really want to do is log in via secure ssh, or secure putty if you are on Windows. This is done with a public/private key exchange.

There are many good explanations of ssh key exchange, generating ssh keys, and using ssh keys. My favorite is a digitalocean.com writeup. The net-net of the writeup is that you generate a public and private key using ssh-keygen or PuTTYgen and upload the public file to the ~user/.ssh/authorized_keys location for that user. The following scripts should work on an Azure, Amazon, or Oracle Linux instance created in the compute shapes. The idea is that we initially created a virtual machine with the cloud vendor, and the account that we created with the VM is not our end user but our cloud administrator. The next level of security is to create a new user and give them permissions to execute what they need to execute on this machine. For example, in the Oracle Database as a Service images there are two users created by default: oracle and opc. The oracle user has the rights to execute everything related to sqlplus, access the file systems where the database and backups are located, and everything else related to the oracle user. The opc user has sudo rights so that they can execute root scripts, add software packages, apply patches, and other things. The two users have different access rights and administration privileges. In this blog we are going to look at creating a third user so that we can have someone like a backup administrator log in and copy backups to tape or a disk at another data center. To do this you need to execute the following instructions.

sudo useradd -g dba backupadmin
sudo mkdir ~backupadmin/.ssh
sudo cp ~oracle/.ssh/authorized_keys ~backupadmin/.ssh
sudo chown -R backupadmin:dba ~backupadmin
sudo chmod 700 ~backupadmin/.ssh

Let’s walk through what we did. First we create a new user called backupadmin and add this user to the dba group so that they can perform the dba functions that are given to that group. If the oracle user is part of a different group then the new user needs to be added to that group instead. Next we create a hidden directory in the backupadmin home directory called .ssh. The dot in front of the name means that it is not listed with the typical ls command, and the sshd program will by default look in this directory for authorized keys and known hosts. Next we copy a known authorized_keys file into the new backupadmin .ssh directory so that we can log in as backupadmin by presenting the matching private key. The last two commands set the ownership and permissions on the new .ssh directory and all files under it so that backupadmin can read and write this directory and no one else can. The chown sets ownership to backupadmin, and the -R applies the same ownership to everything from that directory down; while we are at it we also set the group on all files to dba. The final command sets permissions on the .ssh directory to read, write, and execute for the owner of the directory only. The zeros remove permissions for the group and world.
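Best practice, as noted below, is to give the new user a key pair of their own rather than a copy of the oracle user's. A sketch of generating and installing a dedicated key (the file names are hypothetical):

# generate a dedicated key pair for backupadmin on your desktop or the server
ssh-keygen -t rsa -b 2048 -f backupadmin_key -C backupadmin

# install only the public half on the server and lock down the permissions
sudo sh -c 'cat backupadmin_key.pub >> ~backupadmin/.ssh/authorized_keys'
sudo chown backupadmin:dba ~backupadmin/.ssh/authorized_keys
sudo chmod 600 ~backupadmin/.ssh/authorized_keys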

In our example we are going to show how to access a Linux server from Azure and modify the permissions. First we go to the portal.azure.com site and login. We then look at the virtual machines that we have created and access the Linux VM that we want to change permissions for. When we created the initial virtual machine we selected ssh access and uploaded a public key. In this example we created the account pshuff as the initial login. This account is created automatically for us and is given sudo rights. This would be our cloud admin account. We present the same ssh keys for all virtual machines that we create and can copy these keys or upload other keys for other users. Best practice would be to upload new keys and not replicate the cloud admin keys to new users as we showed above.

From the portal we get the ip address of the Linux server. In this example it is 13.92.235.160. We open up putty from Windows, load the 2016.ppk key that corresponds to the 2016.pub key that we initialized the pshuff account with. When asked for a user to authenticate with we login as pshuff. If this were an Oracle Compute Service instance we would login as opc since this is the default account created and we want sudo access. To login as backupadmin we open putty and load the ppk associated with this account.

When asked for what account to login as we type in backupadmin and can connect to the Linux system using the public/private key that we initialized.

If we examine the public key it is a long string of base64 encoded text. To revoke a user's access to the system we change the authorized_keys file to a different key. The pub file looks like

if we open it in wordpad on Windows. This is the file that we uploaded when we created the virtual machine.

To deny access to backupadmin (in the case of someone leaving the organization or moving to another group) all we have to do is edit the authorized_keys file as root and delete this public key. We can insert a different key with a copy and paste operation allowing us to rotate keys. Commercial software like key vaults and key management systems allow you to do this from a central control point and update/rotate keys on a regular basis.

In summary, best practice is to upload a key per user and rotate the keys on a regular basis. Accounts should be created with ssh keys and not password access. Rather than copying the keys from an existing account, onboarding a new user is an upload and an edit of authorized_keys. Access can be revoked by the root user by removing the keys, or from an automated key management system.

next generation of compute services

Years ago I was a systems administrator at a couple of universities and struggled to make sure that systems were operational and supportable. The one thing that frustrated me more than anything else was how long it took to figure out how something was configured. We had over 100 servers in the data center, and on each of these servers we had departmental web servers, mail servers, and various other servers to serve the student and faculty users. We standardized on the Apache web server but there were different versions, different configurations, and different additions to each one. This was before virtualization and golden masters became a trendy topic, and things were built from scratch. We would put together Linux servers with Apache web servers, PHP, and MySQL; these later became called LAMP servers. Again, one frustration was the differences between the versions, how they were compiled, and how they were customized for a department. It was bad enough that we had different Linux versions, but we had different versions of every other software component too. Debugging became a huge issue because you first had to figure out how things were configured, then you had to figure out where the logs were stored, and only then could you start looking at what the issue was.

We have been talking about cloud compute services. In past blogs we have talked about how to deploy an Oracle Linux 6.4 server onto compute clouds in Amazon, Azure, and Oracle. All three look relatively simple. All three are relatively robust. All three have advantages and disadvantages. In this blog we are going to look at using public domain pre-compiled bundles to deploy our LAMP server. Note that we could download all of these modules into our Linux compute services using a yum install command. We could figure out how to do this or look at web sites like digitalocean.com that have tutorials on how to do this. It is interesting, but I have to ask why. It took about 15 minutes to provision our Linux server. Doing a yum update takes anywhere from 2-20 minutes based on how old your installation is and how many patches have been released. We then take an additional 10-20 minutes to download all of the other modules, edit the configuration files, open up the security ports, and get everything started. We are 60 minutes into something that should take 10-15 minutes.
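For reference, the manual route on Oracle Linux 6 is roughly the following; this is a sketch of the yum path described above, and the package names and follow-on configuration will vary with your needs:

sudo yum -y update
sudo yum -y install httpd php php-mysql mysql-server
sudo service httpd start
sudo service mysqld start
sudo chkconfig httpd on
sudo chkconfig mysqld on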

Enter stage left, bitnami.com. This company does exactly what we are talking about. They take public domain code and common configurations that go a step beyond your basic compute server and provision these configurations into cloud accounts. In this blog we will look at provisioning a LAMP server. We could just as easily have configured a wiki server, tomcat server, distance education moodle server, or any of the other 100+ public domain configurations that bitnami supports.

The first complexity is linking your cloud accounts into the bitnami service. Unfortunately, the accounts are split across three different sites: oracle.bitnami.com, aws.bitnami.com, and azure.bitnami.com. The Oracle and Azure account linkages are simple. For Oracle you need to look up the REST endpoint for the cloud service. First, you go to the top right and click the drop down to do account management.

From this you need to look up the rest endpoint from the Oracle Cloud Console by clicking on the Details link from the main cloud portal.

Finally, you enter the identity domain, username, password, and endpoint. With this you have linked the Oracle Compute Cloud Services to Bitnami.

Adding the Azure account is a little simpler. You go to the Account – Subscriptions pull down and add account.

To add the account you download a certificate from the Azure portal as described on the bitnami.com site and import it into the azure.bitnami.com site.

The Amazon linkage is a little more difficult. To start with you have to change your Amazon account according to Bitnami Instructions. You need to add a custom policy that allows bitnami to create new EC2 instances. This is a little difficult to initially understand but once you create the custom policy it becomes easy.

Again, you click on the Account – Cloud Accounts to create a new AWS linkage.

When you click on the create new account you get an option to enter the account name, access key, and secret key for your AWS account.

I personally am a little uncomfortable providing my secret key to a third party because it opens up access to my data. I understand the need to do this but I prefer using a public/private ssh key to access services and data rather than a vendor provided key and giving that to a third party seems even stranger.

We are going to use AWS as the example for provisioning our LAMP server. To start this we go to http://aws.bitnami.com and click on the Library link at the top right. We could just as easily have selected azure.bitnami.com or oracle.bitnami.com and followed this exact same path. The library list is the same and our search for a LAMP server returns the same image.

Note that we can select the processor core count, disk size, and data center that we will provision into. We don’t get much else to choose from, but it does the configuration for us and provisions the service in 10-15 minutes. When you click the create button you get an updated screen that shows progress on what is being done to create the VM.

When the creation is complete you get a status listing, password access to the application if it has a web interface (in this case apache/php), and an ssh key for authentication as the bitnami user.

If you click on the ppk link at the bottom right you will download the private ssh key that bitnami generates for you. Unfortunately, there is not a way of uploading your own keys but you can change that after the fact for the users that you will log in as.

Once you have the private key, you get the ip address of the service and enter it into putty for Windows and ssh for Linux/Mac. We will be logging in as the user bitnami. We load the ssh key into the SSH – Auth option in the bottom right of the menu system.

When we connect we will initially get a warning but can connect and execute common commands like uname and df to see how the system is configured.

The only difference between the three interfaces is the shapes that you can choose from. The Azure interface looks similar; Azure has fewer options for processor configuration so it is shown as a list rather than a sliding scale that changes the processor options and price.

The oracle.bitnami.com create virtual machine interface does not look much different. The server selection is a set of checkboxes rather than a radio button or a sliding bar. You don’t get to pick which data center you get deployed into because this is tied to your account. You can select a different identity domain, which will list a different data center, but you don’t get a choice of data centers as you do with the other services. You are also not shown how much the service will cost through Oracle. The account might be tied to an un-metered service, which comes in at $75/OCPU/month, or to a metered service, which comes in at $0.10/OCPU/hour. It is difficult to show this from the bitnami provisioning interface, so I think that they decided not to show the cost as they do with the other services.

In summary, using a service like bitnami for pre-configured and pre-compiled software packages is the future because it has time and cost advantages. All three cloud providers have marketplace vendors that allow you to purchase commercial packages or deploy commercial configurations where you bring your own license for the software. More on that later. Up next, we will move up the stack and look at what it takes to deploy the Oracle database on all three of these cloud services.

Oracle Linux on Amazon AWS

In this entry we are going to create a Linux 6.4 virtual machine on Amazon AWS EC2. In our last entry we did this on Microsoft Azure using a single processor instance and 1.75 GB of RAM. The installation took a few steps and was relatively easy. We will not look at how to create an Amazon account but assume that you already have one. The basic AWS console looks like the image below.

When we click on the EC2 console instance it allows us to look at our existing instances as well as create new ones.

Clicking on the “Launch Instance” button allows us to start the virtual machine instance creation. We are given a choice of sources for the virtual machine. The default screen does not offer Oracle Linux as an option so we have to go to the commercial or community screens to get OEL 6.x as an option.

It is important to note that the commercial version has a surcharge on an hourly basis. If we search on Oracle Linux we get a list of different operating system versions as well as database and WebLogic installations. The Orbitera image in the commercial marketplace adds a hefty surcharge of $0.06 per hour for our instance and gets more expensive on an hourly basis as the compute shapes get larger. This brings the cost to 7x that of the Oracle Compute Service and 5x that of the Microsoft Azure instance.

The community version allows us to use the same operating system configuration without the surcharge. The drawback to this option is the trustworthiness of the configuration as well as repeatability. The key advantage of the commercial version is that it has version control and will be there a year from now. The community version might or might not be there in a year, so if you need to recreate a virtual machine based on something that you did a year ago, it might or might not be possible. On the flip side, you can find significantly older versions of the operating system in the community pool that you cannot find in the commercial one.

Given that I am cheap (and funding this out of my own pocket) we will go with the community version to reduce the hourly cost. The main problem with this option is that we installed Oracle Linux 6.4 when installing on the Oracle Compute Cloud Service and Microsoft Azure; on Amazon AWS we have to select Oracle Linux 6.5 since the 6.4 version is not available. We could select 6.6 or 6.3 but I wanted to get as close to 6.4 as possible. Once we select the OS version, we then have to select a processor shape.

Note that the smaller memory options are not available for our selection. We have to scroll down to the m3.medium shape with 1 virtual processor and 3.75 GB of RAM as the smallest configuration.

The configuration screen allows us to launch the virtual machine into a virtual network configuration as well as different availability zones. We are going to accept the defaults on this screen.

The disk selection page allows us to configure the root disk size as well as alternate disks to attach to the services. By default the disk selection for our operating system is 40 GB and traditional spinning disk. You can select a higher speed SSD configuration but there are additional hourly charges for this option.

The tags screen is used to help you identify this virtual machine with projects, programs, or geographical affiliations. We are not going to do anything on this screen and skip to the port configuration screen.

The port screen allows us to open up security ports to communicate with the operating system. Note that this is an open interface that lets us open any ports we desire, including ports like 80 and 443 for web services. We can create white lists or protected networks when we create access points, or leave everything open to the internet.

We are going to leave port 22 as the only port open. If we did open other ports we would need to change the iptables configuration on the os instance. We can review the configuration and launch the instance on the next screen.

When we create the instance we have to select a public and private key to access the virtual machine. This key pair has to have been created previously through the AWS console.

Once we select the key we get a status update of the virtual machine creation.

If we go to the EC2 instance we can follow the status of our virtual machine. In this screen shot we see that the instance is initializing.

We can now connect using putty or ssh to attach to the virtual machine. It is important to note that Amazon uses a different format for the private key: the pem extension, which is just a different format than the ppk extension. There are tools to convert the two back and forth, but we do need to select a different file type when loading the private key using putty on Windows. By default putty looks for the ppk extension, so we need to select all files to find pem keys. If you follow the guidelines from Amazon you can convert the pem key to a ppk key and access the instance as was done previously.

It is important to note that you cannot log in as oracle but have to log in as root. To enable logging in as oracle you will need to copy the public key into the .ssh directory under /home/oracle. It is a little troubling to have the default login be root and to have to enable and edit files to change this. A security model that allows you to log in as oracle or opc and sudo to root is much preferable.
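A sketch of that change from the root session, assuming the default oracle home directory and that we simply reuse the key that the instance was launched with:

mkdir -p ~oracle/.ssh
cp ~root/.ssh/authorized_keys ~oracle/.ssh/authorized_keys
chown -R oracle ~oracle/.ssh
chmod 700 ~oracle/.ssh
chmod 600 ~oracle/.ssh/authorized_keys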

In summary, the virtual machine creation is very similar to the Oracle Compute Cloud Service and the Microsoft Azure Cloud Service. The Amazon instance was a little more difficult to find; Oracle installations are not the sweet spot in AWS and other Linux instances are preferred. The ssh keys are a little unusual in that the EC2 instance wants a different format of key, and if Amazon generates the keys for you it requires a conversion utility to get them into the standard format. The cost of the commercial implementation drives the price almost to cost prohibitive. The processor and memory options are similar to the other two cloud providers, but I tried a 1 processor and 1 GB instance and it failed due to insufficient resources. We had to fall back to a much larger memory footprint for the operating system to boot.

All three cloud vendors need to work on operating system selection. When you search for Oracle Linux you not only get various versions of the operating system but database and weblogic server configurations as well. The management consoles are vastly different between the three cloud vendors as well. It is obvious what the background and focus is of the three companies. Up next, using bitnami to automate service installations on top of a base operating system.