Oracle Linux on Microsoft Azure

In this entry we are going to create an Oracle Linux 6.4 virtual machine on Microsoft Azure. In our last entry we did this on the Oracle Compute Cloud using a single-processor instance with 15 GB of RAM. The installation took a few steps and was relatively easy. We will not look at how to create an Azure account but assume that you already have one. The basic Azure console looks like the image below.

From this console we can create either a Virtual Machine or a Virtual Machine (classic). From the main console, click on the “Virtual Machine” button on the left side of the screen. We will walk down this path rather than the classic mode for this installation. After you click on the virtual machine menu item and the “+ Add” button at the top left, we can select the operating system type.

From this screen we can search for an installation type. If we type “oracle” in the search field we get over 20 entries provided by Oracle Corporation. The first few are Linux-only installations. The next few are Database and Java/WebLogic installations on Linux or Windows.

For our test, we will select Oracle Linux 6.4 to match what we did in the previous blog. With this selection we get another screen that provides links to an informational page and more details. Notice that the deployment model only allows us to create the virtual machine in the classic mode. Had we gone down the Virtual Machine (classic) menu item at the start, we would have ended up in the same place with this selection. Clicking on the “create” button takes us to the next screen where we define the properties of the virtual machine. The basic information that we need is the virtual machine name, a user name to log in as, basic security access information (password or ssh key), and the compute size.

Rather than providing a username and password to access the virtual machine, we are going to select an ssh key. We will use the same key that we used in the last blog: we copy the public ssh key that we created with puttygen and upload a text copy of it. It is important to note that we can create the virtual machine with a password, but to be honest we don’t recommend it; security becomes a huge issue. The password that you enter is not checked for strength or viability. I was able to type in the traditional “Welcome1” password and the system accepted it as a viable password. Again, this is not recommended, but I was testing to see if I could enter a simple password that is easily found in a dictionary.
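If you are working from Linux, a Mac, or Cygwin on Windows, you can generate the same kind of key pair with ssh-keygen instead of puttygen. A minimal sketch, assuming a 2048-bit RSA key and a file name of our own choosing:

# generate a 2048-bit RSA key pair; the .pub half is what we paste into Azure
ssh-keygen -t rsa -b 2048 -f ~/.ssh/azure_oel64 -C "azure-oel64"
# print the public key so it can be copied into the portal
cat ~/.ssh/azure_oel64.pub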

When we click on the Pricing Tier we can select the compute shape. When we first click on this we get three recommended choices. These choices are all single-core options with a small memory footprint. It is important to note that all are I/O limited at 500 IOs per second and all have the option for a load balancer to be put in front of the virtual machine. The key difference is the processor type. The A series is a lower speed, older processor that does not have as much power. The D series is a higher speed, newer processor. Both options have lower clock speeds than the Oracle compute shape, which is a 3.0 GHz Xeon processor. The memory configuration is significantly lower at 1.75 or 3.5 GB when compared to the 15 GB or 30 GB offered by the Oracle Compute Service.

If we want to explore more options we can click on the “View All” option at the top right. This allows us to look at over 60 different configurations that have different core counts, memory configurations, and disk options.

For our exercise we are going to go with the recommended A1 Standard configuration with 1 virtual processor and 1.75 GB of RAM.

The final steps that we need to look at are the network, disk, and availability set configurations. To be honest, we could accept the defaults and not configure these options.

If we look at the optional configurations, we can configure an availability set. This allows us to replicate services between user-defined sets of machines. For this instance we are going to use the standalone virtual machine configuration.

The next configuration option allows us to define the local network configuration, which subnet the machine will belong to, and the server name. We recommend not changing the subnet information, because entering the wrong network or an invalid subnet that does not have a DHCP server could cause issues.

We can select a reserved IP address rather than a dynamically allocated one. It is important to enter this information correctly, because you could step on an existing server on the internet and not be able to get to your virtual machine. It is also important to map the static IP address to the domain name that you have reserved through name servers on the internet. We will use the default DHCP assignment rather than a reserved IP address.

We could attach additional disks to this instance. For example, if we wanted to pre-load the Oracle database binaries, we could attach that disk as a secondary disk on our instance. We will not do this as part of our exercise but will go with the default boot disk to show how to create a basic virtual image.

We can also configure the ports that are open to the virtual machine. It is important to note that by default ssh is open and available. We could open port 80 or port 443 if we wanted to provide web access to this machine. We would also have to change the iptables configuration in the operating system to gain access to these services.
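As a sketch of that operating system step on Oracle Linux 6 (assuming the stock iptables init script), opening and persisting the web ports would look like:

# open http and https in the running firewall
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
# save the rules so they survive a reboot (Oracle Linux 6 init script)
sudo service iptables save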

Finally, we can add options to the Linux operating system. This would be similar to selecting the Orchestration option on the Oracle Compute Service. My recommendation is to not select this but to do it with the apt-get or yum installation method using post-configuration utilities and methods.
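For example, a post-configuration pass with yum on Oracle Linux might look like the sketch below; the packages are only illustrations:

# refresh packages and install a web server as an example post-install step
sudo yum -y update
sudo yum -y install httpd
# start the service and enable it at boot (Oracle Linux 6 init tools)
sudo service httpd start
sudo chkconfig httpd on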

The final options that we have deal with how we pay for the service and which data center we drop the virtual machine into. We will accept the defaults of “Pay as you go” and “US East” data center for our exercise.

When we click “Create” we are put back on the main portal screen with an update window showing progress. The creation takes a few minutes, and the machine will be dropped into the Virtual Machine window when finished.

You can click on the bell-shaped icon at the top right to see the progress bar and click on the progress bar to look at the ongoing status of creation. Note that the status is “creating” and will change when the virtual machine is finished.

Once the status turns to “running” we can click on the machine name and get more detailed information, like the IP address assigned and more detailed data on the final configuration.

Once everything is running we can log in via putty on Windows or ssh on Linux or Mac. We get the IP address from the status page of the virtual machine. We use the username that we entered when we created the Linux instance and the private ssh key to connect to the instance, just like with the Oracle Compute Service.

Once we accept the keys we can log in and verify the OS version and the disk shape that was created.
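A quick sanity check from the shell, assuming the default Oracle Linux layout, might be:

# confirm the distribution release
cat /etc/oracle-release
# confirm processor count and memory
grep -c ^processor /proc/cpuinfo
free -m
# confirm the disk layout that was provisioned
df -h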

In summary, the Oracle Compute Cloud and the Microsoft Azure Cloud are not that different. The selection of the operating system is much more of a GUI experience with Microsoft, and the Oracle shapes are much larger when it comes to memory. Microsoft has more options on the low end and high end, but the Oracle solution is designed for the Oracle Database and WebLogic servers. It took about the same amount of time to create the two virtual machines. Security is a little tighter with Oracle but can be made the same between the two: Azure gives you the option of using a username and password and allows you to open any port that you want into the virtual machine. Given that these instances are on the public internet, we recommend a tighter security configuration.

Oracle Linux on Oracle Compute Cloud

In this blog we are going to look at creating an Infrastructure as a Service foundation using Compute as a Service and Storage as a Service to create an Oracle Linux instance. To start, we log into http://cloud.oracle.com and enter our identity domain, user name, and password. In this example we are connecting to the identity domain metcsgse00026 as the user cloud.admin.

If we look to the right of the Compute Cloud Service header we see a “Service Console” tab. Clicking on this allows us to create a new virtual machine by clicking on the “Create Instance” button. Not all accounts will have the Create Instance button; your account needs to have the funding behind generic compute and the ability to consume either metered or un-metered IaaS options.

Note that we have two virtual machines that were previously created. The first listed is a database service; the compute infrastructure is automatically created when we create a Database as a Service instance. The second listed is a Java service that was created through the Java Service console; the compute infrastructure was also created for the JaaS option. We can drill into these compute services to look at security, networking, and the IP addresses assigned.

To create a virtual machine we click on the “Create Instance” button, which takes us to the next screen. On this screen we enter the name of the virtual machine that we are creating, a description label, the operating system image and type, and the shape of the instance. By shape we mean the number of processors and amount of memory, since this is how compute shapes are priced.

To select from the different types of operating systems, we can enter a “*” into the Image field and it lists a pull-down of operating system types. You can scroll down to select different configurations and instances. In the screen shot below we see that we are selecting OEL 6.4 with a 5 GB root disk. The majority of the images are generic Linux instances with different disk configurations, different software packages installed, and different OS configurations.

The next step is to select the processor and memory size. Again, this is a pull-down menu with a pre-configured set of options. We can select from 1, 2, 4, 8, and 16 virtual processors and either 15 GB or 30 GB of RAM per processor. These options might be a bit limiting for some applications or operations but are optimized and configured for database and Java servers.

In this example we selected a 1 virtual processor, 15 GB of RAM, 5 GB of disk for the operating system, and Oracle Linux 6.4 as the operating system. We can enter tags so that we can associate this virtual machine with a target, production environment, system, or geographic location consuming the resources.

At this time we are not selecting any custom attributes and not using Orchestration to provision services, user accounts, passwords, or other services into our virtual machine. We click the “Next” button at the top of the screen to go to network configurations.

In the network configuration we can accept the defaults and have an IP address assigned to us. If we have an IP address on reserve we can consume that reserved address and even assign a name to it, resolving to linux6.mydomain.net if we wanted to map this to an internet name. In this example we just accept the defaults and try not to get too fancy on our first try. This will create an IP address for our server, open port 22 for ssh access, and allow us to network it to other cloud services inside our instance domain with local network routing.

The next step is to configure a disk to boot from. We are presented with the option of using a pre-configured existing disk or creating a new one. The list that we see here is a list of disks for the database and java servers that we previously created. We are going to click on the create new check box and have the system create the disk for us.

The storage property pull-down allows us to select the type of disk that we are going to use. If we are trying to boot from this disk, we should select the default option. If we were installing a database, we would select something like the iSCSI option to attach the disk as the data or fast recovery disk.

The final step is to upload the public key of our ssh key pair. This is probably the biggest difference between the three services. Amazon requires that you use the shared and secret key that they generate for you. Microsoft allows you to create a service without an ssh key and use a username and password to connect to the virtual machine. Oracle requires that you use the ssh public-private key pair that you generate with puttygen or ssh-keygen. The public key is uploaded at creation time (or selected if you have previously uploaded the key). The private key is presented via putty or ssh when connecting to the server once it is created. The default users created in the Oracle instances are the oracle account, which is part of the oinstall group, and the opc account, which has sudo privileges.

Once we have everything entered, we click on next and review the screen. Clicking on the “Create” button will create a compute instance with the configuration and specifications that we requested. In this example we are creating a Linux 6.4 instance on a 1 OCPU machine with 15 GB of memory and attaching a 5 GB disk drive for the operating system.

As the system is provisioning, it updates us on progress.


When everything is done we can view the details of the virtual machine and see the IP address, which key was used, and how the service is configured.

Before we can attach to the server, we need to open up the ssh port (port 22). This is done by going into the Network tab and adding a “Security Rule”. This rule needs to be defined as public internet access to our default security list, since we accepted the default network rules when creating the virtual machine.

Note in this example we selected ssh as the protocol, public internet as the source, and default as the destination. With this we can now connect using putty from our Windows desktop. We need to configure putty with the IP address of our virtual machine and our private key for the ssh connection. We could create a tunnel for other services, but in this example we are just trying to get a command line login to the operating system.
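From a Linux or Mac terminal, the equivalent of the putty setup is a one-liner; the key path and IP address below are placeholders for the values from your details page:

# connect as the opc user (which has sudo) with the private half of the uploaded key
ssh -i ~/.ssh/oracle_compute opc@129.152.0.10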


Note that we can confirm that we are running Linux 6.4 and have a 5 GB hard drive for the operating system.

The whole process takes a few minutes. This is relatively fast and can be scripted with curl commands. More on that later.

compute as a service

In an ongoing learning journey of trying to understand cloud services, I got accounts on the Amazon cloud, the Azure cloud, and the Oracle cloud. I thought I would start with the basics and grow from there. As an exercise, let’s create a Linux server with no software installed on each of the three platforms. Apart from creating an account on all three platforms (which was non-trivial), creating a compute server on each platform was relatively simple.

Amazon Web Services

The initial look and feel of the console starts the experience and shows what the three companies are focused on. Let’s start with Amazon (it is first in the alphabet and I had to pick something). The console lists a wide variety of services and things that you can purchase. Without doing research I would not have known that S3 (Simple Storage Service) stands for storage and EC2 (Elastic Compute Cloud) stands for compute.

I get what a virtual server in the cloud is, but how does that differ from a Docker container and why should I care? Why should I care about managing Web Apps if I am just looking for raw compute? Why do I want to run code outside of a virtual machine? Which one should I choose? We are not going to go into depth on any of these subjects. If we are just looking at running a Linux instance, the simple EC2 should be adequate. We can install Docker as a package in our Linux instance to help us control how much of a processor is allocated to a service or program. We can install applications like Tomcat or WebLogic to run Web Apps. Linux gives us the foundation to do all of this with packages. Lambda is a totally different beast in that I can run code snippets to do things like voice command interpretation for an Amazon Echo or asynchronous events from devices, and launch web sites or REST APIs without having to install, manage, and configure an operating system. The rest of the world calls this a Node.js function and offers it as a separate service as well. I realize that I am oversimplifying this, but you have to know what you are trying to accomplish before you start to create your first compute instance in the cloud.

Microsoft Azure Services

The Azure services are a little different in that they focus more on user creation of virtual machines, SQL Server, and some app services. Creation of a virtual image is relatively easy, and it makes sense what you are doing. The console is relatively simple and clean, with more options on the second page instead of the first page as is done with Amazon.

As you click on the Add button for Virtual Machine you get an expanded set of operating system options and configurations.

Note that you can search for Oracle and get a listing of various versions of Oracle Linux and the database. The virtual machine is easy to configure and create using the portal. If, however, you want to configure and create this via a command line, you need to download PowerShell and run everything inside that application. The command line is Microsoft specific and difficult to port and migrate to other services. With Amazon and Oracle you can easily use REST API calls to provision and create services. Microsoft makes it a little more difficult to script generically, but you can easily do this in their shell and language.

Oracle Cloud Services

The Oracle cloud compute services are new to the market. In the past, compute services have been sold in bundles of 500 processors but have recently been made available in single-processor consumption models. The cloud console has a different look and feel because the focus of the cloud services is more on the PaaS layer and less on the compute and storage layers.

Note that the screen shot starts with the storage and compute services but scrolling down shows database, java, SOA, and more PaaS layers.


To create a virtual image, you need to click on the Compute Cloud Service – Service Console, which allows you to create an instance. The operating system selection is not as graphical or user friendly as the Microsoft interface but does list a variety of operating system options and configurations.

In conclusion, all three of these cloud consoles allow you to create a virtual image. In the next blog entry we will walk through the steps needed to create a Linux 6 instance on each of the three cloud platforms. We will not talk about how to create accounts; we will assume that you can find account setup and creation on your own. All three sites offer “try me” services that give at least 30-day evaluations. The eventual recommendation will be to use services like bitnami.com, which takes public domain services like LAMP servers, wiki engines, blog servers, and other public domain tools. The Bitnami site allows you to select a pre-configured instance and provision it into all three of these cloud services along with a few other cloud providers.

printing from apex using pl/sql

As part of our exercise to convert an Excel spreadsheet to APEX, we ran into some simple logic that required more control flow and fewer database select statements. The question comes up of how to do an if-then-else statement in APEX. In this example we are trying to advise between using compute on demand or dedicated compute resources. With dedicated compute resources, we purchase 50 OCPUs as an aggregate. With compute on demand we can go as low as 1 OCPU. If we request 14 systems with 4 OCPUs each (56 OCPUs total), it might be better to request 50 dedicated OCPUs.

A typical question form would look like the following image, allowing us to ask for the processor shape as well as the quantity. If the total amount exceeds 50 processors, we output a message suggesting dedicated compute rather than compute on demand.

To get this message on the screen, we first had to pull in the questions that we ask using variables. In this example, we read in UNMETERED_COMPUTE_SHAPE, which is a pick list that allows you to select a 1, 2, 4, 8, or 16 OCPU shape. You can also type a quantity into UNMETERED_COMPUTE_QUANTITY. The product of these two values allows us to suggest dedicated compute or compute on demand for economic reasons.

To execute PL/SQL commands, we have to change the content type. To create this area we first create a sub-region. We change the name of the sub-region to represent the question that we are trying to answer; for this example we use the title “Compute on Demand vs Dedicated Compute” as the sub-region header. We then change the type to “PL/SQL Dynamic Content”. Under this we can then enter our dynamic code. The sub-region looks like the image below.

If you click on the expand button it opens a full editor allowing you to edit the code. In our example we are going to read the variables :UNMETERED_COMPUTE_SHAPE and :UNMETERED_COMPUTE_QUANTITY. Notice the colon in front of these names; this is how we treat the values as variables read from APEX. The code is very simple. It starts with a begin statement followed by an if statement. The if statement looks to see if we are allocating more than 50 processors. We then output a statement suggesting dedicated compute or compute on demand using the htp.p function call, which prints whatever is passed to it to the screen. The code is listed at the end of this entry.

Overall, this is a simple way of producing output that requires control flow. In the previous example we used a select statement to output calculations. In this example we are outputting different sections and different recommendations based on our selections. We could also set variables that would expose or hide different sub-regions below this section. This is done by setting :OUTPUT_VARIABLE := desired_value. If we set the value inside the PL/SQL code, we can hide or expose sections as we did in a previous blog by setting a value from a pull-down menu.

The code used to output the recommendation is as follows

BEGIN
  if (:UNMETERED_COMPUTE_SHAPE * :UNMETERED_COMPUTE_QUANTITY > 50) THEN
    htp.p('You might consider dedicated compute since you have '
          || :UNMETERED_COMPUTE_SHAPE * :UNMETERED_COMPUTE_QUANTITY
          || ' OCPUs which is greater than the smallest dedicated compute of 50 OCPUs');
  else
    htp.p('Compute on Demand for a total of '
          || :UNMETERED_COMPUTE_SHAPE * :UNMETERED_COMPUTE_QUANTITY || ' OCPUs');
  end if;
END;

converting excel to apex

I am trying to go through the exercise of converting an Excel spreadsheet into APEX and have stumbled across a few interesting tricks and tidbits.

One thing that I have noted is that stuff done in a spreadsheet can be automated via navigation menus in APEX. I talk about this in another blog on how to create a navigation system based on the parts of a service, designed to get you to the calculation that you need. This is much better if you don’t really know what you want and need to be led through a menu system to help you decide on the service that you are looking for.

Creating a calculator for metered and un-metered services in a spreadsheet requires two worksheets. You can tab between the two and enter data into each. If something like a pricelist is entered into its own worksheet, static references and dynamic calculations can be done easily. For example, we can create a worksheet for archive – metered storage services and a worksheet for archive – unmetered services, which will be blank since this is not a service that is offered. If we create a third worksheet called pricelist, we can enter the pricing for archive services into it and reference it from the other sheets. For archive cloud services you need to answer four basic questions: how many months, how much you will start archiving, how much you will end up with, and how much you expect to read back during that period. We should see the following questions

How Many Months? (cell F6)
Initial Storage Capacity (cell F7)
Final Storage Capacity (cell F8)
Retrieval Factor (cell F9)

The cost will be calculated as

Storage Capacity (per month): ((F8+F7+((F8-F7)/F6))*F6/2*price_of_archive_per_month)/F6
Storage Capacity (total): (F8+F7+((F8-F7)/F6))*F6/2*price_of_archive_per_month
Retrieval Cost (per month): (((F8+F7+((F8-F7)/F6))/2)*(F9/100))*price_of_archive_retrieval/F6
Retrieval Cost (total): (((F8+F7+((F8-F7)/F6))/2)*(F9/100))*price_of_archive_retrieval
Outbound Data Transfer (per month): sumifs(table lookup, table lookup, …)
Outbound Data Transfer (total): sumifs(table lookup, table lookup, …)*F6

In APEX, this is done a little differently, with a sequence of select statements and formatting statements to get the right answer

select
'   sub-part: ' || PRICELIST.PART_NUMBER ||
' - Archive Storage Capacity           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2, '$999,990') as PRICE
from PRICELIST PRICELIST
where PRICELIST.PART_NUMBER = 'B82623'
UNION
select
'   sub-part: ' || PRICELIST.PART_NUMBER ||
' - Archive Retrieval           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100), '$999,990') as PRICE
from PRICELIST PRICELIST
where PRICELIST.PART_NUMBER = 'B82624'
UNION
select
'   sub-part: ' || PRICELIST.PART_NUMBER ||
' - Archive Deletes           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:DELETE_ARCHIVE/100), '$999,990') as PRICE
from PRICELIST PRICELIST
where PRICELIST.PART_NUMBER = 'B82629'
UNION
select
'   sub-part: ' || PRICELIST.PART_NUMBER ||
' - Archive Small Files           ' as Description,
to_char(:SMALL_ARCHIVE, '$999,990') as PRICE
from PRICELIST PRICELIST
where PRICELIST.PART_NUMBER = 'B82630'
UNION
select
'   sub-part: ' || PRICELIST.PART_NUMBER ||
' - Outbound Data Transfer           ' as Description,
to_char(PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100), '$999,990') as PRICE
from PRICELIST PRICELIST
where PRICELIST.PART_NUMBER = '123456'
UNION
select
'   Total:' as Description,
to_char(sum(price), '$999,990') as Price
from (
select   PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2 as price from PRICELIST
where pricelist.part_number = 'B82623'
UNION
select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100) as price from PRICELIST
where pricelist.part_number = 'B82624'
UNION
select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:DELETE_ARCHIVE/100) as price from PRICELIST
where pricelist.part_number = 'B82629'
UNION
select PRICELIST.PRICE*1000*(:FINAL_ARCHIVE+:INITIAL_ARCHIVE+((:FINAL_ARCHIVE-:INITIAL_ARCHIVE)/12))*12/2*(:RETRIEVE_ARCHIVE/100) as price from PRICELIST
where pricelist.part_number = '123456'
);

The variable :INITIAL_ARCHIVE replaces F7, :FINAL_ARCHIVE replaces F8, and :RETRIEVE_ARCHIVE replaces F9. Rather than referring to the pricelist worksheet, we enter the pricing information into a database and do a select statement with the part_number as the key for the lookup. This allows for a much more dynamic pricebook and lets us update and add items without risk of breaking spreadsheet linkages. We can also use REST APIs to create and update pricing from an outside program to keep our price calculator current. With a spreadsheet, users can end up with out-of-date versions, and there really is no way of communicating updates to users who have downloaded the spreadsheet unless we are all using the same document control system.
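As a sketch of that update path, if the PRICELIST table were exposed through a REST endpoint such as ORDS (the host, path, and JSON shape below are hypothetical), a nightly job could adjust a price with a single call:

# hypothetical ORDS endpoint for the PRICELIST table; adjust host and path to your setup
curl -X PUT -H "Content-Type: application/json" \
  -d '{"part_number": "B82623", "price": 0.0012}' \
  https://apex.example.com/ords/pricing/pricelist/B82623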

Note that we can do running totals by doing a sum over a select … union statement. This allows us to compare two different services like Amazon Glacier and Oracle Archive easily on the same page. The only thing that we need to add is the cost of Glacier in the database and the select statements for each of the Glacier components. We can do this and use a REST API service nightly or weekly to verify the pricing of the services and keep the information up to date.

The select statements that we use are relatively simple. The difficult part is the calculation and the formatting of the output. For the bulk of the select statements we are passing in variables entered into a form and adding or multiplying values to get quantities of objects that cost money. We then look up the price from the database and print out dollar or quantity amounts of what needs to be ordered. The total calculation is probably the most complex because it uses a sum statement that takes the results of a group of select statements and reformats it into a dollar or quantity amount.

An example of the interfaces would look like the images below:

a traditional spreadsheet

and in Application Express 5.0

pulling X-Auth-Token from login

I am a big scripting guy. I believe in automating as much as possible, having a program do as much as possible, and typing as little as possible. I find it easier to use command lines than drag-and-drop interfaces. I have been struggling with how to script the REST APIs for Oracle Cloud Services and wanted to get some feedback on different ways of doing this. I wanted to script the creation of a database for some workshops that I typically give. The first step is creating the storage containers for the database backup.


Most of the information that I got came from an online tutorial about creating storage containers. I basically boiled this information down and customized it a little to script everything.

First, authentication can be obfuscated by hiding the username and password in environment variables. I typically use a Mac, so everything works well in a Terminal window. On Windows 7 I use Cygwin-64, which includes Unix-like commands that are good for scripting. The first step is to hide the username, identity domain, and password in environment variables.

  • export OPASS=password
  • export OUID=username
  • export ODOMAIN=identity_domain

In my case, the identity domain is metcsgse00026. The username is cloud.admin. The password is given to me when I log into the demo.oracle.com system corresponding to this identity domain. What I would type in is

  • export OPASS=password
  • export OUID=cloud.admin
  • export ODOMAIN=metcsgse00026

The first step required is authentication. You need to log into the cloud service using the REST API to generate an X-Auth-Token. This is done with a GET command using curl.

curl -v -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" -H "X-Storage-Pass: $OPASS" https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

Note the -v is for verbose and displays everything. If you drop the -v you don’t get back the return headers. Passing -i might be a better option, since -v echoes the user password while -i only replies with the headers that you are interested in.

curl -i -X GET -H "X-Storage-User: Storage-$ODOMAIN:$OUID" -H "X-Storage-Pass: $OPASS" https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0

In our example, this returned

HTTP/1.1 200 OK
date: 1458658839620
X-Auth-Token: AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
X-Storage-Token: AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
X-Storage-Url: https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026
Content-Length: 0
Server: Oracle-Storage-Cloud-Service

When you take this output and try to strip the X-Auth-Token from the header, you get a strange progress display mixed in; adding -s suppresses the progress meter (we use -is so the headers are kept).

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

If you add grep "X-Auth-Token" followed by awk '{print $2}' you get back just the AUTH_ string, which is what we are looking for.


curl -is -X GET -H "X-Storage-User: Storage-metcsgse00026:cloud.admin" -H "X-Storage-Pass: $OPASS" https://metcsgse00026.storage.oraclecloud.com/auth/v1.0 | grep "X-Auth-Token" | awk '{print $2}'

AUTH_tkf4e26780c9e6b1d171f3dbeafa194cac
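Putting it all together, a small script (a sketch, assuming the environment variables above are set) can capture the token into a shell variable for the rest of the session:

#!/bin/bash
# authenticate and stash the X-Auth-Token for later REST calls;
# tr strips the trailing carriage return from the HTTP header
export OTOKEN=$(curl -is -X GET \
  -H "X-Storage-User: Storage-$ODOMAIN:$OUID" \
  -H "X-Storage-Pass: $OPASS" \
  https://$ODOMAIN.storage.oraclecloud.com/auth/v1.0 \
  | grep "X-Auth-Token" | awk '{print $2}' | tr -d '\r')
echo "Got token: $OTOKEN"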

accessing oracle cloud storage from command line


Now that we have the cost and use out of the way, let’s talk about how to consume these services. Unfortunately, consuming raw blocks, either tape or spinning disk, is difficult in the cloud. Amazon offers you an S3 interface and exposes the cloud services as an iSCSI interface through a downloadable object or via REST API services. Azure offers something similar with REST API services but offers SMB downloadable objects to access the cloud storage. Oracle offers REST API services but offers NFS downloadable objects to access the cloud storage. Let’s look at three different ways of consuming the Oracle Cloud services.

The first way is to use the REST API. You can consume the services by accessing the client libraries using Postman from Chrome or RESTClient from Firefox. You can also access the service from the curl command line.

curl -v -X GET -H "X-Storage-User: Storage-metcsgse00026:cloud.admin" -H "X-Storage-Pass: $OPASS" https://metcsgse00026.storage.oraclecloud.com/auth/v1.0

In this example we are connecting to the identity domain metcsgse00026. The username that we are using is cloud.admin. We store the password in an environment variable OPASS and pull in the password when we execute the curl command. On Linux or a Mac, this is done from the pre-installed curl command. On Windows we had to install Cygwin-64 to get the curl command working. When we execute this curl command we get back an AUTH header that can be passed in to the cloud service to create and consume storage services. In our example above we received back X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008, which is valid for 30 minutes. The next step would be to create a storage container

curl -v -s -X PUT -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer

This will create myFirstContainer and allow us to store data either with more REST API commands or with tools like CloudBerry or NFS. More information about how to use the REST API services can be found in an online tutorial.
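As a sketch of the next step, uploading a file into the new container follows the same pattern; the file name here is just an illustration:

# upload a local file into myFirstContainer; -T streams the file body on a PUT
curl -v -s -X PUT -T backup.tar.gz \
  -H "X-Auth-Token: AUTH_tk928cf3e4d59ddaa1c0a02a66e8078008" \
  https://storage.us2.oraclecloud.com/v1/Storage-metcsgse00026/myFirstContainer/backup.tar.gz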

The second way of accessing the storage services is through a program tool that takes file requests on Windows and translates them to REST API commands on the cloud storage. CloudBerry has an explorer that allows us to do this. The user interface looks like the screen shot below and is set up with the File -> Edit or New Accounts menu item. You need to fill out the access information as shown. Note that the username is a combination of the identity domain (metcsgse00026) and the username (cloud.admin). We could do something similar with the Postman or RESTClient extensions to browsers. Internet Explorer does not have plug-ins that allow for REST API calls.

The third, and final, way to access the storage services is through NFS. Unfortunately, Windows does not offer NFS client software on desktop machines, so it is a little difficult to show this as a consumable service. Mac and Linux offer these services by mounting an NFS server as a network mount. Oracle currently does not offer SMB file shares to their cloud services, but it is on the roadmap. We will not dive deep into the Oracle Storage Cloud Appliance in this blog because it gets a little complex with setting up a VM and installing the appliance software. The documentation for this service is a good place to start.
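For reference, once the appliance is running, the client side is a standard NFS mount; the host and export names below are hypothetical:

# mount the storage appliance export on a Linux client (names are placeholders)
sudo mkdir -p /mnt/oraclecloud
sudo mount -t nfs storage-appliance.example.com:/myFirstContainer /mnt/oraclecloud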

In summary, there are a variety of ways to consume storage services from Oracle. They are typically program interfaces and not file interfaces. The service is cost advantageous when compared to purchasing spinning disks from companies like Oracle, NetApp, or EMC. Using the storage appliance gets rid of the latency issues that you typically face and the difficulty in accessing data from a user perspective. Overall, this service provides higher reliability than on-premises storage, lower cost, and less administration overhead.

accessing cloud storage

Oracle cloud storage is not the first product that performs basic block storage in the cloud. The name is a little confusing as well. When you think of cloud storage, the first thing that you think of is Dropbox, box.com, Google Docs, or some other file storage service. Oracle Cloud Storage is a different kind of storage. This storage is more like Amazon S3 storage and less like file storage, in that it provides the storage foundation for other services like compute, backup, or database. If you are looking for file storage you need to look at the Document Cloud Storage Services, which are more tied to processes and less tied to raw cloud storage.
In this blog we will look at different ways of attaching to block storage in the cloud and the different ways of creating and consuming the services. To start off with, there are two ways to consume storage in the Oracle Cloud: metered and un-metered. Metered is charged on a per-hourly/monthly basis and you pay for what you consume. If you plan on starting with 1 TB and growing to 120 TB over a 12 month period, you will pay on average for 60 TB over the year. If you consume this same service as an un-metered service you will pay for 120 TB of storage for 12 months, since you eventually cross the 1 TB boundary some time during the year.

With the metered services you also pay for the data that you pull back across the internet to your computer or data center, but not for the initial load of data to the Oracle Cloud. This differs from Amazon and other cloud services that charge both for upload and download of data. If you consume the resources in the Oracle Cloud from other cloud services like compute or database in the same data center, there is no charge for reading the data from the cloud storage. For example, if I use a backup software package to copy operating system or database backups to the Oracle Cloud Storage and restore these backups into compute servers in the Oracle Cloud, there is no charge for restoring the data to the compute or database servers.

To calculate the cost of cloud storage from Oracle, look at the pricing information on the cloud web page for metered and un-metered pricing.

If we do a quick calculation of the pricing for our previous example, where we start with 1 TB and grow to 120 TB over a year, we can see the price difference between the two solutions, but also note how much reading the data back will eventually cost. This is something that Amazon hides when you purchase their services, because you get charged for the upload and the download. Looking at this example we see that 120 TB of storage will cost us $43K per year with un-metered services but $36K per year for metered services, assuming a 20% read of the data once it is uploaded. If the read-back number doubles, so does the cost, and the price jumps to $50K. If we compare this cost to a $3K-$4K/TB cost of on-site storage, we are looking at $360K-$480K plus $40K-$50K in annual maintenance. It turns out it is significantly cheaper to grow storage into the cloud rather than purchasing a rack of disks and running them in your own data center.

The second way to consume storage cloud services is by using tape in the cloud rather than spinning disk in the cloud. Spinning disk on average costs $30/TB/month whereas tape averages $1/TB/month. Tape is not offered as an un-metered service, so you do need to look at how much you read back, because there is a charge of $5/TB to read the data back. This compares to $7/TB/month with Amazon plus the $5/TB upload and download charges.

More ITIL notes

Configuration management fits between change management and release management. It ensures that the assets of the IT group are recorded, change is done with minimal risk, and data integrity is maintained.


The goals are to account for all IT assets and configurations within the organization and its services; to provide accurate information on configurations and their documentation to support all other service management processes; to provide a sound basis for incident management, problem management, change management, and release management; and to verify configuration records against the infrastructure and correct any exceptions.


Service level management and SLAs are the customer-facing part of the IT department. Configuration management organizes this data and allows us to organize and present it. This correlates to asset management and relationship management with the user community. Some companies keep asset management in spreadsheets that are used for taxes and depreciation tracking but not for asset tracking. It typically is an afterthought or something that is done once a year. This is a bad practice that is typically driven by the accounting department; it should be driven by the IT department on a daily basis, not yearly.


The configuration information should be stored in the configuration management database (CMDB). It should include hardware, software, peopleware, and documentation. It should also include services and the relationships between configuration items as well as incidents, problems, and known errors. It should also include a history of all changes and releases. Historically this has been kept in a journal or notebook and done in different formats based on the note-keeping ability of the administrators. This format and repository need to be standardized and centralized. Deployment of an asset management package typically does not include hooks into the help desk, requests for change, and release announcements. It does, however, typically resolve accounting issues that upper level management has and gets funded more easily than a configuration management system.


The configuration management system should have linkages into the definitive software library (typically CVS or Subversion) and the definitive hardware store (typically the enterprise management interface in the IT organization).


The first configuration management activity is planning. This should include strategy, policy, scope, and objectives. It should also include processes, procedures, guidelines, and responsibilities. It also includes relationships with other ITIL processes and relationships with other parties, as well as tool and resource requirements. The objectives should be simple, measurable, achievable, realistic, and timely. Many of these categories should be boilerplate items, because processes, procedures, and guidelines should not change very much between projects.


The second configuration management activity is identification. It is important to define what level of detail is needed to identify an item. The level of detail is what differentiates companies from each other. Higher levels of detail require more time and typically more cost. Less detail typically leads to differentiation of services and non-uniformity between systems. In defining relationships it typically helps to define the composition, connection, and usage relationships. For example, a workstation typically is composed of a keyboard, processor unit, and monitor. It is typically connected to a network hub, network server, file server, and print server. It is typically used by a user but could be time-shared at night for batch processing of jobs.


The third configuration management activity is control. The subcomponents of this are registration, updating, archiving, and protection. When we receive new equipment, we need to register it. If this equipment is changed, it should be updated. For example, if we get a computer and install new memory in the system, we need to register the computer and update the definition when we add memory to it. Archiving is backup of the CMDB to a backup repository and might or might not involve pruning of data as the backup is done. Protection keeps changes from being made to the repository without proper authorization. It also keeps the data from being stolen or corrupted.


The fourth configuration management activity is status accounting. This is the reporting of all current and historical data concerning each configuration item throughout its lifecycle. It helps create configuration baselines as well as analyze the risk and cost that changes create in an organization.


The fifth and final activity is verification. This is to verify the data in the CMDB and make sure that it is accurate. This is typically done on a regular basis and not once a year as is required by accounting. It is important to make sure that verification is done before equipment is moved or new software releases are made. It is also important to verify configurations after disasters, since changes can be made in times of emergency and not recorded.


Change management is the next major component of ITIL. Change is the process of moving from one defined state to another. Change management is used to ensure that standardized methods and procedures are used for efficient and prompt handling of all changes, in order to minimize the impact of any related incidents upon service. It is responsible for implementing changes in the organization with a minimum of disruption. It also allows for announcements of what changes will be made, when the changes will happen, and a definition of when and where a backout plan should be implemented if problems occur. This helps maintain a good balance between the benefits of change and the risks associated with change. Layered on top of this is an approval process that tracks and manages change requests. It typically involves some type of review, a go/no-go decision, and resource commitment before a change is begun. It typically is integrated with capacity management, availability management, and configuration management. When a change is proposed to a system, it is moved from configuration management into change management. When the project is approved, it is moved into the release management process so that it can be rolled out as a new configuration. The capacity and availability management systems are also integrated into change management so that existing operating parameters can be analyzed to figure out if the changes will positively or negatively impact the service.


A typical trigger that initiates change management is a request for change that comes from the service desk, problem management, or changes to CIs made by engineering. New business requirements can also generate change requests. Legislation and corporate changes typically mandate change requests as well.


Changes are typically categorized as urgent, minor, significant, and major. Urgent requests are typically handled by an emergency group that handles change requests quickly. Minor changes are typically approved by a larger group and done through collaboration tools like email or shared files. Significant change requests typically require discussion and reviews of the changes. Major changes usually require higher levels of management to get involved, because they have a larger impact on the organization and typically require more resources and assets.


Typical metrics for change management are the number of changes, the number of changes backed out and why, cost per change vs estimated cost, and the number of urgent changes. Other items that need to be measured are the time from RFC to release, the number of items reviewed by the review board, and the items handled by the change manager.


Release management is the final component of support services. This is defined as a way of taking a holistic view of a change to an IT service and ensuring that all aspects of a release, both technical and non-technical, are considered together. It includes the software, hardware, and documentation required.


Release management typically incorporates processes in the development, test, and production environments. Release management typically manages the definitive software library and the definitive hardware store. It is also important that the release manager be integrated with compliance checks as well as license agreements.