For the last four days we have been focusing on Database as a Service in the cloud. We focused on Application Express, or Schema as a Service, in the last three days and looked at pricing and how to get APEX working in the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. With the Oracle Public Cloud we have three options for database in the cloud at the platform as a service layer: Schema as a Service, Database as a Service, and Exadata as a Service. We could run a database on compute as a service, but we have already discussed the benefits of offloading some of the database administration work (backups, patching, restarting services, etc.) to platform as a service.
The question that we have not adequately addressed is how you choose between the three services offered by Oracle. We touched on one of the key questions, database size, when we talked about Schema as a Service. You can have a free database in the cloud if your database is smaller than 25 MB. It will cost you a little money, $175/month, if you have a database smaller than 5 GB. You can grow this to 50 GB and stay with Schema as a Service. If your database is larger than 50 GB you need to look at Database as a Service or Exadata as a Service. You also need to look at these alternatives if you are running an application in a Java container and need to attach to the database through the standard port 1521, since Schema as a Service only supports http(s) connections to the database. If you can query the database with a REST API call, Schema as a Service is an option, but it is not necessarily tuned for performance.

Products like WebLogic, Tomcat, or other Java containers can buffer select results in cache so they do not have to ask the database the same question over and over again. For example, if we query census data and are interested in the number of people who live in Texas, we get back roughly 27 million rows of data. If we want to drill down and look at how many people live in San Antonio, we get back 1.5 million rows. If our Java code were smart enough and our application server had enough buffer space, we would not need to read the 27 million rows back when we just want to look at the 1.5 million rows relating to San Antonio. The database can keep the data in memory as well and does not need to read it back from disk to find the state or city rows that match the query.
Let’s take a step back and talk about how a database works. We create a table and put information in columns like first name, last name, street address, city, state, zip code, email address, and phone number. This allows us to contact each person through snail mail, email, or phone. If we allocate 32 bytes for each field, we have 8 fields and each row takes up 256 bytes to identify each person. If we store data for each person who lives in Texas we consume 27 million rows, and at 256 bytes per row the whole table fits into 6.9 GB of storage. This data is stored in a table extent or file that we save into the /u02/data directory. If we expand our database to store information about everyone who lives in the United States we need 319 million rows, which expands our database to 81.7 GB. Note that we have crossed the boundary for Schema as a Service. We can’t store this much information in Schema as a Service, so we have to look at Database as a Service or Exadata as a Service.

Yes, we can optimize our database by using less than 32 bytes per column. We can store zip codes in 16 bytes. We can store phone numbers in 16 bytes. We can store state information in two bytes. We can also use compression in the database and not store the characters “San Antonio” in a 32 byte field, but store the string once in an alternate table and correlate it to the hexadecimal value 9c. We then store 9c in the city field, which tells us that the city name is stored in another table. This saves us 1.5 million times 31 bytes (one byte remains to store the 9c), or roughly 46 MB of storage. If we can do this for everyone in Texas we shrink the storage by roughly 840 MB, which is about 12% of what we had allocated for all of the information related to people who live in Texas. If we can do this for the city, state, and zip code fields we can reduce the storage required by about 36%, shrinking the 81.7 GB to roughly 52 GB. This is basically what is done with a technology called Hybrid Columnar Compression (HCC). You create a secondary table that correlates the 9c value to the character string “San Antonio”. You only need to store the character string once, and the city information shrinks from 32 bytes to 1 byte per row. When you read back the city name, the database or storage that does the compression expands the value and returns the string to the application server or application.
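To make the arithmetic and the dictionary idea concrete, here is a minimal sketch. The table and column names are hypothetical, not from any real census schema, and the sizing query simply restates the math above.

create table census (
  first_name     char(32),
  last_name      char(32),
  street_address char(32),
  city           char(32),
  state          char(32),
  zip_code       char(32),
  email_address  char(32),
  phone_number   char(32)
);
-- char(32) allocates a fixed 32 bytes per field, 256 bytes per row as in the example

-- dictionary-style lookup: store "San Antonio" once and reference it by a one byte key
create table city_lookup (
  city_key  raw(1) primary key,   -- e.g. the hex value 9C
  city_name varchar2(32)          -- e.g. 'San Antonio'
);

-- rough storage estimates: 319 million rows at 256 bytes per row,
-- then with city, state, and zip each reduced from 32 bytes to 1 byte
select 319000000 * 256 / 1e9            as us_gb,
       319000000 * (256 - 3 * 31) / 1e9 as compressed_gb
  from dual;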
When you do a select statement, the database looks at the table you are selecting from, finds the rows that match the where clause, and returns the columns you asked for. In our example we might use
select * from census where state = 'Texas';
select * from census where city = 'San Antonio';
We can restrict what we get back by not using the “*” value. We can get just the first_name, last_name, and phone_number if that is all we are interested in. The full select statement for San Antonio will return 1.5 million rows times 8 columns times 32 bytes, or 384 MB of data. A good application server will cache this 384 MB of data, and if we issue the same select statement again in a few seconds or minutes we do not need to ask the database again; we issue a simple request to the database asking it if anything has changed since the last query. If we are running on a slow internet connection, as we find in our homes, we are typically running at 3 MB/second download speeds. Transferring all of this data will take us 128 seconds, or about two minutes. Not reading the data a second time saves us two minutes.
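As a sketch using the same hypothetical column names, the narrower select and the transfer-time arithmetic look like this; the timing query just restates the 384 MB and 3 MB/second figures above.

select first_name, last_name, phone_number
  from census
 where city = 'San Antonio';

-- full-row result: 1.5 million rows x 8 columns x 32 bytes, pulled over a 3 MB/second link
select 1500000 * 8 * 32 / 1e6 as result_mb,
       1500000 * 8 * 32 / 3e6 as transfer_seconds
  from dual;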
The way that the database finds which 384 MB to return to the application is similar. It looks at all of the 81.7 GB that stores the census data and compares the state name to ‘Texas’, or the hex value corresponding to the state name. If the compare matches, that row is put into a response buffer and transmitted to the application server. If someone comes back a few seconds later and requests the information correlating to the city name ‘San Antonio’, the 81.7 GB is read from disk again and the 384 MB is pulled out to return to the application server. A smart database will cache the Texas data, recognize that San Antonio is a subset of Texas, and pull the data from memory rather than reading the 81.7 GB from disk a second time. This can easily be done by partitioning the data in the database, storing the Texas data in one file or disk location and the data correlating to California in another file or disk location. Rather than reading back 81.7 GB to find the Texas data we only need to read back 6.9 GB, since it has been split out in storage. For a typical SCSI disk attached to a computer, we read data back at 2.5 GB/second. Reading back all of the US data takes us 33 seconds; reading back just the Texas data takes us 2.76 seconds. We basically save 30 seconds by partitioning our data. If we read the Texas data first and the San Antonio data second with our select statements, we can cache the 6.9 GB in memory and not have to perform a second read from disk, saving us yet another 33 seconds (or 3 seconds with partitioned data). If we know that we will be asking for San Antonio data on a regular basis, we can set up an index or materialized view in the database so that we don’t have to sort through the 6.9 GB of data but can read just the relevant 384 MB the first time, reducing our disk access time to 0.15 seconds.

It is important to note that we have done two simple things that reduced our access time from 33 seconds to 0.15 seconds. First, we partitioned the data, splitting the way it is stored in the file system by state. Second, we created an index that helps us access the San Antonio data in the file associated with Texas without having to sort through all of the data; we effectively pre-sort the data and provide the database with an index. The cost of this is that every insert command that adds a new person to San Antonio requires not only updating the Texas table but also updating the index associated with San Antonio. In fact, whenever we insert data we must check whether it goes into the Texas table and update the index at the same time, whether or not the information correlates to San Antonio, because the index might change if data is inserted or updated in the middle of the file associated with the Texas information.
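A minimal sketch of that partitioning and indexing, recreating the hypothetical census table with list partitioning by state. Object names are made up, only a few columns and two state partitions are shown, and this is one way to express the idea rather than a prescribed design.

-- the census table again, now list partitioned by state (columns trimmed for brevity)
create table census_part (
  first_name varchar2(32),
  last_name  varchar2(32),
  city       varchar2(32),
  state      varchar2(32)
)
partition by list (state) (
  partition p_texas      values ('Texas'),
  partition p_california values ('California'),
  partition p_other      values (default)
);

-- a local index on city lets the database find the San Antonio rows inside
-- the Texas partition without scanning the whole 6.9 GB
create index census_part_city_idx on census_part (city) local;

-- or pre-compute the result we ask for regularly
create materialized view san_antonio_mv as
  select * from census_part where city = 'San Antonio';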
Our original question was how do we choose between Schema as a Service, Database as a Service, and Exadata as a Service. The first metric that we used was table size. If our data is greater than 25 MB, we can’t use the free APEX service. If our data is greater than 50 GB, we can’t use the paid APEX or Schema as a Service. If we want to use features like compression or partitioning, we can’t use Schema as a Service either unless we have sys access to the database. We can create indexes for our data to speed up requests, but we might or might not be able to set up compression or partitioning since these are typically features associated with the Enterprise Edition of the database. If we look at the storage limitations of Database as a Service, we can currently store 4.8 TB worth of data in the database. If we have more data than that we need to go to Exadata as a Service. The Exadata service comes in different flavors as well and allows you to store up to 42 TB with a quarter rack, 84 TB with a half rack, and 168 TB with a full rack. If you have a database larger than 168 TB, there are no solutions in the cloud that can store your data attached to an active database. You can back up your data to cloud storage, but you cannot have an active database attached to it.
If we look a little deeper into Exadata, there are multiple advantages to going with Exadata as a Service. The first and most obvious is that you are suddenly working on dedicated hardware. In most cloud environments you share processors with other users as well as storage. You do not get dedicated bandwidth from processor to disk but must time-share it with other users. If you provision a 16 core system, it will typically consume half of a 32 core system that has two sockets. This means that you get a full socket but have to share the memory and disk bandwidth with the next person running in the same server. The data read from disk is cached in the disk controller’s cache, and your reads are optimized until someone else reads data from the same controller and your cached data gets flushed to make room. Most cloud vendors go with commodity hardware for compute and storage, so they are optimized not for database but for general purpose compute. With Exadata as a Service you get hardware optimized for database and you get all of the processors in the quarter, half, or full rack. There is no competing for memory bandwidth or storage bandwidth. You are electrically isolated from someone in the other quarter or half rack through the InfiniBand switch. Your data is isolated on spindles of your own. You get the full 40 GB/second to and from the disk. Reading the 81.7 GB takes 2.05 seconds compared to 32.68 seconds through a standard SCSI disk controller. The data is partitioned and stored automatically, so when we ask for the San Antonio data the storage returns only the 384 MB of matching rows; we don’t need to ship back all of the data or deal with the index update delays when we write the data. A scan across all 81.7 GB returns its results in 0.01 seconds. We effectively take the 33 seconds it took us previously and drop it to 10 ms.
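The scan-time figures are simple throughput arithmetic; as a rough back-of-the-envelope check using the 2.5 GB/second SCSI figure from earlier and the 40 GB/second Exadata figure:

select 81.7 / 2.5 as scsi_scan_seconds,     -- roughly 32.7 seconds
       81.7 / 40  as exadata_scan_seconds   -- roughly 2 seconds
  from dual;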
If you want to learn more about Exadata and how and why it makes queries run faster, I would recommend the following books:
- Expert Oracle Exadata
- Oracle Exadata Survival Guide
- Oracle Exadata Recipes: A Problem-Solution Approach
- Achieving Extreme Performance with Oracle Exadata
- Oracle Exadata Expert’s Handbook
Exadata as a Service is a unique offering in the cloud. Amazon and Microsoft have nothing that compares to it. Neither company offers dedicated compute that is specifically designed to run a database in the cloud with dedicated disk and dedicated I/O channels. Oracle offers this service to users of the Enterprise Edition of the database, allowing them to replicate their on-premise data to the cloud, ingest the data into an Exadata in the cloud, and operate on the data and processes unchanged and unmodified in the cloud. You could take your financial data that runs on an 8 or 16 core system in your data center and replicate it to an Exadata in the cloud. Once you have the data there you can crunch on it with long running queries that would take hours on your in-house system. We worked with a telecommunications company years ago that used an on-premise transportation management system to generate an inventory load list to put parts on their service trucks, work orders for the maintenance repair staff, and a driving list to route the drivers on the optimum path to cover the largest number of customers in a day. The on-premise system took 15-16 hours to generate all of this workload and was prone to errors and outages, requiring the drivers to delay their routes or parts in inventory to be shipped overnight for loading onto the trucks in the morning. Running this load on an Exadata dropped the analytics to less than an hour. This allowed trucks to be rerouted mid-day to higher profit customers to handle high priority outages, and allowed inventory to move between warehouses as next day deliveries rather than rush orders. Reducing the analytics from 15 hours to less than an hour allowed an expansion of services as well as a higher quality of service to their customer base.
Not all companies have daily issues like this; some look for higher level processing once a quarter or once or twice a year. Opening new retail outlets, calculating taxes due, or provisioning new services that were purchased as Christmas presents are three examples of predictable, periodic events where it makes more sense to consume a larger footprint in the cloud than to invest in resources that sit idle most of the year in your data center. Having the ability to lease these services on a monthly or annual basis not only allows for better utilization of resources in your data center but also reduces the overall spend of the IT department and expands the capabilities of business units to do things that they normally could not afford.
Exadata as a Service is offered in a non-metered configuration at $40K per month for a quarter rack (16 cores and 144 TB of disk), $140K per month for a half rack (56 cores and 288 TB of disk), or $280K per month for a full rack (112 cores and 576 TB of disk). The same service is offered on a metered basis at $80K for a quarter rack, $280K for a half rack, and $560K for a full rack (in the same configurations as the non-metered service). One of the things that we recommend is that you analyze the cost of this service. Is it cheaper to effectively lease a quarter rack at $80K for a month and get the results that you want, to effectively lease a quarter rack at $480K for a year, or to purchase the hardware, database license, RAC licenses, storage cell licenses, and other optional components to run this in your data center? We will not dive into this analysis because it truly varies based on the use case, the value of the use case to your company, and the cost of running one of these services in your data center. It is important to do this analysis to figure out which consumption model works for you.
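As a trivial starting point for that comparison, using only the list prices quoted above (a full analysis would add licensing, hardware, and operational costs for the on-premise option):

-- quarter rack: metered for one month versus non-metered for a full year
select 80000      as metered_one_month_usd,
       40000 * 12 as non_metered_one_year_usd
  from dual;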
In summary, Exadata as a Service is a unique service that no other cloud vendor offers. Having dedicated hardware to run your database is unique for cloud services, and having hardware that is optimized for long, complex queries is unique as well. Exadata is one of the most popular hardware solutions offered by Oracle, and having it available on a monthly or annual basis allows customers to use the service at a much lower cost than purchasing a box, or a larger box, for their data center. Having Oracle manage and run the service frees up your company to focus on the business impact of the hardware and accelerated database rather than spending months administering the server and database. Tomorrow we will dive into Database as a Service and see how a generic database in the cloud has a variety of use cases and different cost entry points as well as features and functions.