Database Alternatives

One of the questions I get asked on a regular basis is to justify the cost of a product. Why not use freeware? Why not put things together with free stuff? When I worked at Texas A&M and Rice University we looked first at public domain software. We heavily used the Apache web server, Tomcat, MySQL, PostgreSQL, Linux, and BSD. These applications worked up to a point. Yes, you can spin up one Apache web server on one server. Yes, you can have one Apache web server listen on multiple IP addresses and host multiple web sites. The issue typically is not how many web servers you can run but how many clients you can answer. Easily 90% of the web servers could handle the load they saw on a regular basis. We spent 80% of our time on the 10% that could not handle the load. Not all of the web servers could handle the functionality, either. For example, a student registration system needs to keep a shopping cart of selected classes, and you need to step up to an Apache Tomcat server to keep that data and the database connections persistent. If you use a plain web server you need to store everything in the database: every transaction, every class selected, and every fee associated with the class. Every interaction with the web server causes multiple connections to the database server. Doing this drives up the number of processors needed by the database, which in turn drives up the cost of the hardware and the software license.
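To make the contrast concrete, here is a minimal sketch of the Tomcat approach, where the shopping cart lives in the servlet session and intermediate clicks never touch the database. The class name, the "cart" attribute, and the classId parameter are made up for illustration, and the javax.servlet package assumes an older Tomcat release.

    // Minimal sketch: keep the cart in the Tomcat session instead of writing
    // every selection to the database as a separate transaction.
    // The servlet name and request parameters are hypothetical.
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;
    import java.util.ArrayList;
    import java.util.List;

    public class CartServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
            HttpSession session = req.getSession(true);
            @SuppressWarnings("unchecked")
            List<String> cart = (List<String>) session.getAttribute("cart");
            if (cart == null) {
                cart = new ArrayList<>();
                session.setAttribute("cart", cart);
            }
            // No database round trip here; the class id just lands in memory.
            cart.add(req.getParameter("classId"));
            // Only a final "submit schedule" action would open a database
            // connection and commit the whole cart in one transaction.
        }
    }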

If we use an application server that can handle caching of data, we can keep a list of available classes on the application server and only have to go back to the database server for transactions. When a student selects a class, the application takes it out of inventory and puts it in their class schedule for the next year. The same is true for online shopping, purchasing tickets to a play or an airline flight, or drafting a fantasy football team. Years ago ESPN ran a March Madness contest online. They presented your selections with an Apache web server, and every team selection required an interaction with their database on the back end. The system performed miserably, and it took hours to select all of the rounds and fill out your bracket. They updated the site with JavaScript and a Tomcat server and allowed you to fill out all of round one in your browser. Once you finished the first round you submitted your selections and were presented with a round two based on your first-round picks. They later moved this to WebLogic and put all of the round selections in Java code on the WebLogic server. The single interaction with the database became the submission of your complete bracket. They went from thousands of interactions with the database to a single interaction per submission.
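A rough JDBC sketch of the same pattern, assuming a generic DataSource and hypothetical class_catalog and enrollment tables: the read-mostly catalog is served from application memory, and the database sees a single transaction when the finished selection is submitted.

    // Sketch only: cache read-mostly data in the application tier and defer
    // database writes to one batched transaction. Table and column names are
    // assumptions, not taken from any real registration system.
    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;

    public class CatalogCache {
        private final DataSource ds;
        private final AtomicReference<List<String>> cache = new AtomicReference<>();

        public CatalogCache(DataSource ds) { this.ds = ds; }

        // Read the catalog once, then serve it from memory.
        public List<String> availableClasses() throws Exception {
            List<String> cached = cache.get();
            if (cached != null) return cached;
            List<String> classes = new ArrayList<>();
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT class_id FROM class_catalog WHERE seats_open > 0");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) classes.add(rs.getString(1));
            }
            cache.set(classes);
            return classes;
        }

        // One transaction per completed schedule instead of one per click.
        public void submitSchedule(String studentId, List<String> classIds) throws Exception {
            try (Connection c = ds.getConnection()) {
                c.setAutoCommit(false);
                try (PreparedStatement ps = c.prepareStatement(
                         "INSERT INTO enrollment (student_id, class_id) VALUES (?, ?)")) {
                    for (String classId : classIds) {
                        ps.setString(1, studentId);
                        ps.setString(2, classId);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                c.commit();
            }
        }
    }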

We can have similar architecture discussions at the database layer as well. If I am looking at a simple table lookup, why pay for a robust database like Oracle 12c? Why not use something like Azure Table Storage and do a simple select from a file store? Why not put this in the free version of Oracle APEX on the web and define a REST API to pull the data with a simple, or potentially more complex, select statement? Again, 90% of the problems can be solved with simple solutions. Simple table lookups, like translating a part name to a price, can be done with Excel, MySQL, APEX, JSON processing, or REST APIs. The difficulty comes with the remaining 10%. How do I correlate multiple tables to figure out the price of an item based on the cost of inventory, cost of shipping, electrical costs, compensation for contractors and sales people, and the other factors that determine profitability and pricing? How do I compute the shortest route for a trucking system based on traffic, customer orders, inventory in a warehouse, the size of a truck, and the salaries of the driver and the loading dock personnel? For things like this you need a more complex database that can handle multi-table joins, spatial data, and pulling in road conditions and traffic patterns from external sources. Products like IBM DB2, Oracle Database, and Microsoft SQL Server can address some of these issues.
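The gap between the 90% and the 10% shows up directly in the queries themselves. A hedged sketch with hypothetical part_price, inventory, shipping, and overhead tables: the first lookup could be served by almost any store, while the second needs an engine that can join and optimize across several tables.

    // Illustration only; the schema is invented to show the difference between
    // a single-table lookup and a derived, multi-table cost calculation.
    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.math.BigDecimal;

    public class PriceLookup {
        private final DataSource ds;

        public PriceLookup(DataSource ds) { this.ds = ds; }

        // The simple 90%: one table, one key.
        public BigDecimal listPrice(String partName) throws Exception {
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT price FROM part_price WHERE part_name = ?")) {
                ps.setString(1, partName);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBigDecimal(1) : null;
                }
            }
        }

        // The harder 10%: price derived from inventory, shipping, and overhead.
        public BigDecimal loadedCost(String partName) throws Exception {
            String sql =
                "SELECT i.unit_cost + s.ship_cost + o.overhead_per_unit " +
                "  FROM inventory i " +
                "  JOIN shipping s ON s.warehouse_id = i.warehouse_id " +
                "  JOIN overhead o ON o.region = s.region " +
                " WHERE i.part_name = ?";
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setString(1, partName);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBigDecimal(1) : null;
                }
            }
        }
    }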

We also need to look at recovery and restoration time. When a PostgreSQL server crashes, how long does it take to recover the database and get it back online? Can I fail over to a secondary parallel server, given that downtime is lost revenue or lost sales? If you go to Home Depot to order plumbing parts and their site goes down, how long does it take to go to the Ace or Lowe's web site, order the same part, and have it delivered by the same delivery truck to your home or office? Keeping inventory, order entry, and web services up becomes more than just answering a query. It becomes a mission critical service that cannot go down for more than a few seconds. Services like Data Guard, GoldenGate, and Real Application Clusters are required to keep services up and active. MySQL, MongoDB, Amazon Aurora, and other entry-level database technologies can handle simple requests but can take minutes or hours to recover a database. Failing over through storage to another site is typically not an answer in this case. It takes minutes or hours to recover and restart a moderate database of 20 TB or larger. First the data replication needs to finish, then the database needs to be booted at the secondary site while maintaining consistency in the data as it comes back up. The application server then needs to connect to the new service and resubmit requests that came in during and after the failure. While all of this is happening, customers are opening a new browser tab and going to your competition to find the same part on another site.
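For illustration only, here is a rough sketch of what hand-rolled application-side failover looks like. The JDBC URLs and the retry policy are assumptions; in practice Data Guard, GoldenGate, and RAC-aware connection pools handle this far more gracefully, and the point is how much work the application inherits when the database tier cannot.

    // Sketch of retry-and-failover logic in the application tier. The
    // connection strings are hypothetical; a real deployment would lean on
    // the database's own high-availability features instead.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class FailoverRunner {
        private static final String[] JDBC_URLS = {
            "jdbc:oracle:thin:@primary-db:1521/orders",   // hypothetical primary
            "jdbc:oracle:thin:@standby-db:1521/orders"    // hypothetical standby
        };

        public interface UnitOfWork { void run(Connection c) throws SQLException; }

        // Try the primary first, then the standby, resubmitting the same work.
        public static void runWithFailover(String user, String pass, UnitOfWork work)
                throws SQLException {
            SQLException last = null;
            for (String url : JDBC_URLS) {
                try (Connection c = DriverManager.getConnection(url, user, pass)) {
                    c.setAutoCommit(false);
                    work.run(c);
                    c.commit();
                    return;                 // committed on this node, done
                } catch (SQLException e) {
                    last = e;               // fall through to the next candidate
                }
            }
            throw last;                     // every candidate failed
        }
    }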

In summary, it takes more than just getting a bigger and faster application server or database. Moving the services to the cloud isn't necessarily the answer either. Most of the time you need to move the application tier and the database tier together. Look at your application and ask where you spend most of your time. Is it tuning SQL statements? Is it writing new queries to answer business questions? Is it optimizing your disk layout to get tables to the database faster? Take a step back and ask why the database is pounding the disk so hard. Can I cache this data in the database by adding a little more memory to the disk controller or database server? Can I cache the data at the application server by adding more memory there, and keep from asking the database for the same information over and over again? In the next few days we are going to look at database options and database monitoring. We will look at some of these tools and refer back to the bigger picture. Yes, we can tune the storage to deliver all of the bits at the highest rate possible. Our question will not be how to do this but whether we should be doing this. Would something like Exadata or an in-memory option allow us to transfer less data across the storage network and get answers faster? Would adding memory somewhere allow us to buffer more data and reduce database requests, which in turn reduces the amount of data needed from disk?