The High Performance OLTP use case is built on the production database layer and its many replicas, which together support a broad range of high-performance front-end business applications. The database layer can be broken down into two categories. First, there are the databases that support the production applications themselves. Second, there may be tens to hundreds of copies of those production databases supporting a wide range of backup, development, testing, staging, reporting, analytics, and other use cases. The need to serve production workloads while also making replica creation easy is driving customers to consolidate database and data warehouse workloads on the same storage array.

Business applications such as SAP, and the processes they automate, rely on the production databases for transactional record storage. Consistent, predictable, and reliable sub-millisecond performance that exceeds business demands is a critical expectation for every application landscape in today's internet-scale world. The ability to deliver real-time business operations is both a critical and a disruptive advantage. Enabling new, innovative, and rapid development, test, and deployment methodologies within application landscapes is incredibly difficult and painful to achieve at a reasonable TCO.

For this reason, ultra-low latency and high I/O throughput are absolutely critical. Flash storage, and more importantly a platform designed to exploit flash technology's many advantages over spinning disk, is therefore essential. Speed and throughput aside, these OLTP systems are often mission-critical components of the business, so Quality of Service (QoS) and Reliability, Availability, and Serviceability (RAS) are also critical. Integration of the application and database layers into the overall IT stack is gaining momentum as well, and efforts to tightly couple the database software layers with the storage hardware are helping to increase overall efficiency and business agility. For all of these reasons, the tightly coupled scale-out architecture is the correct architecture choice for this workload.

In the case of Oracle, an important business consideration is that Oracle can be, in many cases, far and away the most expensive IT investment. For some customers, Oracle alone can represent 30-50% of their entire IT technology budget, which makes it their most valuable workload. In fact, in a typical deployment, Oracle licensing and maintenance alone can represent 50-70% of the cost of the solution, while storage typically represents only 6-8% of the total cost. Many companies will spend an extra 1-2% premium on storage as an insurance policy to ensure availability of the entire system.

The test/dev use case, however, drives a different set of workload priorities. Though performance is still important, two key factors heavily overshadow it. First, because large numbers of copies must be rapidly stood up and decommissioned, deep application integration for those operations is critical. Second, because test/dev can represent 10-100x more capacity than the production databases, the $/GB economics of effective (logical) capacity become critical, as does the ability to limit very expensive compute licensing costs by letting each database server get more from higher-performing storage. Space-efficient writable snapshots and clones, deduplication, and compression are all key factors in platform choice for test/dev.
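The effective-capacity economics above can be sketched with a simple model. All of the inputs below (copy count, change rate, reduction ratio, raw $/GB) are illustrative assumptions, not vendor figures: writable clones share unchanged blocks with their source, so each copy consumes physical space only for the blocks it changes, and data reduction shrinks the footprint further.

```python
# Illustrative effective-$/GB model for space-efficient test/dev copies.
# All inputs are hypothetical assumptions chosen for the example.
raw_price_per_gb = 2.00   # assumed $/GB of physical flash capacity
prod_db_gb = 5_000        # assumed production database size, in GB
num_copies = 50           # tens to hundreds of copies (per the text)

# Writable snapshots/clones share unchanged blocks with the source;
# assume each copy diverges by only 5% of the database size.
change_rate = 0.05
physical_gb = prod_db_gb + num_copies * prod_db_gb * change_rate

# Deduplication and compression shrink the physical footprint further;
# assume a 3:1 data reduction ratio.
reduction_ratio = 3.0
physical_gb /= reduction_ratio

# Logical (effective) capacity is what the copies add up to on paper.
logical_gb = prod_db_gb * (1 + num_copies)
effective_price_per_gb = raw_price_per_gb * physical_gb / logical_gb

print(f"Logical capacity:  {logical_gb:,.0f} GB")
print(f"Physical capacity: {physical_gb:,.0f} GB")
print(f"Effective $/GB:    ${effective_price_per_gb:.3f}")
```

Under these assumptions, 255 TB of logical capacity fits in under 6 TB of physical flash, cutting the effective $/GB by roughly 40x versus the raw media price, which is why these features dominate platform choice for test/dev.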

The spider charts below show the distribution and weighting of the primary workload requirements for this use case.