Modern in-memory databases such as SAP HANA, Pivotal GemFire, and MongoDB generate different workloads against the back-end data and log files, and have different performance and availability requirements, than traditional relational databases. This is because all processing occurs in the memory-based data services layer on the server; storage resources are used only for data persistence and cluster-wide high availability (HA). Like traditional databases, however, there are two distinct workloads: one for the datafiles and one for the logs.
Datafiles are essentially content repositories, so cost considerations focus on driving down $/GB stored. Steady-state performance of these storage volumes is not paramount, because all real-time processing is done in memory; where performance does matter is when a database must be reloaded into memory after a failure or an outage.
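The reload scenario above is easy to reason about with back-of-the-envelope arithmetic: recovery time is roughly the database size divided by the sustained read throughput of the data volume. The sketch below illustrates this; the sizes and throughput figures are hypothetical assumptions for illustration, not measured or vendor-quoted values.

```python
# Hypothetical sizing sketch: estimating how long a restart takes to
# reload an in-memory database from its data volume. All figures are
# illustrative assumptions, not measurements.

def reload_time_seconds(db_size_gb: float, read_throughput_mb_s: float) -> float:
    """Time to stream db_size_gb back into memory at a sustained read rate."""
    return (db_size_gb * 1024) / read_throughput_mb_s

# Example: a 2 TB database on a volume sustaining 1,600 MB/s sequential reads
# reloads in roughly 22 minutes; halving the throughput doubles the outage window.
minutes = reload_time_seconds(2048, 1600) / 60
print(f"{minutes:.1f} minutes")
```

This is why reload throughput, rather than steady-state IOPS, dominates the performance requirement for the data volume.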
Log files, on the other hand, need low latency and high IOPS at all times, because log commits occur in near real time. For this reason, $/IOPS becomes the critical characteristic for these devices. As with the data volume, SAP certification and full-stack integration are also critical. For these reasons, the scale-up/down architecture is the correct choice for this workload.
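The split between $/GB for data volumes and $/IOPS for log volumes can be made concrete with a simple cost comparison. The sketch below uses hypothetical prices and performance figures (not real product numbers) to show how the two metrics can rank the same pair of devices in opposite orders.

```python
# Illustrative cost-metric comparison for data vs. log volume selection.
# All prices, capacities, and IOPS figures are hypothetical assumptions.

def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    return price_usd / capacity_gb

def cost_per_iops(price_usd: float, iops: float) -> float:
    return price_usd / iops

# Hypothetical capacity-optimized volume vs. latency-optimized flash volume:
capacity_tier = {"price": 4000.0, "gb": 20000, "iops": 2000}
flash_tier = {"price": 6000.0, "gb": 4000, "iops": 200000}

# By $/GB (the data-volume metric), the capacity tier wins:
print(cost_per_gb(capacity_tier["price"], capacity_tier["gb"]))  # 0.20 $/GB
print(cost_per_gb(flash_tier["price"], flash_tier["gb"]))        # 1.50 $/GB

# By $/IOPS (the log-volume metric), flash wins decisively:
print(cost_per_iops(capacity_tier["price"], capacity_tier["iops"]))  # 2.00 $/IOPS
print(cost_per_iops(flash_tier["price"], flash_tier["iops"]))        # 0.03 $/IOPS
```

The opposite rankings are the reason the two workloads are typically placed on different storage tiers rather than a single compromise device.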
In some cases, however, the most important characteristic is the availability of fully certified solutions that have been validated end to end and appear on a published support list. SAP solutions are a good example of this.
The spider charts below show the distribution and weighting of the primary workload requirements for this use case.