The term Internet of Things has been around a relatively short time, but the idea behind it goes back at least as far as 1990, when the Interop technology expo showcased the first network-connected toaster, which could be turned on and off over the Internet.
It's probably fair to say no one imagined then that the Internet of Things (IoT) would emerge years later as a vast collection of Internet-connected devices and applications that gather and exchange massive amounts of data, data that can be tracked, analyzed, and acted upon to the benefit of business and society in general.
IoT-generated data offers tremendous beneficial potential in areas including real-time medical monitoring for better health, real-time financial transaction analysis for more secure banking, smart home devices like thermostats or fans, and mobile devices and wearables. But the sheer scope, in terms of both the number of connected devices and the amount of data, also presents an unprecedented challenge in how all that data will be ingested, stored, analyzed, and managed.
The impact on storage in particular is obvious. There's simply more data, a lot more data, to collect, analyze, and archive. Traditional legacy storage systems, designed for monolithic data centers and moderate amounts of data, cannot manage the volume and geo-dispersion of IoT data. More importantly, data is only as valuable as the capability to analyze it, which requires analytics applications that are efficient, scalable, and able to access all of the data.
What you need is an economically sensible, scalable storage alternative.
Here's the problem: traditional storage simply can't scale to support the vast amounts of IoT-generated data. An object-based, geo-distributed data lake is the solution; it has the flexibility, scalability, compliance, and sophisticated architecture to support data at IoT scale.
The benefits of scale-out, cloud-based storage include:
- Treatment of unstructured data as objects. Objects have no fixed size limit, and there is no limit to the number of objects the system can hold. Objects can be geo-distributed and geo-replicated.
- Reduced time-to-insight for analytics applications. Traditional storage often forces data to be copied from disparate operational systems onto a cluster dedicated to analytics. A cloud-based data lake makes in-place analytics possible, eliminating wasteful data duplication.
- Support for modern applications. Traditional storage was never architected for new Web, mobile, and cloud applications. A cloud-scale storage architecture is specifically designed to support modern applications with availability, protection, simplicity and scale.
- Geo-replication and global data protection. Replicating data across geographies provides full protection in the event of a total site failure.
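The object model behind these benefits can be illustrated with a minimal sketch. This in-memory class is purely hypothetical (real object stores expose the same operations over HTTP APIs such as S3): each object is a blob addressed by a key in a flat namespace, with its own metadata, rather than a file in a directory tree.

```python
# Minimal, illustrative sketch of an object store's flat namespace.
# Each object is a blob addressed by a key, with attached metadata;
# there is no directory hierarchy. (Real systems expose put/get/head
# over HTTP; this class only mimics the shape of those operations.)

class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data bytes, metadata dict)

    def put(self, key, data, metadata=None):
        """Store a blob under a key, with optional metadata."""
        self._objects[key] = (bytes(data), dict(metadata or {}))

    def get(self, key):
        """Return the stored blob for a key."""
        data, _ = self._objects[key]
        return data

    def head(self, key):
        """Return only the metadata, without transferring the blob."""
        _, meta = self._objects[key]
        return meta

store = ObjectStore()
# The key is just a name; the "sensors/..." prefix is a convention,
# not a real directory (device name and layout are hypothetical).
store.put("sensors/thermostat-42/day1.json",
          b'{"temp": 21.5}',
          metadata={"device": "thermostat-42", "region": "us-east"})
print(store.get("sensors/thermostat-42/day1.json"))
```

Because keys are flat and objects carry their own metadata, the namespace can grow without the hierarchy bottlenecks of a file system, which is what makes geo-distribution of objects straightforward.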
Mandate 1: Don't let hardware hold you back.
Storing, managing, and analyzing billions or trillions of data objects just isn't feasible using traditional storage, given the cost of constantly making new hardware investments to keep up with growing data stores. By decoupling software from hardware, Software Defined Storage makes it possible to simply acquire low-cost commodity hardware instead, reducing the cost of acquiring storage by as much as 50%.
Mandate 2: Get storage that can handle geo-distributed data.
To do data analysis in a traditional storage environment, you have to make multiple copies of data or coalesce it from multiple sources. That's inherently slow, inefficient, and unsuited to the amount and variety of IoT-generated data. But software defined cloud storage introduces the concept of a geo-distributed data lake, with the protocols in place to accept a variety of data on the front end and to perform analytics in place wherever edge-device data is collected, for rapid, even real-time, data analysis. It also enables analytics at the core on archival data for visibility into ever-increasing data sets.
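The analytics-in-place idea can be sketched as follows. This is a hypothetical illustration, not a vendor API: each site aggregates the readings it collected locally, and only the small per-site summaries travel to the core, rather than the raw data being copied onto a central analytics cluster.

```python
# Hypothetical sketch of analytics-in-place on a geo-distributed data lake.
# Each edge site summarizes its own locally collected readings; the core
# combines only the compact summaries, so raw data never crosses the WAN.

def site_summary(readings):
    """Run the aggregation where the data lives (at one site)."""
    return {"count": len(readings), "total": sum(readings)}

def global_average(sites):
    """Combine the small per-site summaries at the core."""
    summaries = [site_summary(r) for r in sites.values()]
    count = sum(s["count"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return total / count

# Site names and readings are invented for illustration.
sites = {
    "edge-eu": [20.1, 21.3, 19.8],
    "edge-us": [22.4, 23.0],
}
print(round(global_average(sites), 2))
```

The design point is that the bytes moved are proportional to the number of sites, not the number of readings, which is what makes near-real-time analysis of edge data practical.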
Mandate 3: Failure is not an option.
Data protection and integrity are paramount. Geo-replication and global data protection optimize storage efficiency while maintaining fast, reliable global access to data, providing full protection in the event of a total site failure. Applications seamlessly maintain functionality, and the system continues to deliver full read and write access from any global location.
Mandate 4: Standards can change, and you won't miss a beat.
Traditional storage wasn't designed to connect the many diverse data components that make up the IoT: not just the data itself, but also the systems required to collect, analyze, and archive it, and the applications needed to connect with it. Software defined object storage can address this challenge with open APIs and other infrastructure elements that enable all the components of the data ecosystem to work together smoothly. For example, integrated support for the Hadoop Distributed File System (HDFS) allows Hadoop analytics to run in place on the geo-distributed data lake.
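The multi-protocol idea behind that HDFS integration can be sketched in miniature. This class is purely hypothetical: the same stored object is reachable both by its object key and through an HDFS-style path, so file-oriented analytics tools can read the object store without the data being copied.

```python
# Illustrative sketch of multi-protocol access: one stored object, two
# views. Object clients address it by key; Hadoop-style clients address
# it by path. Both resolve to the same bytes (names are hypothetical).

class MultiProtocolStore:
    def __init__(self):
        self._objects = {}  # flat key -> blob

    def put_object(self, key, data):
        self._objects[key] = bytes(data)

    def get_object(self, key):
        return self._objects[key]

    def hdfs_open(self, path):
        """HDFS-style paths map onto the same flat object keys."""
        return self._objects[path.lstrip("/")]

store = MultiProtocolStore()
store.put_object("logs/day1.csv", b"device,temp\n42,21.5\n")
# The object-API read and the file-style read return identical bytes:
print(store.hdfs_open("/logs/day1.csv") == store.get_object("logs/day1.csv"))
```

This is why open APIs matter for the IoT ecosystem: ingest tools, archival systems, and analytics frameworks can all touch the same data through whichever interface they already speak.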
The world of IoT introduces a whole new set of requirements for ingesting, storing, analyzing, and managing data that traditional data storage simply can't meet. Software Defined Storage transcends every traditional limitation, making it not only possible but practical to realize the full potential of the data the IoT creates.
Learn more about EMC's Software Defined Storage Solutions.