The Third Phase of Big Data


[Commentary] The demand for higher-capacity, higher-performing systems is driving us toward the third phase of the Big Data revolution. In the first phase, we saw the advent of software technologies like Hadoop and NoSQL for handling extremely large amounts of data. (This phase, of course, is by no means over.) The second phase began with the proliferation of reliable, economical sensors and other devices for harvesting real-world data. Software apps for mining video streams, images, handwritten forms and other “dark” data belong in this category too: Without them, the data, from a practical perspective, wouldn’t exist. The third phase will focus on infrastructure. Simply put, we need new hardware, software, networking and data centers designed to manage the staggering amounts of data being generated and analyzed by the first two phases of innovation. Hyperscale data centers, software-defined networking and new storage technologies represent the first steps in what will be a tremendous cycle of innovation.

Big Data is really one of the magical concepts of our era. The greater insight and understanding it gives us of the world around us increases our ability to create a better society. But it is also going to require a tremendous amount of effort behind the scenes to create solutions that can manage these Big Data stores in a compact, cost-effective, reliable and environmentally conscious way.

[Sumit Sadana is executive vice president and chief strategy officer at SanDisk.]

