Big Data and the Fourth Industrial Revolution


The potential of Big Data is slowly dawning on us, as analytical methods become increasingly powerful at making the huge amounts of data we collect in today's society meaningful for understanding reality and making predictions.

We have already covered this development in several blog posts, including Big Data is Better Data, The Challenge with Big Data and Christensen, Asimov and the Impact of Disruptive Innovation.

By now it seems that Big Data, combined with the Internet of Things and our ability to build complex connected systems, like Ericsson's networked society concept, is pushing us into a new disruptive age: a fourth industrial revolution.

It is not long ago that we realised we had been through the third industrial revolution. In fact, I wrote a blog post about this just about two and a half years ago.

The first industrial revolution was the transition to new manufacturing processes in the period from about 1760 to 1840. The second industrial revolution, starting around 1870, transformed western societies into an urban-centred, industrial-based culture, which was an entirely new social reality based on science and technology. The third industrial revolution, starting in the 1960s, brought us computers, electronics and the internet.

It can be argued that Big Data is a continuation of the third industrial revolution; however, I am starting to think that what is happening now is something different. This new phase is truly breaking down the barriers between man and machine.

Data is increasingly accumulating about who we are, who we know, where we are, where we have been and where we plan to go. Mining and analysing this data lets us understand and predict how people behave at the individual, group and global level, as I wrote about in Christensen, Asimov and the Impact of Disruptive Innovation.

The difference between the age of Business Intelligence and the new age of Big Data is profound.

When I was in business school in the 1980s, one of the first-year courses was statistical analysis, where we studied how small data sets and samples can be used to predict developments. Our course book was Chou's "Statistical Analysis for Business and Economics", and it was a very interesting read. I learned that through sampling, simulations and not least Bayesian inference, we had powerful tools for dynamic analysis of a sequence of data, with applications in a wide range of activities, including science, engineering, philosophy, medicine, sport and law. Through sampling and statistical analysis, we could develop Business Intelligence.
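
To make that small-data approach concrete, here is a minimal sketch (my own illustration, not taken from the course book) of Bayesian inference: updating a belief about a conversion rate from a handful of observations, using a Beta prior and a Binomial likelihood. The numbers are made up.

```python
# Illustrative sketch: Bayesian updating of a conversion rate from a small
# sample, using a Beta prior and a Binomial likelihood (conjugate pair).
from scipy import stats

prior_alpha, prior_beta = 1, 1      # uniform Beta(1, 1) prior belief
successes, trials = 18, 60          # hypothetical small sample

# With a Beta prior and Binomial data, the posterior is again a Beta
# distribution with updated parameters.
post = stats.beta(prior_alpha + successes, prior_beta + trials - successes)

print(f"Posterior mean: {post.mean():.3f}")
print(f"95% credible interval: {post.ppf(0.025):.3f} - {post.ppf(0.975):.3f}")
```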

Common to all the analysis methods in the statistics course at the Stockholm School of Economics was that the data population is assumed to be larger than the observed data set. In other words, the observed data is assumed to be sampled from a larger population, and conclusions drawn from the observed data could, through probability theory, be treated as valid for the full population, provided we had at least 30 data points, which justified a normal approximation of the sampling distribution.
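
As an illustration of that classical approach, the hedged sketch below draws a sample of 30 observations from a hypothetical population and builds a 95% confidence interval for the population mean via the normal approximation; all values are synthetic.

```python
# Hedged sketch: estimating a population mean from a sample of n >= 30
# observations, relying on the central limit theorem for the normal
# approximation of the sample mean. Data values are made up.
import math
import random

random.seed(42)
population = [random.gauss(100, 15) for _ in range(100_000)]  # hypothetical population
sample = random.sample(population, 30)                        # observed data set

n = len(sample)
mean = sum(sample) / n
std = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = std / math.sqrt(n)

# 95% confidence interval for the population mean (z = 1.96)
print(f"sample mean: {mean:.1f}, 95% CI: [{mean - 1.96*se:.1f}, {mean + 1.96*se:.1f}]")
```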

In comparison, Big Data is about data sets so large and complex that traditional data processing applications are inadequate. With big data, we can perform far more accurate predictive analytics. That accuracy leads to more confident decision making and a better understanding of the world, and better decisions can mean greater operational efficiency, cost reduction and reduced risk.

This will have a profound influence on society, helping us tackle challenges ranging from climate change, overpopulation and unemployment on a macro level to advanced developments in life sciences, chemistry and advanced materials on a micro level.

As the impact looks to be so profound, I think the age of Big Data can well be seen as a new, fourth industrial revolution, building upon the foundation of the third, just as the second industrial revolution built on the advancements of the first.

Some years ago, Gartner Group wrote "Big Data represents the Information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value", and they defined Big Data in the following way:

  • Volume: big data doesn’t sample. It just observes and tracks what happens
  • Velocity: big data is often available in real-time
  • Variety: big data draws from text, images, audio, video; plus it completes missing pieces through data fusion
  • Machine Learning: big data often doesn’t ask why and simply detects patterns
  • Digital footprint: big data is often a cost-free by-product of digital interaction

Whereas Business Intelligence uses descriptive statistics on data with high information density to measure things, detect trends and so on, Big Data uses inductive statistics and concepts from nonlinear system identification to infer laws (regressions, nonlinear relationships and causal effects) from large sets of data with low information density, revealing relationships and dependencies and predicting outcomes, patterns and behaviours.
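
To illustrate the Big Data side of that contrast, here is a minimal sketch (assuming numpy and a recent scikit-learn are available; the data is synthetic) that infers a nonlinear relationship from a large, noisy, low-information-density data set rather than from a small sample.

```python
# Minimal sketch: detecting a nonlinear pattern in a large, noisy synthetic
# data set with a flexible model, without pre-specifying the formula.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1_000_000                                     # "large" synthetic data set
x = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(x[:, 0]) + 0.5 * x[:, 0] ** 2 + rng.normal(0, 1.0, size=n)  # low signal-to-noise

model = HistGradientBoostingRegressor()
model.fit(x, y)

print(model.predict(np.array([[0.0], [2.0]])))    # predictions at two example points
```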

At its core, Big Data applications have unprecedented complexity, speed and global reach, and as digital communications become commonplace, data will rule in a world where nearly everyone and everything is connected in real time.

Big Data requires exceptional technologies to efficiently process large quantities of data in real time or within tolerable elapsed times. Such technologies were created by the third industrial revolution and now enable the new networked society, transcending the boundary between man and machine. The impact will be unprecedented in the history of human society.
