The new year doesn't have a moniker yet, so I'm dubbing 2013 the Year of Big Data. Big Data is saturating business news releases, is the topic of conventions and trade shows, and will turn the security and IT business on its ear, if the hype is accurate.
I don't mean to imply the hype isn't well deserved; Big Data is no doubt influencing much of what I do day to day. It may still be too early to ask this question, but I'm going to anyway: If the data is so good, why are so many business decisions so bad?
Here's one example of Big Data promotion, chock full of "majors" and "dramatics":
RSA, The Security Division of EMC Corporation (NYSE: EMC), today released a Security Brief asserting that Big Data will be a driver for major change across the security industry and will fuel intelligence-driven security models. Big Data is expected to dramatically alter almost every discipline within information security.
The new Brief predicts Big Data analytics will likely have market-changing impact on most product categories in the information security sector by 2015, including SIEM, network monitoring, user authentication and authorization, identity management, fraud detection, governance, risk and compliance systems.
Big Data is also thought to be the salvation of the supply chain. Real-time demand information and social media networking are supposed to give vendors the best possible information for forecasting. Yet the tech industry -- which I'd expect to be an early adopter of Big Data -- missed the mark on fourth quarter PC demand; hasn't come up with any groundbreaking consumer technology; and hasn't found its salvation in the enterprise market yet. Both Hewlett-Packard and Dell are staking a claim in the data management/data security markets.
Toward the end of last year, IHS reported that semiconductor supplies in the channel were growing, in part because of deficient forecasting:
The result on the whole is that chip suppliers aren't running their manufacturing operations optimally, and also are manufacturing products solely based on historical demand. In some instances, projected demand also does not materialize, adding to the already slow-moving inventory pile.
Historical demand is a better metric than forecasting, I suppose, but only when demand cycles follow historical norms. That wasn't the case through most of 2012.
So I've been wondering whether Apple's decision to cut component orders is a signal that all of this data collection is working, or if it isn't. Apple reportedly cut orders because of softness in iPhone 5 demand. One could argue Apple's ahead of the curve by cutting orders rather than shipments. Then again, it could have anticipated a slowdown after the holiday season, and seen that competitors' products were selling pretty well.
I think the supply chain will learn more about how well data is being used as fourth quarter earnings get released. And maybe by the end of this year, we'll see evidence that Big Data is making a difference. What do you think?
@EREBUS, @DOCDIVAR: I agree with your statements that the validation is still lacking. To me, we are only at the beginning of the Big Data era, and it will take a while before it fulfills its promises. Looking back on my career, the technologies mentioned in the article referenced by @DOCDIVAR were already a topic of discussion, and partly in use, in the telecom business 15 years ago. The level of Big Data adoption really depends on the industry you are looking at.
I am currently working on IT solutions in semiconductor (and similar) R&D. There, even the collection step is still in its infancy in many organizations. Data is gathered in all kinds of silos (file / MS SharePoint servers, MS Excel, single-purpose databases, …), and cleaning, arranging and analyzing it is left pretty much to the research engineers. Most of the time the relationships between data points are not properly archived, or pieces of data are missing, so extracting information from R&D experiments is a challenge. Although solutions for the first steps in your mentioned action sequence are available, people are often hesitant to adopt them and stick with their 25-plus-year-old methodologies.
To me, Big Data depends as much on the willingness to take the first steps as on the technology. Without early adopters there is no progress in technology. So besides solving the technological challenge, we need to educate people to overcome the inertia of adopting new ways of working and thinking, while at the same time not blindly trusting the new technologies alone!
@EREBUS, I agree one needs to understand and analyze Big Data, generate usable metrics from it, and use it for decision making and prognostics. Much of the noise around Big Data has been about collection, storage, access and (some) analytics, but the rest is pitifully lacking!
Readers may be interested in this link on the topic:
Big Data for Smaller Providers – Part 1
For big data to work, you need to understand the data collected, learn to process the data for information, validate the data transformation and then use the data to accurately predict a future event.
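Those four steps can be sketched in miniature. The snippet below is a toy illustration only, with invented monthly shipment numbers and a simple moving-average forecast standing in for real analytics; nothing here reflects actual industry data:

```python
# 1. Understand the data collected: twelve months of (invented) unit shipments.
shipments = [100, 104, 98, 110, 107, 115, 112, 120, 118, 125, 123, 130]

# 2. Process the data into information: a 3-month moving average smooths noise.
def moving_average(series, window=3):
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

trend = moving_average(shipments)

# 3. Validate the transformation: hold out the last month and check how
#    closely the average of the prior three months tracked it.
forecast = trend[-2]           # average of months 9-11
actual = shipments[-1]         # month 12
error_pct = abs(forecast - actual) / actual * 100

# 4. Predict a future event: project month 13 from the latest average.
next_month = trend[-1]

print(f"forecast for month 12: {forecast:.1f}, actual: {actual}")
print(f"error: {error_pct:.1f}%")
print(f"projection for month 13: {next_month:.1f}")
```

The point of step 3 is the one the comments above keep returning to: without measuring the error on data you already have, the prediction in step 4 is just a guess with extra arithmetic.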
I have watched the progress of the technology for the last three decades and I am not convinced that any of the factors I highlighted are ready for real use.
Yes, we are collecting a lot of data, but there is such a rush to show progress that the validation of results is being ignored.
Before Big Data becomes useful, it has to develop a level of trust in its results.
I do not see that happening any time soon.