Why are traditional architectures unsuitable for advanced analytics on big data?
The continuing explosion in data sources and volumes strains, and increasingly exceeds, the scalability of traditional data management and analytic architectures. The legacy architecture, now roughly 20 years old, is inherently unfit for today's big data volumes because it relies on moving data from where it is stored to where it is processed. As data volumes grow, pulling terabytes or more of data through the data pipeline to analytic applications introduces so much latency, cost, and overhead that it becomes impossible to include full data sets and fresh data in timely analysis.

At the same time, a new generation of analytics has emerged with requirements that traditional architectures struggle to meet: analysis of large data volumes, ultra-fast results, and deep data exploration through ad-hoc and interactive analysis. The limitations of traditional data pipelines and of SQL make it difficult or impossible to perform this new class of analysis at scale.
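To make the data-movement argument concrete, here is a minimal Python sketch of the two approaches. It is illustrative only: the sales table, its region and amount columns, and the in-memory SQLite store are hypothetical stand-ins for a large production data store. The first function mirrors the traditional pipeline, shipping every raw row to the application before aggregating; the second pushes the computation to where the data lives, so only the small aggregated result moves through the pipeline.

    import sqlite3

    # Toy data set standing in for a multi-terabyte sales table;
    # the table and column names are illustrative, not from the source.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("east", 10.0), ("east", 5.0), ("west", 7.5)])

    # Traditional pipeline: pull every raw row across the wire and
    # aggregate in the application. Transfer and memory costs grow
    # linearly with raw data volume.
    def revenue_by_region_pull(conn):
        totals = {}
        for region, amount in conn.execute("SELECT region, amount FROM sales"):
            totals[region] = totals.get(region, 0.0) + amount
        return totals

    # Processing pushed to the data: only the compact aggregate crosses
    # the pipeline, however large the underlying table becomes.
    def revenue_by_region_push(conn):
        return dict(conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"))

    print(revenue_by_region_pull(conn))  # {'east': 15.0, 'west': 7.5}
    print(revenue_by_region_push(conn))  # {'east': 15.0, 'west': 7.5}

Both functions return the same answer on this toy table, but only the second keeps its data-transfer cost independent of the table's size, which is the property the legacy move-the-data architecture lacks.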