IBM speeds up data analysis with new algorithm
26 February, 2010 07:09
IBM researchers have developed a new algorithm that can analyze terabytes of raw data in minutes, speeding up predictions of weather and electricity usage, the company said Thursday.
The mathematical algorithm, developed by IBM's laboratories in Zurich, can sort, correlate and analyze millions of random data sets, a task that could otherwise take days for supercomputers to process, said Costas Bekas, a researcher at IBM.
The algorithm is just under a thousand lines of code and will be instrumental in establishing usage patterns or trends from data gathered by sources such as sensors or smart meters, he said. It could be used to analyze the growing mass of data measuring electricity usage trends as well as air and water pollution levels, and it could also break down data from global financial markets to assess individual and collective exposure to risk, Bekas said.
"We are interested in measuring the quality of data," Bekas said. Efficient analysis of such large data sets requires new mathematical techniques that reduce computational complexity, he added.
The algorithm combines models of data calibration and statistical analysis that can assess measurement models and hidden relationships between data sets. IBM has been working on the research for two years, Bekas said.
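The article does not describe the mathematics, but one family of techniques known for cutting the cost of large-scale matrix analysis, and an area IBM's Zurich lab has published on, is stochastic estimation, in which a handful of random probe vectors stand in for an exhaustive computation. A minimal sketch of the idea (the function and parameter names here are illustrative, not IBM's code):

```python
import numpy as np

# Illustrative only: Hutchinson's stochastic estimator approximates
# trace(A) from a small number of matrix-vector products z^T A z,
# instead of forming or inspecting the full matrix.
def hutchinson_trace(matvec, n, num_samples=100, rng=None):
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += z @ matvec(z)               # one sample of z^T A z
    return total / num_samples

# Example: estimate the trace of a large symmetric matrix using only
# 200 matrix-vector products rather than all n diagonal entries.
n = 500
B = np.random.default_rng(0).standard_normal((n, n))
A = B @ B.T
est = hutchinson_trace(lambda v: A @ v, n, num_samples=200, rng=1)
print(est, np.trace(A))
```

The appeal for huge data sets is that `matvec` never needs the matrix explicitly, so memory and compute scale with the number of probes rather than with the full problem size.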
The algorithm can also reduce costs for companies by analyzing data in a more energy-efficient way, Bekas said. The lab used a Blue Gene/P Solution system at the Forschungszentrum Jülich research center in Germany to validate 9TB of data in less than 20 minutes. Without the algorithm, analyzing the same amount of data would have taken the supercomputer a day running at peak speed, which would have added up to higher electricity bills, Bekas said.
According to Top500.org, the Blue Gene/P is the fourth-fastest supercomputer in the world as of last November, with 294,912 IBM Power processing cores that can provide peak performance of up to 1 petaflop.
The traditional approach to data analysis is to take multiple data sets and look at them individually, said Eleni Pratsini, manager of mathematical and computational sciences at the IBM research labs. However, the algorithm compares data sets against each other, which could help enterprises point toward larger trends in particular areas, such as risk reduction in financial portfolios.
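A toy illustration of that contrast, using synthetic data and generic correlation analysis rather than IBM's algorithm: inspected one at a time, each series below looks like noise around a wandering line, but comparing every series against every other immediately exposes which ones share a common trend.

```python
import numpy as np

# Synthetic example (not IBM's method): 4 assets that track a shared
# market trend, plus 2 unrelated noise series.
rng = np.random.default_rng(42)
trend = np.cumsum(rng.standard_normal(365))        # shared market trend
portfolio = np.stack([trend + rng.standard_normal(365)
                      for _ in range(4)])          # 4 correlated assets
unrelated = rng.standard_normal((2, 365))          # 2 independent series

data = np.vstack([portfolio, unrelated])
corr = np.corrcoef(data)   # 6x6 matrix: every series vs. every other

# The block of high off-diagonal values among the first four rows is
# the kind of shared (undiversified) exposure a risk manager would flag.
print(np.round(corr, 2))
```

Analyzing each series individually would miss the dependence entirely; it only shows up in the cross-comparison.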
Enterprises will want faster ways of generating business intelligence as the spread of computing to new devices floods servers with masses of data, she said.
Now that the algorithm has been proven to work scientifically, the research lab is collaborating with IBM's Global Services unit to use it for specific services, Pratsini said. Ultimately, the algorithm could make its way to IBM applications such as the SPSS statistical analysis software, but the company didn't provide a specific time frame for that.