Intel: 2-year-old Nvidia GPU outperforms 3.2GHz Core i7

Intel researchers set out to disprove claims that GPUs are more powerful than CPUs by a factor of 100

Intel researchers have published the results of a performance comparison between their latest quad-core Core i7 processor and a two-year-old Nvidia graphics card, and found that the Intel processor can't match the graphics chip's parallel processing performance.

On average, the Nvidia GeForce GTX 280 -- released in June 2008 -- was 2.5 times faster than the Intel 3.2GHz Core i7 960 processor, and more than 14 times faster under certain circumstances, the Intel researchers reported in the paper, called "Debunking the 100x GPU vs. CPU myth: An evaluation of throughput computing on CPU and GPU."

In a bid to discredit claims that GPUs outperform Intel's processors by a factor of 100, researchers compared the performance of the quad-core Core i7 processor with the Nvidia GPU running a set of 14 throughput computing kernels. The comparison was designed to test the parallel processing capabilities of the chips.

As its name suggests, parallel processing involves tackling multiple tasks simultaneously as opposed to serial processing, which requires handling tasks in sequential order.
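The serial-versus-parallel distinction can be sketched with a toy workload; this is an illustrative example only, with thread workers standing in for the many cores of a GPU, and none of the names below come from the Intel study.

```python
# Minimal sketch of serial vs. parallel processing, assuming a toy
# workload of squaring numbers. Thread workers here stand in for the
# parallel cores of a GPU; the names are illustrative, not from the study.
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

def serial(numbers):
    # Serial processing: handle each task in sequential order.
    return [square(n) for n in numbers]

def parallel(numbers, workers=4):
    # Parallel processing: tackle multiple tasks simultaneously by
    # spreading them across several workers at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, numbers))

print(serial(range(4)))    # [0, 1, 4, 9]
print(parallel(range(4)))  # same results, different scheduling
```

Both paths produce identical results; the difference is only in how the work is scheduled, which is why parallel hardware shines on workloads made of many independent tasks.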

Graphics chips, whose hundreds of cores draw polygons and map textures to create realistic images on a computer screen, are well suited to parallel processing tasks, while processors with fewer, more powerful cores, like the Core i7, are better suited to serial processing applications. That's not to say that quad-core chips like the Core i7 can't handle parallel processing tasks; they can, just not as well as GPUs like the GTX 280, as the Intel study confirmed.

"It's a rare day in the world of technology when a company you compete with stands up at an important conference and declares that your technology is only up to 14 times faster than theirs," wrote Andy Keane, Nvidia's general manager of GPU computing, on the company's blog, which provided a link to the Intel paper.

Even so, Keane wasn't impressed by the performance margin reported by Intel, listing 10 Nvidia customers that saw application performance improve by a factor of 100 or more after optimizing their applications to run on GPUs. The performance comparison done by Intel likely did not include the software optimization required to get the best performance from the GPU, he said, noting that Intel didn't provide details of the software code used in the comparison.

"It wouldn't be the first time the industry has seen Intel using these types of claims with benchmarks," he wrote, providing a link to the U.S. Federal Trade Commission antitrust suit filed against Intel in 2009.

In that suit, the FTC alleged previous benchmark results reported by Intel "were not accurate or realistic measures of typical computer usage or performance, because they did not simulate 'real world' conditions."

Regardless of the exact performance difference between CPUs and GPUs, graphics chips are an increasingly common feature in high-performance computing systems, including extremely powerful computers like China's Nebulae system. Nebulae, which is currently the world's second most powerful computer, is powered by a combination of Xeon server chips and Nvidia GPUs.

Adding GPUs to a system can substantially increase performance while reducing cost and power consumption compared to systems built using only CPUs, said Yury Drozdov, CEO of Singapore-based server maker Novatte.

Last year, Novatte built a system for a financial customer that wanted to run pricing models. The system, which cost more than US$1 million, used 60 Intel Xeon processors and 120 Nvidia GPUs. A system with similar performance built using Xeon processors alone would cost $1.6 million and consume nearly 28 percent more power, making it more costly to operate than the system built with GPUs, Drozdov said.
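A back-of-envelope check of the figures Drozdov quoted, assuming the rounded numbers from the article; this is illustrative arithmetic only, not data from Novatte.

```python
# Illustrative arithmetic using the rounded figures quoted in the article.
gpu_system_cost = 1_000_000  # 60 Xeons + 120 Nvidia GPUs (USD, "more than")
cpu_only_cost = 1_600_000    # Xeon-only system of similar performance (USD)

savings = cpu_only_cost - gpu_system_cost
savings_pct = 100 * savings / cpu_only_cost

print(f"Upfront saving: ${savings:,} ({savings_pct:.1f}% cheaper)")
# -> Upfront saving: $600,000 (37.5% cheaper)
```

On these numbers the GPU-accelerated system is roughly 37 percent cheaper up front, and since the CPU-only alternative would also draw nearly 28 percent more power, the gap widens further once operating costs are counted.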

For its part, Intel recognizes the importance of having a powerful parallel processing chip in its product lineup to complement its CPU line. In May, Intel announced the development of a 50-core chip called Knights Corner, which the company hopes will fend off competition from graphics chip makers in the high-performance computing space. Intel has not said when Knights Corner will be available.

By comparison, Nvidia's GTX 280 has 240 processor cores, while the company's recently announced Tesla M20 series GPUs have 448 cores.


Sumner Lemon

IDG News Service
