Google's latest big-data tool, Mesa, aims for speed

Mesa can hold petabytes of data across multiple data centers while handling millions of row updates per second and billions of queries per day

Google has found a way to stretch a data warehouse across multiple data centers, using an architecture its engineers developed that could pave the way for much larger, more reliable and more responsive cloud-based analysis systems.

Google researchers will discuss the new technology, called Mesa, at the Conference on Very Large Data Bases, happening next month in Hangzhou, China.

A Mesa implementation can hold petabytes of data, update millions of rows per second and answer billions of queries that fetch trillions of rows per day, Google says. Extending Mesa across multiple data centers allows the data warehouse to keep working even if one of the data centers fails.

Google built Mesa to store and analyze critical measurement data for its Internet advertising business, but the technology could be used for other, similar data warehouse jobs, the researchers said.

"Mesa ingests data generated by upstream services, aggregates and persists the data internally, and serves the data via user queries," the researchers wrote in a paper describing Mesa.

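Reduced to a sketch, that description is an aggregate-on-ingest pattern: batches of keyed rows arrive and are folded into a table whose values are combined with an aggregation function such as SUM. The short Python sketch below illustrates the idea only; the function and variable names are invented for this article and are not Mesa's API.

    # A minimal sketch (not Google's code) of aggregate-on-ingest: batches of
    # (key, delta) rows are folded into a table whose values are combined with
    # an aggregation function such as SUM.
    def apply_batch(table, batch, aggregate=lambda old, new: old + new):
        """Fold one batch of (key, value) deltas into the table in place."""
        for key, value in batch:
            if key in table:
                table[key] = aggregate(table[key], value)
            else:
                table[key] = value
        return table

    # Example: ad-click counts keyed by (campaign, day), aggregated with SUM.
    clicks = {}
    apply_batch(clicks, [(("campaign-1", "2014-08-01"), 3),
                         (("campaign-2", "2014-08-01"), 5)])
    apply_batch(clicks, [(("campaign-1", "2014-08-01"), 2)])
    print(clicks[("campaign-1", "2014-08-01")])   # 5
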
For Google, Mesa solved a number of operational issues that traditional enterprise data warehouses and other data analysis systems could not.

For one, most commercial data warehouses do not update their data sets continuously; they typically refresh them once a day or once a week. Google needed its streams of new data to be available for analysis as soon as they were created.

Google also needed strong consistency for its queries, meaning a query should produce the same result each time, no matter which data center fields it.
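
The Mesa paper describes achieving this by versioning the data: update batches are committed with sequential version numbers, and a query is answered as of a specific committed version, so any replica returns the same answer for that version. Below is a minimal, illustrative Python sketch of version-pinned reads; the class and method names are invented for the example, not taken from Mesa.

    # Illustrative only: versioned update batches plus version-pinned reads give
    # repeatable answers, because a query at version N sees exactly batches 1..N
    # no matter which replica serves it.
    class VersionedTable:
        def __init__(self):
            self.batches = []            # batches[i] holds updates for version i+1

        def commit_batch(self, rows):
            """Append a batch of (key, delta) rows; return its version number."""
            self.batches.append(list(rows))
            return len(self.batches)

        def query(self, key, version):
            """Sum the deltas for `key` across batches 1..version."""
            total = 0
            for batch in self.batches[:version]:
                for k, delta in batch:
                    if k == key:
                        total += delta
            return total

    table = VersionedTable()
    v1 = table.commit_batch([("campaign-1", 3)])
    v2 = table.commit_batch([("campaign-1", 2)])
    print(table.query("campaign-1", v1))   # 3, and always 3 at version 1
    print(table.query("campaign-1", v2))   # 5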

Consistency is typically considered a strength of relational database systems, though relational databases can have a hard time ingesting petabytes of data. It's especially hard if the database is replicated across multiple servers in a cluster, which enterprises do to boost responsiveness and uptime. NoSQL databases, such as Cassandra, can easily ingest that much data, but Google needed a stronger level of consistency than these technologies typically offer.

The Google researchers said that no existing commercial or open-source software was able to meet all of these requirements, so they created Mesa.

Mesa relies on a number of other technologies developed by the company, including the Colossus distributed file system, the BigTable distributed data storage system and the MapReduce data analysis framework. To keep replicas consistent across data centers, Google engineers used Paxos, a distributed consensus protocol.
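
For readers unfamiliar with Paxos, the following toy, single-decree round (in-process, with no networking or failure handling) conveys the general shape of the protocol. It is a textbook simplification written for this article and makes no claim about how Google deploys Paxos.

    # A toy, single-decree Paxos round: a proposer gathers promises from a
    # quorum of acceptors, adopts any previously accepted value, and then asks
    # the quorum to accept it.
    class Acceptor:
        def __init__(self):
            self.promised = 0        # highest proposal number promised
            self.accepted_n = 0      # proposal number of the accepted value, if any
            self.accepted_v = None

        def prepare(self, n):
            """Phase 1b: promise not to accept proposals numbered below n."""
            if n > self.promised:
                self.promised = n
                return True, self.accepted_n, self.accepted_v
            return False, None, None

        def accept(self, n, value):
            """Phase 2b: accept the value unless a higher promise was made."""
            if n >= self.promised:
                self.promised = n
                self.accepted_n, self.accepted_v = n, value
                return True
            return False

    def propose(acceptors, n, value):
        """Run one proposal; return the chosen value or None if no quorum."""
        quorum = len(acceptors) // 2 + 1
        # Phase 1: gather promises and any previously accepted value.
        promises = [a.prepare(n) for a in acceptors]
        granted = [(an, av) for ok, an, av in promises if ok]
        if len(granted) < quorum:
            return None
        # A proposer must adopt the highest-numbered value already accepted.
        prior = max(granted, key=lambda p: p[0])
        chosen = prior[1] if prior[0] > 0 else value
        # Phase 2: ask the acceptors to accept the value.
        accepted = sum(a.accept(n, chosen) for a in acceptors)
        return chosen if accepted >= quorum else None

    acceptors = [Acceptor() for _ in range(5)]
    print(propose(acceptors, n=1, value="commit version 42"))  # "commit version 42"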

In addition to scalability and consistency, Mesa offers another advantage: it can be run on generic servers, eliminating the need for specialized, expensive hardware. As a result, Mesa can be run as a cloud service and easily scaled up or down to meet the demands of a job.

Mesa is the latest in a series of novel data-processing applications and architectures that Google has developed to serve its business.

Some Google innovations have gone on to provide the foundations for widely used applications. For example, Google's papers on MapReduce and the Google File System provided the blueprint for Apache Hadoop, while BigTable inspired open-source data stores such as Apache HBase.

Other Google technologies developed for internal use have subsequently been offered as cloud services from the company itself. Google's Dremel ad-hoc query system for read-only data went on to become a foundation of the company's BigQuery service.

Future commercial prospects for Mesa may be somewhat limited, however, said Curt Monash, head of database research firm Monash Research.

Not many organizations today would need sub-second response times against a body of material as large and complex as Google's, Monash said in an email. He also noted that MapReduce is not the most efficient way of handling relational queries, a shortcoming that has driven the development of SQL-on-Hadoop technologies such as Hive, Impala and Shark.

Also, typical enterprises should look for commercial or open-source options to keep their data warehouses consistent across data centers before adopting what Google has developed, Monash said. Most new data stores being developed today have some form of multi-version concurrency control (MVCC), he said.
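
MVCC, in rough terms, keeps multiple timestamped versions of each row so that readers work from a consistent snapshot while writers append new versions. The toy Python sketch below illustrates only that idea; it is not drawn from any particular product.

    # A bare-bones illustration of multi-version concurrency control: writers
    # append timestamped versions and a reader sees the newest version no later
    # than its snapshot, so a long-running read is never disturbed by writes.
    class MVCCStore:
        def __init__(self):
            self.versions = {}       # key -> list of (commit_ts, value), in commit order
            self.clock = 0

        def write(self, key, value):
            self.clock += 1
            self.versions.setdefault(key, []).append((self.clock, value))
            return self.clock

        def snapshot(self):
            return self.clock        # read timestamp for a consistent snapshot

        def read(self, key, snapshot_ts):
            """Return the latest value committed at or before snapshot_ts."""
            latest = None
            for commit_ts, value in self.versions.get(key, []):
                if commit_ts <= snapshot_ts:
                    latest = value
                else:
                    break
            return latest

    store = MVCCStore()
    store.write("balance", 100)
    ts = store.snapshot()
    store.write("balance", 250)          # a later write does not affect the snapshot
    print(store.read("balance", ts))     # 100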

Joab Jackson covers enterprise software and general technology breaking news for the IDG News Service. Follow Joab on Twitter at @Joab_Jackson.
