With machine learning, you can finally make politicians say what you want them to

Researchers have found a way to create (and manipulate) 3D digital models of well-photographed people

Anyone who's watched a political debate has probably wished they could influence the words coming out of a candidate's mouth. Now, machine learning is making that possible -- at least to some extent.

Researchers at the University of Washington have found a way to create fully interactive, 3D digital personas from photo albums and videos of famous people such as Tom Hanks, Barack Obama, Hillary Clinton and George W. Bush. Equipped with those 3D models, they could then impose another person's voice, expressions and sentiments on them, essentially rendering the models as 3D digital puppets.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person -- LeBron James, Barack Obama, Charlie Chaplin -- and interact with them,” said Steve Seitz, a UW professor of computer science and engineering.

To construct such personas, the team used machine learning algorithms to mine 200 or so Internet images of a particular person, taken over time in various scenarios and poses. They then developed techniques to capture expression-dependent textures -- the small differences that appear when a person smiles, looks puzzled or moves his or her mouth, for example.
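To make the idea concrete, here is a minimal sketch of how a photo collection might be grouped by expression and turned into per-expression textures. It is not the UW team's pipeline: the landmark detector is stubbed out with random data, the 68-landmark count, toy texture size and use of k-means clustering are all assumptions for illustration.

```python
# Hypothetical sketch: group face photos by expression using landmark
# vectors, then average the pixel textures within each group to obtain
# expression-dependent textures. Landmark extraction is stubbed out with
# random data; the actual research pipeline is far more involved.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

n_photos = 200            # roughly the number of images mined per person
n_landmarks = 68          # a common face-landmark count; an assumption here
tex_h, tex_w = 64, 64     # toy texture resolution

# Stand-ins for real detector output: (x, y) landmarks and aligned face crops.
landmarks = rng.normal(size=(n_photos, n_landmarks * 2))
textures = rng.random(size=(n_photos, tex_h, tex_w, 3))

# Cluster photos into a handful of expression groups (smile, neutral, ...).
n_expressions = 5
labels = KMeans(n_clusters=n_expressions, n_init=10, random_state=0).fit_predict(landmarks)

# Average the textures within each cluster to get one texture per expression.
expression_textures = np.stack(
    [textures[labels == k].mean(axis=0) for k in range(n_expressions)]
)
print(expression_textures.shape)  # (5, 64, 64, 3)
```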

By manipulating the lighting conditions across different photographs, they developed a new approach to densely map the differences from one person’s features and expressions onto another person’s face, making it possible to “control” the digital model with a video of another person.
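The "puppeteering" step can be sketched in the same spirit: measure how far the driving actor's face has moved from its own neutral pose, then apply that offset to the target persona's neutral geometry so the motion transfers but the identity does not. The sketch below works on 2D landmarks only and is a hypothetical simplification; the research operates on dense 3D models and textures.

```python
# Hypothetical sketch of expression transfer: the driving actor's deviation
# from their own neutral pose is added to the target's neutral geometry, so
# the target keeps their identity while mimicking the performance.
import numpy as np

rng = np.random.default_rng(1)
n_landmarks = 68  # assumed landmark count

# Neutral (resting) landmark sets for the driving actor and the target persona.
driver_neutral = rng.normal(size=(n_landmarks, 2))
target_neutral = rng.normal(size=(n_landmarks, 2))

def transfer_expression(driver_frame, driver_neutral, target_neutral, strength=1.0):
    """Apply the driver's expression offset to the target's neutral face."""
    offset = driver_frame - driver_neutral       # how far the driver moved from rest
    return target_neutral + strength * offset    # same motion, target's identity

# One video frame of the driver making an expression (synthetic here).
driver_frame = driver_neutral + rng.normal(scale=0.05, size=(n_landmarks, 2))
puppet = transfer_expression(driver_frame, driver_neutral, target_neutral)
print(puppet.shape)  # (68, 2): the target's landmarks, driven by the actor's motion
```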


“How do you map one person’s performance onto someone else’s face without losing their identity?” said Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”

The technology relies on advances in 3D face reconstruction, tracking, alignment, multi-texture modeling and puppeteering that have been developed over the last five years by a research group led by UW assistant professor of computer science and engineering Ira Kemelmacher-Shlizerman. The results will be presented next week in a paper at the International Conference on Computer Vision in Chile.

The research was funded by Samsung, Google, Intel and the University of Washington.


Katherine Noyes

IDG News Service