The FTC is worried about algorithmic transparency, and you should be too

Don't assume your Facebook friends are ignoring you -- it could simply be the site's algorithms at work

It's no secret that algorithms power much of the technology we interact with every day, whether it's to search for information on Google or to browse through a Facebook news feed. What's less widely known is that algorithms also play a role when we apply for a loan, for example, or receive a special marketing offer.

Algorithms are practically everywhere we are today, shaping what we see, what we believe and, to an increasing extent, what our futures hold. Is that a good thing? The U.S. Federal Trade Commission, like many others, isn't so sure.

"Consumers interact with algorithms on a daily basis, whether they know it or not," said Ashkan Soltani, the FTC's chief technologist. "To date, we have very little insight as to how these algorithms operate, what incentives are behind them, what data is used and how it's structured."

A few weeks ago the FTC's Bureau of Consumer Protection established a brand-new office dedicated to increasing that understanding. It's called the Office of Technology Research and Investigation, and algorithmic transparency is one of the issues it will be focusing on, by supporting external research as well as conducting studies of its own.

The idea is to better understand how these algorithms work -- what assumptions underlie them and what logic drives their results -- with an eye toward ensuring that they don't enable discrimination or other harmful consequences.

The term "algorithm" can seem intimidating for those not well acquainted with computer programming or mathematics, but in fact "a good synonym for 'algorithm' is simply 'recipe,'" said Christian Sandvig, a professor in the School of Information at the University of Michigan. "It's just a procedure to accomplish a task."

The concern arises when the algorithms guiding aspects of our lives produce results we don't want. The potential examples are numerous. Some of them are fairly clear-cut: discrimination in credit, housing, labor and jobs, for example, or unfair pricing practices.

In what's become a classic illustration, a 2013 Harvard study by Latanya Sweeney found that Google searches on "black-sounding" names such as Trevon Jones were more likely to generate ads for public-records search services suggesting that the person in question had an arrest record.

Google did not respond to a request to comment for this story.

Other examples are more subtle.

"One of the problems is that algorithms are increasingly mediating the media and information that we're exposed to, which can have implications for things like politics," said Nick Diakopoulos, a professor in the University of Maryland's College of Journalism. "We know, for example, that simply increasing the amount of hard news in the Facebook news feed can result in a larger number of people turning out to vote."

Algorithms in the media are also increasingly involved in censorship decisions, Diakopoulos noted, such as when automated systems help moderators filter and screen online comments, determining what counts as valid commentary and what should not be published at all.

Then, too, there are companies such as Automated Insights and Narrative Science producing news stories at scale "based on nothing more than structured data inputs," he said. Automated Insights, for instance, recently announced that it is producing and publishing 3,000 earnings stories per quarter for the Associated Press, all automatically generated from data.
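
Neither company publishes its templates, but the basic technique, pouring structured data into prose templates, is easy to sketch. The hypothetical Python example below (company, field names and wording all invented) shows the general idea:

    # Hypothetical template-driven story generation. This is the
    # general technique, not Automated Insights' actual system.
    TEMPLATE = (
        "{company} reported quarterly earnings of ${eps:.2f} per share, "
        "{direction} analyst expectations of ${expected:.2f}."
    )

    def earnings_story(row):
        # Pick the wording based on how results compare to expectations.
        if row["eps"] > row["expected"]:
            direction = "beating"
        elif row["eps"] < row["expected"]:
            direction = "missing"
        else:
            direction = "matching"
        return TEMPLATE.format(direction=direction, **row)

    print(earnings_story({"company": "Acme Corp", "eps": 1.12, "expected": 1.05}))
    # Acme Corp reported quarterly earnings of $1.12 per share,
    # beating analyst expectations of $1.05.

Note that a single bad field in the input data produces a confidently wrong sentence in the output, which is precisely the failure mode Diakopoulos describes next.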

Besides being a shining example of the stuff journalists' nightmares are made of, that scenario is also associated with a host of accuracy problems. "A quick search on Google shows that the thousands of Automated Insights earnings reports are also yielding a range of errors, leading to corrections being posted," Diakopoulos said. "What if these were market-moving errors? What is the source of those errors: the data, or the algorithm?"

Algorithms can even lead technology users to think and behave differently than they would otherwise.

"Say I notice that one of my posts on Facebook gets no likes" Sandvig explained. Whereas the likely explanation is that Facebook's algorithm simply filtered the post out of friends' news feeds, "we found that people will sometimes assume it's because their friends don't want them to post about that topic, and so they won't post about it ever again," he said.

What may seem to its creator like a fairly straightforward filter, in other words, could quickly snowball into something much bigger that changes people's behavior as well.
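
The snowball effect is easier to see in a toy example. In the hypothetical Python sketch below (Facebook's real ranking system is far more complex and not public; the scoring rule here is invented), posts that fall under a score threshold are silently hidden, so their authors see only silence and are left to guess at the reason:

    # Toy news-feed filter. Posts scoring below the threshold are
    # silently dropped, so they collect zero likes and the author
    # cannot tell "filtered out" apart from "ignored by friends".
    THRESHOLD = 0.5

    def engagement_score(post):
        # Invented heuristic: photos and short posts score higher.
        score = 0.6 if post["has_photo"] else 0.3
        if len(post["text"]) < 100:
            score += 0.3
        return score

    def visible_feed(posts):
        return [p for p in posts if engagement_score(p) >= THRESHOLD]

    posts = [
        {"text": "Vacation photo!", "has_photo": True},
        {"text": "A long essay about local zoning policy " * 5, "has_photo": False},
    ]
    for post in visible_feed(posts):
        print(post["text"])  # only "Vacation photo!" survives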

So what can be done about all this?

Efforts such as the FTC's to increase transparency are one approach.

"One possibility is that companies will need to start issuing transparency reports on their major algorithms, to benchmark things like error rates, data quality, data use and emergent biases," Diakopoulos said.

Another possibility is that new user interfaces could be developed that give end users more information about the algorithms underlying the technologies they interact with, he suggested.

Regulation may also need to be part of the picture, particularly when it comes to ensuring that elections cannot be manipulated at scale algorithmically, he said.

"Government involvement will be very important," agreed Sandvig. "Illegal things are going to happen with and without computers, and the government needs to be able to handle it either way. It's not an expansion of government power -- it's something that's overdue."

Sandvig isn't convinced that increased transparency will necessarily help all that much, however. After all, even if an algorithm is made explicit and can be inspected, the light it will shed on potential consequences may be minimal, particularly when the algorithm is complicated or performs operations on large sets of data that aren't also available for inspection.

Rather than transparency, Sandvig's preferred solution focuses on auditing: systematically testing the results of algorithms, rather than the algorithms themselves, to assess their consequences.

"In some areas, we're not going to be able to figure out the processes or the intent, but we can see the consequences," he said.

In the area of housing, for example, it may be difficult to fully understand the algorithms behind loan decisions. Far easier, and far more diagnostic, is to examine the results: are people of all races getting mortgages in all neighborhoods?
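
In code, such an outcome audit needs no access to the lender's algorithm at all, only to its decisions. Here is a minimal sketch, over invented data, that compares approval rates across groups:

    # Minimal outcome audit over hypothetical loan decisions: we
    # never see the algorithm, only its results, and simply compare
    # approval rates across groups.
    from collections import defaultdict

    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for d in decisions:
        counts[d["group"]][0] += d["approved"]
        counts[d["group"]][1] += 1

    for group, (approved, total) in sorted(counts.items()):
        print(f"group {group}: {approved / total:.0%} approved")
    # group A: 67% approved
    # group B: 33% approved

A large gap doesn't prove discrimination by itself, but it flags exactly the kind of consequence Sandvig argues should be tested for.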

It's clearly early days when it comes to figuring out the best approaches to these problems. Whichever strategies ultimately get adopted, though, the important thing now is to be mindful of the social consequences, the FTC's Soltani said.

"A lot of times the tendency is to let software do its thing," he said. "But to the degree that software reinforces biases and discrimination, there are still normative values at stake."

Katherine Noyes

IDG News Service