In Algorithms We Trust? The Story of Google Search

March 4, 2020


The following post was written by Mike Wacker, a software developer and ex-Google engineer based in Washington State.

"Right now, if you Google the word 'idiot' under images, a picture of Donald Trump comes up. [...] How would that happen?"

When Rep. Zoe Lofgren (D-CA) put this question to Google CEO Sundar Pichai in December 2018, Pichai claimed in sworn testimony that these results were produced by Google's objective algorithms. Rep. Lofgren then sarcastically asked if "some little man sitting behind the curtain figuring" was making these decisions, and Pichai responded, "We don't manually intervene on any particular search result."

In the year that followed, people would learn what really happened behind the curtain. In January, Breitbart proved that YouTube does in fact manually intervene; it took only one complaint from a Slate writer to change the search results for abortion. In April, the Daily Caller uncovered manual blacklists that Google used for some of its search results. Finally, in November, the Wall Street Journal ripped open that curtain with its own extensive investigation, "How Google Interferes With Its Search Algorithms and Changes Your Results."

Algorithms: The "What" vs. The "Who"

In many ways, the debate over Google's search results folds into a much larger debate over the trustworthiness of algorithms. On one side, you have Google's claim, to quote the Journal, "that the algorithms are objective and essentially autonomous, unsullied by human biases or business considerations." On the other side, you have a former Google engineer who said, "[The algorithms] don't write themselves, we write them to do what we want them to do."

This debate has played out time and time again in many different contexts: the debate over whether it's the "what" that matters—the objective, mechanical algorithms—or the "who" that matters—the subjective, biased humans who create these algorithms.

A 1998 Case Study: Google's PageRank Algorithm

Although Google Search works differently today, Google's original PageRank algorithm from 1998 is worth studying. Ranking web pages may appear to be an "inherently subjective" process, but this algorithm described "a method for rating Web pages objectively and mechanically" using a simple technique: random walks. At each step, PageRank would select a new page to visit by following a random link on the current page. After many random walks, the pages with the most visits received the highest rankings.
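
To make the random-walk intuition concrete, here is a minimal sketch in Python. It is not Google's actual implementation: the toy link graph, the function name, and the parameters (number of walks, walk length, and a damping factor for occasional random jumps) are all invented for illustration.

```python
import random
from collections import Counter

def pagerank_by_random_walks(links, num_walks=10_000, walk_length=50, damping=0.85):
    """Estimate page importance from random walks over a link graph.

    `links` maps each page to the list of pages it links to. This is only a
    sketch of the random-walk idea behind PageRank, not Google's algorithm.
    """
    pages = list(links)
    visits = Counter()
    for _ in range(num_walks):
        page = random.choice(pages)          # start each walk at a random page
        for _ in range(walk_length):
            visits[page] += 1
            outgoing = links.get(page, [])
            # At a dead end, or with probability (1 - damping), jump to a random
            # page; otherwise follow a random link on the current page.
            if not outgoing or random.random() > damping:
                page = random.choice(pages)
            else:
                page = random.choice(outgoing)
    total = sum(visits.values())
    return {page: visits[page] / total for page in pages}

# Toy web: several pages link to "popular", so it collects the most visits.
toy_web = {
    "hub": ["popular", "a"],
    "a": ["popular", "hub"],
    "b": ["popular"],
    "popular": ["hub"],
}
print(pagerank_by_random_walks(toy_web))
```

Run on the toy graph, the most-linked-to page ends up with the largest share of visits, purely as a function of how the pages link to one another.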

As simple as this algorithm seems, it produced really good rankings in practice, and nobody could argue that PageRank was biased. Users trusted this algorithm.

And what about the claim that algorithms "do what we want them to do"? PageRank did reflect the beliefs of its creators, but it was the belief that "democracy on the web works." Here, the "who" that mattered was the citizens of the Internet and their web pages. PageRank used the links from one user's web page to another user's web page, treating these links as votes.

PageRank was not just any algorithm; it was an organic algorithm. Organic algorithms accomplish tasks such as ranking by using the natural structure of the Internet: links from one page to another, the keywords on a page, the freshness of a page, etc. More importantly, they don't use the subjective judgment of some Google employee sitting behind the curtain.

How Humans Decide, and How Algorithms Decide

What is more important: education or experience? If you ask a human this question, you will often get this answer: if the candidate I like has a lot of education, then education is more important. If the candidate I like has a lot of experience, then experience is more important.

In his book The Righteous Mind, moral psychologist Jonathan Haidt described it this way: "Intuitions come first, strategic reasoning second." Humans, as Haidt noticed, employ motivated reasoning by default. Their intuition actually makes the decision, and then the rational mind looks for some piece of evidence to support that intuition.

Algorithms work differently. An algorithm is a recipe or a set of instructions to carry out some task, but these instructions have to be so precise that they leave no room for human intuition or guesswork. An instruction to cook for 10 minutes would be an algorithmic instruction, while an instruction to cook until golden brown would not. Algorithms can't rely on intuition, because computers have no intuition of their own; they can only follow instructions.

Decisions Without Intuition

So how do you counteract the potential biases of human intuition? One study by Yale psychologists found a simple way to do it: ask evaluators whether education or experience is more important before they see any candidates. When you define the decision-making criteria before you make any decisions, humans will apply the same criteria to every candidate.

For algorithms that can't use intuition at all, the only option is to define the decision-making criteria first. A human still has to decide whether education or experience is more important, and they need to write that preference into the algorithm. Once the algorithm takes over, though, it will apply the same set of rules for everyone. Regardless of whether those rules are right or wrong, algorithms at least make everyone play by the same rules.
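
As a minimal sketch of that idea, assume a pair of hypothetical weights chosen up front; whatever those weights are, every candidate is then scored by exactly the same rule.

```python
# Hypothetical weights, fixed before any candidate is seen (here favoring experience).
EDUCATION_WEIGHT = 0.4
EXPERIENCE_WEIGHT = 0.6

def score_candidate(education_years: float, experience_years: float) -> float:
    # The same rule is applied to every candidate, whoever they are.
    return EDUCATION_WEIGHT * education_years + EXPERIENCE_WEIGHT * experience_years

candidates = {
    "candidate_a": (8, 2),   # lots of education, little experience
    "candidate_b": (4, 10),  # less education, lots of experience
}
for name, (education, experience) in candidates.items():
    print(name, score_candidate(education, experience))
```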

When humans rank web pages, a Democrat's political bias will likely come into play when they rank pro-Trump web pages—though via motivated reasoning, they will always be able to supply some "objective" reason to support their decision. When an algorithm such as PageRank ranks web pages, however, it applies the same rules for every page it visits.

Watching for Web Spam

Goodhart's law states, "When a measure becomes a target, it ceases to be a good measure." In other words, the more important Google's search rankings become to our everyday lives, the more people will try to game the system to artificially inflate their rankings.

A user can improve the ranking of their page in one of two ways. They could create a useful web page, and as other people start organically linking to it, its ranking will improve. Or, they could create a bunch of fake pages whose only purpose is to link to their page. That second, non-organic technique is called web spam.
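
To see why this undermines a purely organic ranking, here is a toy link farm run through the pagerank_by_random_walks sketch from earlier. All of the page names are invented, and the example assumes that sketch is available; it simply shows fake pages funneling random walks toward the page being promoted.

```python
# Assumes the pagerank_by_random_walks sketch defined above; page names are made up.
link_farm_web = {
    "hub": ["popular"],
    "popular": ["hub"],
    "spam_target": ["popular"],
}
# Twenty fake pages whose only purpose is to link to the spam target.
for i in range(20):
    link_farm_web[f"fake_{i}"] = ["spam_target"]

# Random walks that restart on a fake page immediately move to "spam_target",
# inflating its share of visits even though no real user ever linked to it.
print(pagerank_by_random_walks(link_farm_web))
```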

PageRank's rankings are supposed to reflect the organic structure of the Internet, but problems ensue when people try to manipulate the Internet in non-organic ways. Some of the most famous examples of this problem have their own name: Google bombs. Remember when the top search result for "miserable failure" was the White House page for President Bush?

Watching the Watchers

While search engines must watch out for web spam, Google bombs, and similar problems, how you solve this problem matters just as much as, if not more than, the problem itself. The moment humans become a part of the solution, a new problem is introduced: the age-old question of who watches the watchers. If you give a human the authority to blacklist a site because it's web spam, you also give them the potential to abuse this authority for their own motivated reasons.

For this reason, algorithmic solutions are vastly preferred. In fact, the Wall Street Journal's investigation of Google revealed that in 2004, Google co-founder Sergey Brin opposed ramping up efforts to fight web spam because the solution would have involved more human intervention. When Google managed to defuse those Google bombs in 2007, the solution they developed was, in their own words, "completely algorithmic."

When It's Not the Algorithm

Defusing those Google bombs took some time, but by employing an algorithmic solution free of human biases, Google was able to maintain its hard-earned trust. When YouTube manually changed its search results for abortion mere hours after it received a single complaint from a Slate writer, it threw that trust into an incinerator.

One of the reasons why the Wall Street Journal's investigation of Google was so influential was that it undercut Google's best defense: the appeal to algorithm. Google had solidified this narrative that these choices were dictated entirely by the algorithms, but the Journal's investigation found compelling evidence of manual intervention on a scale previously unknown.

Of course, Google's spokesperson was always able to supply some motivated reason to defend Google's actions, but whatever that reason was, it missed the point entirely. To the extent that people trusted Google, they trusted its ability to solve hard problems with organic, algorithmic solutions. This whole debate between the "who" and the "what" changes dramatically when these decisions are no longer solely dictated by the algorithms.

(Another important distinction—one I won't cover here—is whether machine learning or AI is used. The nature of this debate also changes dramatically when those technologies are used.)

A 2018 Case Study: The Misrepresentation Blacklist

Consider the previously mentioned blacklists uncovered by both the Daily Caller and the Wall Street Journal. Here, the "what" is no longer an algorithm but a written policy: the Good Neighbor Policy and the Misrepresentation Policy. If you had only read that policy document (which I did when I was at Google), you would be hard-pressed to find any evidence of bias.

However, if you had only read that document, you would also have a hard time understanding why the American Spectator's website got blacklisted. To answer that question, you also have to look at "who" is enforcing this policy, and how they are interpreting and/or selectively enforcing this policy. Here, it was the Trust and Safety team that was manually making those decisions.

That explains a lot. These Trust and Safety teams exist not just at Google but throughout the tech industry, and they have developed a reputation as modern-day Star Chambers. Even their name sounds about as innocent as the Ministry of Truth from George Orwell's 1984.

"Authoritative" Content: What Does This Mean?

When stories like these come out, a common theme you'll hear from Google's spokesperson is the need to promote "authoritative" content. But that raises the question: who, and what, is deciding whether content is authoritative? Is this decision being made organically by looking at what users on the web are doing? Or is it being made by the Trust and Safety team?

In the early days of Google, one of the biggest concerns was whether advertisers could buy a higher spot in the search rankings. To emphasize that Google sells advertising, not search results, Google created the honest results policy, which states, "Our results reflect what the online community believes is important, not what we or our partners think you ought to see."

For all this talk about fake news and misinformation, the question remains: will Google develop algorithmic solutions to these problems, solutions that help search results organically reflect "what the online community believes is important"? Or will it instead rely on manual blacklists and other non-organic solutions that show what "we [...] think you ought to see"?
