Opinion

FORMER GOOGLE ENGINEER: How Google Discriminates Against Conservatives


Mike Wacker, Contributor

How does Google discriminate against conservatives? As a software engineer who used to work for Google, I could point to many examples, but let’s start with YouTube’s “Restricted Mode.” Targeted at libraries, schools, and public institutions, Restricted Mode filters out “videos containing potentially adult content,” effectively censoring videos in places where it’s enabled.

In October 2016, several publications wrote about how Restricted Mode worked to hide educational videos from conservatives. A few years later, a Google vice president would not even concede to Congress that a PragerU video on the Ten Commandments had been restricted by mistake. The explanation was so absurd — Google said the video’s discussion of the commandment against murder constituted a “reference” to murder “and potentially Nazism” — that PragerU founder Dennis Prager joked he would re-release the video as the “Nine Commandments” to remove the commandment against murder.

In March 2017, publications such as TechCrunch, Gizmodo and The Guardian wrote about how Restricted Mode was hiding educational videos from the LGBT community. A few days later, YouTube responded on Twitter: “Sorry for all the confusion with Restricted Mode. Some videos have been incorrectly labeled and that’s not right. We’re on it! More to come.”

Sadly, this example fits a larger pattern of discrimination: when the same problem affects multiple demographics, Google fixes it only for some, often ignoring others such as conservatives (and Christians). (RELATED: Think Google Controls The News? It’s Worse Than You Think, Experts Say)

In June 2017, at its annual board meeting, Google was asked whether it welcomed conservative perspectives. Then-chairman Eric Schmidt responded that Google was founded on the principles of freedom of expression, diversity, inclusion and science-based thinking.

To that point, Google’s diversity curriculum does cite a number of scientific studies on racial or gender bias. Interestingly enough, similar studies using the same techniques have also provided scientific evidence for political bias; one such study found that political bias distorted the evaluation of resumes.

Since then, additional studies have found that learning others’ political beliefs can impair your ability to evaluate their expertise in nonpolitical domains. For a company that emphasizes science-based thinking, you would expect Google to promote those studies in its diversity programs.

Instead, Google frequently promotes far-left theoretical approaches such as intersectionality and critical theory, citing “experts” like Robin DiAngelo. One Google-endorsed talk promoted DiAngelo’s book “White Fragility.” In that book, DiAngelo wrote, “Whites control all major institutions of society and set the policies and practices that others must live by. Although rare individual people of color may be inside the circles of power — Colin Powell, Clarence Thomas, Marco Rubio, Barack Obama — they support the status quo and do not challenge racism in any way significant enough to be threatening.”

During my first week at Google, the company became the first major tech firm to publicly release its racial and gender diversity data. When I and others started pushing Google to collect and publish data on viewpoint diversity, though, we consistently ran into a brick wall. (RELATED: PayPal Partnership With SPLC Marks Era Of Censorship)

Google’s own book, “How Google Works,” says that you need data: “You cannot be gender-, race-, and color-blind by fiat; you need to create empirical, objective methods to measure people. Then the best will thrive, regardless of where they’re from and what they look like.” Yet time and again, Google has taken the opposite approach to political diversity, declaring itself politically neutral by fiat while never providing the data to prove it. Even Google CEO Sundar Pichai took this approach when he testified before Congress.

Google’s selective application of its principles has become so pervasive that it extends to Google’s products as well, including the very definition of algorithmic unfairness: “unjust or prejudicial treatment of people that is related to sensitive characteristics such as race, income, sexual orientation, or gender, through algorithmic systems or algorithmically aided decision-making.”

In congressional testimony, one Google director said, “We build for everyone, including every single religious belief, every single demographic, every single region, and certainly every political affiliation.” Google’s definition of algorithmic unfairness, which was leaked to Project Veritas, tells a different story. Google does not build products for “every single demographic.” Instead, it selectively builds products for demographics with “sensitive characteristics.”

At Google, all demographics are equal, but some demographics are more equal than others.

Of course, that leads to the inevitable question: which characteristics are considered sensitive? The leaked document later says that sensitive characteristics “are especially likely to include characteristics that are associated with less privileged or marginalized populations.” If you are familiar with the language of intersectionality, you know what that means.

If you are treated unfairly because of your race, that would qualify as algorithmic unfairness. If you are treated unfairly because you are a conservative, that would not qualify as algorithmic unfairness. If you are treated unfairly because of your religion, it would depend on whether it’s a “privileged” religion such as Christianity or a “marginalized” religion such as Islam. (RELATED: How Major News Organizations, Universities And Businesses Surrender Their Privacy To Google)

Yes, it’s true that if you train a machine to recognize faces but your training data lacks racial diversity, your product will perform worse for minorities. It’s equally true that if you train a machine to detect hate speech but your training data lacks political diversity, your product will perform worse on conservative content.

Both are legitimate problems, but the decision of which problems meet Google’s definition of algorithmic unfairness, of which problems are more equal than others, is made by Google’s employees, not by Google’s algorithms.
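
To make the training-data point concrete, here is a toy sketch of that dynamic. It uses synthetic data and scikit-learn, and it is not based on any Google system; the group labels, sample sizes, and features are all invented for illustration. A classifier trained on data that heavily over-represents one group scores well for that group and lands near chance for the under-represented one.

    # Toy sketch (not Google's code or data): skewed training data makes a
    # model perform worse for an under-represented group. Everything here
    # is synthetic and illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)

    def sample_group(n, boundary_axis):
        # Each group's true label depends on a different feature, standing in
        # for group-specific patterns (faces, speech styles, political idioms).
        X = rng.normal(size=(n, 2))
        y = (X[:, boundary_axis] > 0).astype(int)
        return X, y

    # Training set: heavily skewed toward group A.
    Xa, ya = sample_group(1900, boundary_axis=0)  # group A: well represented
    Xb, yb = sample_group(100, boundary_axis=1)   # group B: under-represented
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Evaluate on fresh samples from each group separately.
    Xa_test, ya_test = sample_group(1000, boundary_axis=0)
    Xb_test, yb_test = sample_group(1000, boundary_axis=1)
    print("accuracy, group A:", accuracy_score(ya_test, model.predict(Xa_test)))
    print("accuracy, group B:", accuracy_score(yb_test, model.predict(Xb_test)))
    # On a typical run, group A scores very high while group B sits near chance.

The same mechanism applies whether the under-represented group is defined by race, gender, or political viewpoint; the code does not care which gap it is measuring.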

Google’s “Machine Learning Fairness” initiative highlights a number of flagship projects, including one familiar project: Restricted Mode. If Restricted Mode runs into a problem that affects women, minorities, or people who use the pronoun “zie,” you can rest assured that Google will fix it.

If you’re Dennis Prager, though, it looks like you’ll have to keep avoiding the Ten Commandments.

Mike Wacker (@M_Wacker) worked for Google as a software engineer for five years, and previously worked for Microsoft. He graduated from Cornell University in 2010.

The views and opinions expressed in this commentary are those of the author and do not reflect the official position of The Daily Caller.