Politics

Google Triggers Criminal Investigation After Dad Takes Photos Of His Toddler, Naked, For Doctor


Laurel Duggan Social Issues and Culture Reporter

Google triggered a criminal investigation and locked a man out of his accounts after flagging photos he sent to his son’s pediatrician as potential child pornography, according to The New York Times.

The tech company initially flagged the images when they were automatically uploaded to Google servers from the father’s phone, according to the NYT. After a nearly year-long investigation of everything in his Google account, including search history, location history, messages and photos, police determined he hadn’t committed a crime.

The father, referred to only as Mark by the NYT, had taken photos of his young son, naked, at the request of a doctor over concerns about his infected penis, according to the NYT. Google quickly locked him out of his account after scanning the photos, and he lost emails, contacts and personal photos and had to get a new phone number after losing access to his Google Fi account.

“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark, who happens to be a software engineer who had worked on a tech company’s tool for flagging problematic content, told the NYT. “But I haven’t done anything wrong.”

A similar story played out for a father referred to only as Cassio, whose photographs of his son, taken at the request of a pediatrician, were also flagged by Google’s software, according to the NYT. Those photos likewise triggered a scan after being automatically uploaded to Google servers, and Google locked Cassio out of his accounts in the middle of buying a house. (RELATED: CDC Worked Hand In Hand With Big Tech To Control The COVID Narrative, Emails Show)

Google scans users’ photos only when they take an affirmative action, a Google spokesman told the NYT, but that action can include the automatic uploading of user photos to Google Photos or other Google servers. After artificial intelligence scanners flag a user’s photos, a human moderator reviews them to determine whether they qualify as child sexual abuse material.

“This is precisely the nightmare that we are all concerned about,” Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, told the NYT. “They’re going to scan my family album, and then I’m going to get into trouble.”

Google did not respond to the Daily Caller News Foundation’s request for comment.

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.