Archived alternative in case the original gets taken down:
https://archive.ph/tL6wk#selection-1153.27-1153.172
There’s a LOT to unpack here. Going to try and do this without getting this page flagged by someone/thing, so apologies for gratuitous censorship.
Mark noticed something amiss with his toddler. His son’s [redacted] looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progression.
OK, this is completely reasonable imo. Everyone’s gone to the doctor with a rash or abscess or whatever, and one of the first questions you get is “Has it changed at all since you first noticed it?” As a relatively new father, I’m constantly getting asked by my wife, “Does this red spot look bigger? Did he have this scratch before?” etc.
With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up.
Tada. OK, this all seems legitimate and above-board to me.
After setting up a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely heavily on Google. He synced appointments with his wife on Google Calendar. His Android smartphone camera backed up his photos and videos to the Google cloud. He even had a phone plan with Google Fi.
Also feels like a pretty reasonable place to be for most people. Everyone wants an integrated ecosystem because it’s convenient and/or required. There are really only three options (if you count Microsoft), so you have to choose. Even at this stage in my life, I’m still using a Google account to access a couple of paid applications on the Play Store, and for a few academic software Google Groups where that’s the only option.
Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse & exploitation.”
OK, so this is where the story gets a little confusing to me - I’ll spare you the confusion here. At the bottom, the article says he was using Google Photos, and that’s where the scanning for CSAM occurs. Perhaps this isn’t news, although it’s new to me. A few years ago I moved everything off Google Photos to NextCloud, but Google Photos has some killer features (actually good OCR, the ability to text search, and good sorting), and my family all still use it. It’s a huge selling point of Android.
OK, everyone already knows this next sentence is coming:
A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation.
…but what I didn’t expect:
Mark didn’t know it, but Google’s review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.
OK, so how does this end? The police find nothing, and Google won’t reinstate his account (but he stops trying, understandably). There are a few details, though, that are pretty interesting and that I don’t want to get missed:
When Mark’s and Cassio’s photos were automatically uploaded from their phones to Google’s servers, this technology flagged them.
This has been the entire sales pitch this whole time in the discussion about CSAM scanning. No human intervention is necessary (so no one is looking at your photos or reading your messages), but the AI can still detect new, never-before-seen child porn. It’s a win-win because think of the children, and also because it doesn’t invade your privacy, since no human is involved in flagging it.
Except…
A human content moderator for Google would have reviewed the photos after they were flagged by the artificial intelligence to confirm they met the federal definition of child sexual abuse material.
So, to recap up to this point:
- Father takes images of child’s genitalia to track infection/send to doctor
- Google AI flags it
- Google employee views non-sexual, medical images of a couple’s naked child without explicit permission
- Google employee decides it is sexual, and forwards it to the police (indirectly, apparently)
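To make that recap concrete, here’s a rough Python sketch of how the pipeline apparently flows, per the article. To be clear, every name in it (classifier_score, human_review, report_to_ncmec, the 0.9 threshold) is invented for illustration - this is just my reading of the reported process, not Google’s actual code.

```python
# Hypothetical sketch of the pipeline as reported - NOT Google's actual code.
# classifier_score, human_review, report_to_ncmec and the 0.9 threshold are
# all invented names/values for illustration.

from dataclasses import dataclass


@dataclass
class UploadedPhoto:
    account_id: str
    image_bytes: bytes


def classifier_score(photo: UploadedPhoto) -> float:
    """Stand-in for the ML model that scores new, never-before-seen images."""
    return 0.0  # placeholder; imagine a real model returning a value in [0, 1]


def human_review(photo: UploadedPhoto) -> bool:
    """The step I hadn't internalized: an employee views the flagged photo and
    decides whether it meets the federal definition of CSAM."""
    return False  # placeholder for a human judgment call


def disable_account(account_id: str) -> None:
    print(f"Account {account_id} disabled; appeal denied with no explanation.")


def report_to_ncmec(photo: UploadedPhoto) -> None:
    print("Report filed; local police may open an investigation.")


def handle_upload(photo: UploadedPhoto, flag_threshold: float = 0.9) -> None:
    """Runs on every photo that auto-backup pushes to the cloud."""
    if classifier_score(photo) < flag_threshold:
        return  # the overwhelming majority of uploads stop here
    # The AI flag alone doesn't call the cops; it puts the photo in front of
    # a human reviewer - i.e., a stranger now looks at your kid's photos.
    if human_review(photo):
        disable_account(photo.account_id)
        report_to_ncmec(photo)
```

The point of the sketch is just where human_review sits: the AI flag isn’t the end of the line, it’s the thing that puts your photos in front of a stranger.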
Am I just so blown away because I never really internalized what this scanning means re: false positives? Is it because I’m a relatively new father? I always thought the scanning was a bad idea, but in my head, it went something like:
- Upload image of CSAM
- AI detects and flags. False positives. Maybe law enforcement. Automatic account deletion.
Somehow it didn’t occur to me that there’s a Google employee looking at photos of people’s naked children and/or their medical information. Now, Google isn’t a healthcare provider, so I’m not sure HIPAA applies, but what I don’t understand is how on earth it’s not illegal for a Google employee to look at a legal photo of a naked child that isn’t theirs. How is that not distributing CSAM? Is Google magically exempt from the laws surrounding this? At the very least, this is a gross, abso-fucking-lutely insane invasion of privacy. Google is literally taking other people’s private photos and distributing them to its employees.
One more little tidbit:
In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.
The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.
Hope he didn’t have anything else to hide.