Others have reported similar errors.
"Tfw your finalizing a piece on E. Europe post-socialist parties in Google Drive and Google removes it because it's in violation of its ToS??" Bhaskar Sunkara (@sunraysunray) tweeted on October 31, 2017.
In response to some of these reports, a Google employee tweeted that the team handling Google Docs was looking into the matter. Later Tuesday, Google said in a statement that it had "made a code push that incorrectly flagged a small percentage of Google Docs as abusive, which caused those documents to be automatically blocked. A fix is in place and all users should have full access to their docs."
Although the error appeared to be a technical glitch, the fact that Google can identify "bad" Google Docs at all is a reminder: much of what you upload to, receive through or type into Google's services is monitored. While many people may be aware that Gmail scans your emails (for instance, so that its smart-reply feature can figure out which responses to suggest), this policy extends to other Google products, too.
"We collect information about the services that you use and how you use them, like when you watch a video on YouTube, visit a website that uses our advertising services, or view and interact with our ads and content," it says.
What does it mean when Google says "collect information"? This page says more:
"This includes information like your usage data and preferences, Gmail messages, G+ profile, photos, videos, browsing history, map searches, docs, or other Google-hosted content. Our automated systems analyze this information as it is sent and received and when it is stored."
Google explicitly lists docs (albeit in lowercase) as an example of the kind of content from which it extracts information. I've asked Google to clarify whether it actually reads the contents of a person's Google Docs and will update this story if I get a response.
"This kind of monitoring is creepy," Bale tweeted.
Update: On Tuesday afternoon, Google said that it does not technically read files; instead, an automated system scans them for patterns that may indicate abuse. Although that system can identify clusters of data suggesting a violation, it does not extract meaning from the content, according to a company spokesperson.