While some education experts applaud the advancement of high-tech grammar tools as a way to help people more clearly express their thoughts, others aren’t so sure. Artificial intelligence, according to the contrarians, is only as smart as the humans who program it, and often just as biased.
“Language is part of your heritage and identity, and if you’re using a tool that is constantly telling you, ‘You’re wrong,’ that is not a good thing,” said Paulo Blikstein, associate professor of communications, media and learning technology design at Columbia University Teachers College. “There is not one mythical, monolithical (English) … And every time we have tried to curtail the evolution of a language, it has never gone well.”
Tech giants have long touted the significance of artificial intelligence, promising a sci-fi-like future where everything is controlled by all-knowing machines. Google, Apple and Amazon all have their own AI assistants, which can answer questions, tell jokes, set timers and help with shopping. Tesla's electric vehicles offer "Autopilot," which can guide cars on highways. Doctors are using AI to help make diagnoses.
But the technology is also trickling down to more mundane tasks, often nearly invisible: That customer service agent you’re chatting with might be a bot, and your search results were likely influenced by what the tech giants know about you.
The increasing use of AI comes with inherent risks, according to the people who study it. Sometimes the technology misunderstands a request or simply gets it wrong.
That was made clear in a study by The Washington Post and researchers, which tested tens of thousands of voice commands given to Amazon Echo and Google Home devices and found notable disparities in how people from different parts of the U.S. are understood. (Amazon CEO Jeff Bezos owns The Post.)
When it comes to AI-corrected grammar, Google's Gmail update underlines suspect grammar with a blue squiggly line while the user composes an email; clicking on the word in question reveals Google's suggestions. Microsoft introduced a machine learning-based editor pane for Office 365 users two years ago. Online tool Grammarly, now a decade old with more than 20 million active users, recently rolled out a slew of new functions, including grammar suggestions tailored to the tone of the piece the user is writing, according to the company.
Grammar editing tools aren’t new — just ask Microsoft’s much-maligned digital assistant Clippy — but the technology behind them is growing increasingly complex.
Grammarly said it uses a hybrid approach to build its algorithms, one that combines a variety of natural language processing methods, including machine learning, deep learning and custom-made rules. In a recent Medium post, Grammarly research scientists discussed using statistical patterns inherent in a language ("the" typically does not follow another "the," for example) to distinguish between correct and incorrect language.
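The "the the" example boils down to counting how often word pairs appear in real text and flagging pairs that essentially never do. A minimal sketch of that idea, using a toy corpus and a hypothetical `flag_unlikely_bigrams` helper (illustrative only, not Grammarly's actual code or data):

```python
from collections import Counter

# Toy corpus standing in for the large text collections such systems
# learn from (illustrative assumption, not Grammarly's training data).
corpus = (
    "the cat sat on the mat . "
    "the dog chased the cat . "
    "a cat and a dog sat on the mat ."
).split()

# Count how often each two-word sequence (bigram) appears in the corpus.
bigram_counts = Counter(zip(corpus, corpus[1:]))

def flag_unlikely_bigrams(sentence, counts):
    """Return word pairs never seen in the corpus -- error candidates."""
    words = sentence.lower().split()
    return [pair for pair in zip(words, words[1:]) if counts[pair] == 0]

# "the the" never occurs in the corpus, so it gets flagged;
# every other pair in the sentence has been seen before.
print(flag_unlikely_bigrams("the the cat sat on the mat", bigram_counts))
```

Real systems work with vastly larger corpora and smoothed probabilities rather than raw zero counts, but the underlying signal is the same: some word sequences are statistically far less likely than others.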
Grammarly CEO Brad Hoover said his company's product was originally aimed at college students, and more than 1,000 educational institutions currently license Grammarly for their classrooms. "Our writing assistant is a coach, not a crutch," Hoover said in an email.
To train its most recent grammar tools, Google said in a blog post that it asked computational and analytical linguists to review thousands of grammar samples over several months. Each sample was reviewed by three linguists, with a third linguist breaking the tie in the roughly 25 percent of cases where the first two disagreed. The samples were then fed into statistical learning algorithms along with other "correct" text to build a spelling and grammar correction model. The models were then tweaked based on user feedback, such as early indications that some verb tense and singular or plural suggestions were incorrect.
It won't be long until those Gmail updates are being used by hordes of students and teachers. According to a ranking compiled by the educational technology company LearnPlatform, Gmail was the seventh most popular edtech product in the country, based on data compiled from millions of users.
The tools can allow people to learn while they write, said Michelle Navarre Cleary, the founder of DePaul University's School for New Learning Writing Program and now the associate provost for learning at the nonprofit educational institution College Unbound. She said research has consistently shown that teaching grammar through sentence diagramming and memorizing parts of speech doesn't necessarily leave students any better off.
“Saying that you have to crawl before you can run when it comes to writing — to name the parts of speech before you can start writing — is not only not true, it’s counterproductive,” said Navarre Cleary. “It shuts people down instead of getting them thinking.”
In the end, most kids are likely to get a mixture of AI-assisted and traditional methods of writing education.
Kathy and Jeff Locke are home schooling two of their kids this year in Silicon Valley-adjacent Alameda, Calif., in part to get them out from behind the school's Google Chromebooks and to reintroduce them to more classical education methods. Grammar class this year will be a mixture of techniques, including traditional sentence diagramming.
“We have what I think is a healthy fear of technology,” said Kathy Locke, a public school teacher. “In tackling things like writing and grammar, we want our kids to have a more classical model of teaching.”
But for her adult family literacy class of English-as-a-second-language students, the new grammar tools could be hugely helpful alongside traditional methods, she added. It's "such a cool technology," she said.