Her husband had dozed off by her side, but Jan Buckner Walker could not sleep. Emotionally drained after watching the trial of the police officer accused of murdering George Floyd, she fretted about what the guilty verdict would mean for the Black men in her life. Well past midnight, the corporate attorney turned crossword creator reached for her phone and opened “Typeshift,” a word game that carried the Merriam-Webster brand.

She flicked her thumb across the glowing screen, dialing letters up and down to find all the possible word combinations in that day’s puzzle. Within minutes, the grid of mostly brown squares turned turquoise, accompanied by a click signaling success.

Seven white letters had appeared: “Lynched.”

“It was unbelievable,” said Buckner Walker, founder and president of Kids Across Parents Down, a Chicago-area company that creates crossword puzzles. “I thought, ‘Are they trolling African Americans?’ Why ‘lynched’?”

As Floyd’s death trained attention on systemic racial injustice, society is struggling anew to agree on common standards for language that once seemed broadly acceptable but now is considered off-limits by many. Those debates, which have resulted in the renaming of sports teams and popular food brands, are showing up in surprising places, with dictionary companies, crossword constructors and competitive Scrabble players forced to examine how a single word, devoid of context, can offend and marginalize.

“Words either connect or divide us,” said Buckner Walker, 58, who has repeatedly alerted Merriam-Webster to racist insults and other hurtful terms appearing in “Typeshift.” “When our dictionary infuses a pastime that promises joy with toxic language instead, it is retrogressive and powerfully divisive.”

In Merriam-Webster’s case, an effort by the 193-year-old dictionary brand to extend its reach through mobile gaming collided with its traditional role of recording terms and defining their meanings without passing judgment.

“We are dedicated descriptivists, not prescriptivists,” said Peter Sokolowski, a Merriam-Webster lexicographer and editor. “We’re not the language police telling you, ‘Yes, you can. No, you can’t.’ We are simply reporting the facts on how language is used.”

Now, however, facing the unfamiliar responsibility of not offending players, Merriam-Webster has also become the arbiter of which words to block from online puzzles. And as other purveyors of word games have discovered, decisions about whether to exclude racial slurs and other offensive terms — including those that might have benign meanings in certain contexts — risk alienating some players and raise challenging questions about power and perspective.

When the North American Scrabble Players Association banned slurs from the word list used in competition in summer 2020, infuriated aficionados argued that words played in the game lack meaning and intent.

“When you start taking words out, where do you draw the line?” said Jason Idalski, an advisory board member of the Scrabble association from Ferndale, Mich., who voted against the ban. “No matter where you draw the line, people will say, ‘Why is this word in but not that? Are you inherently saying it’s okay to offend these people and not those people?’”

The standards for word-based games continue to evolve.

Hasbro Gaming, the company that owns the rights to Scrabble in North America, clarified official rules last summer to also bar derogatory terms. Hasbro spokeswoman Carrie Ratner said the company “will not accept having slurs of any kind in entities created and supported by our company” and noted that Hasbro had begun removing offensive words from “The Official Scrabble Players Dictionary,” published by Merriam-Webster, in 1994, scrubbing more words with each edition. (Tournaments follow a different lexicon.)

Mattel, which owns Scrabble in the rest of the world, followed suit this year, removing what the company called “hate speech” from its list of playable words. “As language and connotations change over time, so must Scrabble,” Ray Adler, global head of games at Mattel, wrote in a company blog.

The New York Times crossword began running puzzle clues and answers past a diversity panel this year after controversies involving words such as “beaner” and “illegal,” which are used as slurs but also have innocuous meanings.

“There is a sense that there should be more care taken with choices of words that could be potentially offensive, whereas before they were just, ‘oh, words in the dictionary, so it’s okay,’” said linguist Ben Zimmer, who leads the New Words Committee of the American Dialect Society.

A Merriam-Webster representative alerted The Washington Post to degrading terms appearing in Word Wipe, an online game developed by Arkadium that is published on The Post website and in other outlets. Arkadium said it regularly audits and culls the words that could appear in its games and has taken steps to remove additional terms. “We never want anything to appear that could be offensive, even if the word had a benign origin,” Arkadium chief executive Jessica Rovello said in a statement.

Many word puzzle makers already adhere to what’s informally known as the “breakfast table test.” Words that elicit feelings of discomfort if someone is playing a puzzle with their morning cereal — such as “cancer” — would be verboten even though they may also have harmless meanings.

Yet Buckner Walker said Merriam-Webster officials were slow to rid “Typeshift” of demeaning terms even after she alerted company executives in December to the appearance of the slurs “Negress” and “darkies.” (The computer-generated anagram game challenges players to form as many words as possible using the letter combinations dealt each day. Players who click on the found words are directed to the Merriam-Webster definitions.)
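Mechanically, that puzzle format is a small combinatorial search: each day’s deal is a set of letter columns, the player dials the columns so one letter from each lines up, and any alignment that spells a dictionary word counts. A minimal sketch of that check in Python follows, with the word list, block list and letter deal all hypothetical stand-ins rather than the game’s actual data:

```python
from itertools import product

def find_words(columns, dictionary, blocklist):
    """Return every dictionary word formed by taking one letter per
    column, skipping anything on the block list."""
    found = set()
    for letters in product(*columns):  # one letter from each column
        word = "".join(letters)
        if word in dictionary and word not in blocklist:
            found.add(word)
    return found

# Hypothetical daily deal: each inner list is one dial of letters.
columns = [list("bch"), list("ao"), list("td")]
dictionary = {"bad", "bat", "cat", "cod", "hat", "hot"}
blocklist = {"bad"}  # words an editor has flagged for removal
print(sorted(find_words(columns, dictionary, blocklist)))
# -> ['bat', 'cat', 'cod', 'hat', 'hot']
```

The failures described in this story live entirely in the two inputs: a slur that appears in the dictionary but not on the block list sails through a filter like this one.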

Damien Yambo, Merriam-Webster’s director of educational products, apologized and told her that independent game developer Zach Gage, who controls the “Typeshift” game where the slurs appeared, should have excluded those words.

Merriam-Webster licenses its dictionary to Gage in exchange for a daily game that runs on the company website, which company officials said should filter all words the dictionary labels as “offensive,” “vulgar” or “obscene.” The game is also available on the “Typeshift” mobile app, which was prominently branded with Merriam-Webster’s red, white and blue logo, but the dictionary company said it has no oversight over that version of the game.
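The safeguard company officials describe amounts to filtering the playable list on the dictionary’s own usage labels. Assuming each entry carries a set of such labels (the data shape and field names below are invented for illustration), the filter itself is a single pass:

```python
# Usage labels the game is supposed to screen out, per the company.
BLOCKED_LABELS = {"offensive", "vulgar", "obscene"}

def playable(entries):
    """Yield words whose usage labels never touch the blocked set."""
    for word, labels in entries.items():
        if not BLOCKED_LABELS & set(labels):
            yield word

# Toy entries; real label data would come from the licensed dictionary.
entries = {
    "uppity": ["offensive"],  # flagged by Merriam-Webster in 2020
    "cereal": [],
}
print(sorted(playable(entries)))  # -> ['cereal']
```

The gap the reporting documents sits upstream of such a filter: a slur the dictionary has not labeled, or a version of the game that never consults the labels, passes through untouched.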

Gage said he added both slurs to the list of words to block from the game shortly after Yambo contacted him. “Nobody playing ‘Typeshift’ should be experiencing a word that is traumatic for them,” Gage said.

But after former police officer Derek Chauvin’s murder conviction in April, “lynched” popped up. This time, Buckner Walker contacted Merriam-Webster president Gregory Barlow about the “repeated and painful delivery of racist language” through the game app, according to an April 27 email shared with The Post.

Barlow responded within hours, followed by two phone calls. “We took the matter very seriously,” he told The Post in a written statement. “I made it clear that it was deeply concerning to me, and promised to take action with the third-party owner of the game.” Barlow said the company demanded that Gage remove the Merriam-Webster logo from the “Typeshift” app.

Even after Buckner Walker spoke to Barlow, disparaging terms showed up on the website version of the game that the company does control. The transphobic slur “tranny” appeared on May 19. The racial slur “coons” appeared on Sept. 13.

Players on the mobile app, meanwhile, received credit for finding “slave” on July 19, the antisemitic term “hebes” on Aug. 31 and “bitches” on Sept. 24, among other racist, misogynistic and homophobic words observed by The Post in screenshots and in real time.

Merriam-Webster removed “Typeshift” from its website on Sept. 24, three days after The Post contacted the company for this report, to update its block list and establish a process to be more responsive to cultural norms and current events.

Gage removed the Merriam-Webster logo from the “Typeshift” app that same day. “I firmly believe in not exposing people to offensive words, especially racial, gendered or sexual slurs,” said Gage, a 36-year-old solo developer whose licensing agreement with Merriam-Webster began in 2017. “Any offensive words that appeared in ‘Typeshift’ puzzles were there in error, and when I was informed of them I updated the block list and generated new puzzles.”
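Gage’s account suggests the repair happens at generation time: once the block list grows, candidate puzzles are re-dealt until none of their formable words is blocked. A hedged sketch of that rejection loop, with the letter dealer as a stand-in for however the game actually deals its columns:

```python
import random
from itertools import product

def formable_words(columns, dictionary):
    """All dictionary words spellable with one letter per column."""
    return {w for letters in product(*columns)
            if (w := "".join(letters)) in dictionary}

def generate_clean_puzzle(dictionary, blocklist, rng, max_tries=1000):
    """Re-deal the columns until no blocked word can be formed."""
    for _ in range(max_tries):
        # Stand-in dealer: three columns of three distinct letters.
        columns = [rng.sample("abcdefghijklmnopqrstuvwxyz", 3)
                   for _ in range(3)]
        words = formable_words(columns, dictionary)
        if words and not words & blocklist:
            return columns, words
    raise RuntimeError("no clean deal found within the try budget")
```

Because the check runs over every formable word, not just the intended answers, a freshly added slur invalidates old deals, which is consistent with players on stale app versions still seeing words that newer block lists had removed.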

Company officials said Merriam-Webster plans to seek outside input on expanding its list of blocked words because it is unaccustomed to exercising editorial judgment beyond what its dictionary has flagged for offensive usage.

Merriam-Webster’s lexicographers add hundreds of new words and definitions to the dictionary each year as language evolves. In 2020, it included “BIPOC” and flagged the word “uppity” as especially offensive when applied to Black people. This year, it added “cancel culture” and “digital blackface” and noted the widely accepted capitalization for “Black.”

Human vetting of computer-generated online word puzzles is standard for big gaming companies, said Gillian Smith, a computer science professor and director of the interactive media and game development program at Worcester Polytechnic Institute in Massachusetts.

Smith said no puzzle creator would have wanted to release a word game with “Afghan” or “Taliban” as answers after the August suicide bombing attack in Kabul that killed 13 U.S. service members and dozens of Afghans. “You might even pause at including a word like ‘Marine’ because it’s so contextualized,” Smith said.

Zynga, which makes “Words with Friends,” a popular Scrabble-like game, has a policy of eliminating racial slurs and other objectionable words that players flag, said spokeswoman Sarah Ellen Ross. The company employs a team responsible for ensuring that words allowed in the game are inclusive, Ross said, but some players have objected to what they view as the arbitrary nature of determining which words are inappropriate.

In the Scrabble world, players expressed confusion and outrage over the list of 259 words that were characterized as slurs and banned from competitive play. “Redneck” was blocked. “Redskin” was not, because it also carries a benign definition as a variety of peanut. “Faggot,” despite being a homophobic slur, remains eligible for play because of its archaic meaning of “a bundle of sticks.” Players were also mystified by the exclusion of “Pepsi,” which is used in Canada as an insult against French-speaking Quebecois.

John Chew, the Scrabble association’s chief executive and the chairman of its dictionary committee, said he expects the organization, which governs competitive Scrabble in the United States and Canada, to revisit the list.

“Words like the n-word, they’re not coming back,” Chew said. “We’d all been indoctrinated into this idea that words in Scrabble have no meaning. But there’s nothing special about putting a word on a board that divorces it from the context of the rest of the world around us.”

Yet clashes are occurring more frequently around other terms as the arbiters of language are forced to consider what words mean outside their own circles.

“There are inherent difficulties because people are just not agreeing on what words mean and their level of offensiveness,” said Nicole Holliday, a linguistics professor at the University of Pennsylvania who hosts Slate’s “Spectacular Vernacular” podcast with Zimmer.

Holliday, a member of the American Dialect Society, recalled debate over one finalist for the 2020 slang word of the year: “WAP.” While some society members worried that the term made famous by rappers Cardi B and Megan Thee Stallion was too sexually explicit, older members nixed it because the pronunciation sounded like “wop,” a slur against Italians.

The New York Times was skewered in 2019 for including the word “beaner” in its crossword puzzle. While the clue referred to an informal baseball term, critics pointed out that the solution is also a slur against Mexicans. Crossword puzzle editor Will Shortz apologized, but his defense further angered some.

“My feeling, rightly or wrongly, is that any benign meaning of a word is fair game for a crossword. This is an issue that comes up occasionally with entries like GO O.K. (which we clued last April as ‘Proceed all right,’ but which as a solid word is a slur), CHINK (benign in the sense as a chink in one’s armor), etc. These are legitimate words,” Shortz tweeted. “Perhaps I need to rethink this opinion, if enough solvers are bothered.”

Shortz later acknowledged that “beaner” is offensive and that it was a mistake to have included that answer. He told The Post in a recent interview about the new standard: “If you see the word out of context, does the word feel benign to the average person?”

Still, Shortz said, there is “nothing inherently offensive” about the word “lynch” despite its painful associations. He said he would clue it as “former attorney general Loretta,” whose last name is Lynch. “It would be a shame to say you can’t use this word,” he said.

Buckner Walker said she thinks the problems often arise from the lack of diversity among puzzle creators who do not experience the world in the same way she does.

“There are triggering events in our lives, and if you abide by the decency of what the breakfast table test demands of you, you can easily avoid traumatizing and offending your consumer base,” she said of “Typeshift” including “lynched.”

That word — along with “lynches” — appeared again in the Sept. 26 puzzle for players who had not updated the game app. Those who did download the update received a different set of letters, including those that formed the word “excuses.”

Ten days later, on Oct. 6, the racial slurs “chinks” and “chinky” appeared on the updated app.