Advancements in technology make it easier for just about anyone to create convincingly falsified video. Moreover, people in today’s polarized political climate seem increasingly willing to believe what they want to believe — especially when it aligns with their political values and is shown in video. This potent combination of new technology, the spread of social media and an impressionable population allows video misinformation to spread rapidly.
We want to help people navigate this perilous information landscape.
The Fact Checker, working with Nadine Ajaka, Elyse Samuels and other video counterparts at The Washington Post, set out to develop a universal language — a taxonomy, so to speak — for labeling misleading online video and holding its creators and sharers accountable. These labels can then be applied to videos when they appear, so viewers immediately know they should watch with caution.
The Post has created a special landing page to explain the terms, and the video above will give examples. We are eager to receive reader submissions of videos that might be eligible for labels.
We developed three broad categories, which we envisioned as branches of a tree that can grow to include other categories as purveyors of false videos get ever more creative. Some video is taken out of context; other content is deceptively edited or, in the worst instances, deliberately altered. Each category features two types of misleading or false video.
Regular readers might ask: Why not use the Pinocchio scale for videos? We saw a need for videos to have a separate verification system to complement Pinocchios. Video fact checks need the same quality and standards as text fact checks — but, more importantly, we need a way to classify these videos and determine the severity of manipulation. This is in line with how The Fact Checker has been contextualizing politicians’ statements but is adapted to the video space.
Obviously, if a politician uses video in a misleading way, he or she will be eligible for Pinocchios — but the video will also get its own label.
The first category is Missing Context. The video is unaltered, but the way it is presented to the viewer lacks or misstates the context in which events occurred. This is an increasingly common occurrence in our political discourse.
The two labels in this category are Misrepresentation and Isolation.
President Trump, for instance, shared on Twitter a video that allegedly showed people in Honduras getting cash from liberal billionaire George Soros or nongovernmental organizations to join a migrant caravan to the United States. That’s an example of Misrepresentation — presenting unaltered video in an inaccurate manner. The video was shot in Guatemala, and there is no evidence the money in the video came from U.S. organizations.
Meanwhile, Sen. Kamala D. Harris (D-Calif.) shared on Twitter a clip of Supreme Court Justice Brett M. Kavanaugh during his confirmation hearings, claiming he uttered a “dog whistle for going after birth control.” This is an example of Isolation — sharing a brief clip from a longer video to create a false narrative that does not reflect the event as it occurred. The video snippet from his testimony did not make clear that Kavanaugh was quoting the plaintiffs’ position in a contraceptive case, rather than offering his own opinion.
The second category is Deceptive Editing. Here, the video has been edited and rearranged. The labels in this category are Omission and Splicing.
Sen. Dianne Feinstein (D-Calif.) came under fire after the Sunrise Movement posted an edited video showing her speaking brusquely to schoolchildren about the Green New Deal. This is an example of Omission, editing out large portions from a video and presenting it as a complete narrative. The full video showed Feinstein more engaged with the children.
President Barack Obama’s 2012 campaign, in a campaign video called “The Road We’ve Traveled,” offered a misleading account of his mother’s insurance battles while suffering from cancer. This is an example of Splicing, editing together disparate videos to fundamentally alter the story that is being told. The film arranged clips from interviews with the president and his wife to suggest she was denied health insurance because her cancer was a preexisting condition, when in reality she had a problem with disability insurance.
The worst category is Malicious Transformation. That’s when part or all of the video has been manipulated to transform the footage. The labels in this category are Doctoring and Fabrication.
In November, White House press secretary Sarah Sanders shared a video that allegedly showed CNN reporter Jim Acosta acting aggressively toward a White House intern. That is an example of Doctoring: altering the frames of a video — cropping, changing speed, using Photoshop, dubbing audio, or adding or deleting visual information — to deceive the viewer. This particular video included repeated frames that made Acosta’s arm movement look more exaggerated.
Fabrication refers to videos that use artificial intelligence to create high-quality fake images, simulate audio and convincingly swap out background images. Deepfakes, such as video putting words in Obama’s mouth, would fall in this category.
Our work on this project was in part funded by a grant from Google News Initiative/YouTube.
The Fact Checker is a verified signatory to the International Fact-Checking Network code of principles