Theoretically, the ability to peruse and share huge amounts of information around the world instantly is a good thing. All that knowledge, at your fingertips. It's the sort of thing that could — theoretically — bolster the democratic process, improving awareness of issues and allowing questions about the positions of the candidates to be answered at a moment's notice.

As it turns out, once you overlay two flawed social networks onto that system, the result is largely the opposite.

In April, I stumbled across a site called Prntly. Prntly was the source of a then-popular rumor about Ted Cruz's campaign manager pushing an unflattering attack on Melania Trump, a rumor that was false. Donald Trump himself had tweeted a link to another incorrect story from Prntly; a quick glance at the site revealed that it was nothing but hyperpartisan nonsense, wedged among a bunch of low-rent ads.

The game Prntly was playing turns out to be a popular one. On Thursday, BuzzFeed reported on a cluster of pro-Trump sites based in Macedonia, all doing the same thing: taking rumors from conservative media, inflating them 200 percent, slapping an enticing headline on top and — crucially — posting them to Facebook. Four of the five most-shared stories from these sites were totally false, according to BuzzFeed's reporters, but each had been shared millions of times.

It's not a coincidence that these sites are supportive of Trump. The candidate himself is a whirlwind of inaccuracy and falsehoods, reinforcing in his supporters a dual sense: that the media pointing out his lies is evidence of bias, and that pro-Trump, anti-Hillary Clinton gibberish deserves broad acceptance. Those sentiments predate Trump as a candidate — they're underlying principles for other, more established conservative media outlets — but Trump's candidacy and his embrace of total inaccuracies have created a hyperactive economy of nonsense.

It's also not a coincidence that they thrive on Facebook. Facebook has proved to be a very effective tool for sharing authoritative-looking but incorrect articles from the Web. The Wall Street Journal created an interactive to show how partisanship shifts the lens of what news people see. A lot of that filtering is self-selected: people form small, mutually reinforcing communities, passing around articles that confirm their beliefs.

Facebook has tried, at times, to tweak its algorithm to hide news from less-reputable sources, but such stories always seem to leak back in. Facebook's audience is enormous, a large slice of the world's population, and there's a lot of money to be made by leveraging that giant, connected audience to drive people back to garbage sites to see junky ads. Especially in a place like Macedonia, where the U.S. dollar goes a lot further.

But it's not only the hustlers who are the problem. Facebook has also stumbled in trying to fix its own bugs. When news reports broke about Facebook's human curators burying news articles from conservative outlets, the social networking company switched to a purely automated algorithm to identify what was trending.

The problem: It quickly became obvious that what trended was fake political news. The Post's Abby Ohlheiser noted that what was trending after the human curators were removed was a lot of nonsense. For example: a story about how "traitor" Megyn Kelly was fired from Fox News for supporting Hillary Clinton. Nothing about that is true, but it spread quickly through the well-greased conduits of political Facebook. Over subsequent days, other stories ran the same course.

It's hard to separate out how much of this bad-information sharing is Facebook's fault. Its audience makes it a key mechanism for sharing misinformation on the Web, but that can also be seen as arguing that superhighways are bad because they allow criminals to move between states easily. In this example, though, it's a bit like having criminals driving around in cars painted with “I AM A CRIMINAL” in big, block letters, and the Facebook Patrol having little luck blocking their progress.

As an information-sharing service, Facebook is flawed. As a social network, Twitter may be worse.

For a few months, rumors have spread that Twitter is up for sale. A number of different companies were said to have expressed interest, including Salesforce and Disney. That purported interest eventually faded, and Twitter remains unsold.

Why? One theory: Twitter's culture of abuse.

Over the course of the hyperactive, aggressive 2016 election cycle, Twitter's inability to eliminate racist and anti-Semitic trolls has been a recurring problem. A subculture of Trump supporters, often referred to in shorthand as the "alt-right," has evolved the behavior and political ideology of Gamergate into a weapon in support of its candidate.

Women and minority users of the service have long complained about how quickly new accounts can be created and used to issue a stream of abusive comments before Twitter does anything about them. The campaign has simply broadened the targets of that abuse. Jewish Twitter users, including Jewish journalists, have anecdotally reported seeing increased abuse and harassment. Members of the press more broadly have similarly seen an increase in abusive behavior, certainly thanks in part to Trump's willingness to depict members of the media as his enemies.

The damage from these flaws in Facebook and Twitter is hard to calculate. Twitter's problems are more isolated and, admittedly, disproportionately affect members of the media like me. Facebook's bad-information problem is more pervasive and more serious, fostering at least a culture of uncertainty about the truth of basic facts and, possibly, creating an entire world of belief founded on falsehood.

Maybe we're at a moment in which an unusual candidacy is overlapping in a negative way with flaws in still-young Internet services. Or maybe the idea that the Internet would foster comity and broader knowledge on important issues was itself always flawed. Maybe these problems aren't fixable.