How to UX the F*** out of Fake News

We’ve all heard the terms “Fake News” and “Alternative Facts” so many times it’s annoying. That being said, it’s a problem that needs to be addressed.

In this post I’m going to give you a few design techniques you can put in your UX toolbox to design experiences that squelch fake content.

But first, let me quickly illustrate what we’re talking about

Just yesterday (March 29, 2017), it was revealed that the Kremlin employed 1,000+ people to create fake stories targeting key swing states in the 2016 U.S. election.

There were upwards of 1000 internet trolls working out of a facility in Russia, in effect taking over a series of computers which are then called botnets, that can then generate news down to specific areas. ~ U.S. Senator Mark Warner

Fake news is often created with a deliberate bias, then shared within target demographics (who build an echo chamber around it), and sometimes even bolstered by likes and comments from networks of bots (as in the 2016 election example above).

Most fake content exists to influence the opinions of you and me.

Most fake content is political and intended to influence opinions

Fake News on Facebook

Some is just satirical content mistaken as real that goes viral

All of these stories were fake/prank videos that were widely reported on and shared/liked countless times

Other types attempt to influence on much more serious matters

The Sandy Hook conspiracy, flat-earth theory, and anti-vaxxing are all movements that started and grew based on false, yet persuasive content

2017 is the year designers start caring about the content that their users create and share on their platforms.

It’s a big year for us. As product creators, we’ve traditionally exempted ourselves from responsibility when it comes to content in our products.

How many times have you heard a designer or a developer say: “We’re just creating the platform. Users provide the content.”

Not anymore. When it comes to content that misleads the masses, inspires hate, and fails to inform correctly… We’re accountable.

Product designers are on the hook, not to censor or bias content, but to provide a mechanism that informs their users of the reliability of the content being disseminated in their products.

All credit goes to Onion.com for this masterful piece of Fake News

We need a UX framework to prevent fake content

Every product that relies on content sourced from its users must build into its design a mechanism that actively hinders fake content.

Calling people out on their content is really uncomfortable. But it’s of paramount importance that we do so.

There are disputed facts. But there are also truths and lies. Amidst our culture of political correctness, we must acknowledge this.

News outlets are already doing this. Products with user-generated content must follow suit.

We must design our products to support firm truths, counter firm lies, and enable dialogue about disputed facts.

The product features we design to counter fake content must be stern, yet gracious — corrective, yet humble — persuasive, yet unbiased — informative and bullet-proof.

This fake content prevention framework must never censor content. It must never prevent the flow of information (even false information).

Knowledge is power; censorship is not.

The framework should:

  1. Provide catches for the posting of fake content
  2. Provide visual cues if something is fake content
  3. Hamper the virality of fake content
  4. Prevent the dissemination of fake content
  5. Present unbiased information explaining why a certain piece of content, or a certain poster, may be using unreliable facts
  6. Hold people accountable for their actions

The framework must do all these things with grace. It must honestly, without bias or agenda, expose lies and reveal the truth.

Presenting the Fake Content Prevention Framework

I’ve designed such a framework, a UX guide for firmly but respectfully fighting and hindering the dissemination of fake content.

Understanding the Fake Content Prevention Framework

There are four main components that make up the skeletal structure of a “Fake Content Prevention” mechanism:

  1. Validation Engine: the central brain that measures reliability and catches trending, unreliable content. This is your product’s fact checking mechanism.
  2. Citation Experience: a screen or series of screens that unpack the reliability of disputed content, cite sources, present real and true facts, and make a valid argument against unreliable content.
  3. Reliability Cues: these are design elements and features that communicate to users the reliability of content or a poster.
  4. Private Feedback Mechanisms: ways for users to dialogue privately with other users or with you (the product creator).
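
To make this concrete, here’s a rough sketch in TypeScript of how these four components might hang together. Every name and field below is an illustrative assumption on my part, not a prescribed API.

```typescript
// Illustrative sketch only: names and shapes are assumptions, not a prescribed API.
type Verdict = "reliable" | "disputed" | "unverified";

interface Flare {
  verdict: Verdict;
  confidence: number;  // 0..1, how certain the engine is
  citationUrl: string; // link into the Citation Experience
}

// 1) Validation Engine: the fact-checking brain.
interface ValidationEngine {
  validate(contentUrl: string, body: string): Promise<Flare | null>;
}

// 2) Citation Experience: explains a flare and offers recourse.
interface CitationExperience {
  explanation: string; // why the content was flared
  sources: string[];   // neutral outlets, research, studies
  disputeUrl: string;  // let the poster argue their case
}

// 3) Reliability Cues: what users actually see on posts and profiles.
interface ReliabilityCue {
  label: string; // e.g. "Disputed"
  color: "red" | "orange" | "yellow" | "blue";
}

// 4) Private Feedback Mechanisms: dialogue that never happens in public.
interface PrivateFeedback {
  messagePoster(posterId: string, message: string): Promise<void>;
  reportToSupport(contentId: string, reason: string): Promise<void>;
}
```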

1) The Validation Engine

The backbone of fake news prevention is the Validation Engine. It is the mechanism by which you catch blatantly false content and measure the reliability of both a post and its poster.

Important: the Validation Engine does not:

  • Censor content
  • Have an “angle” or bias
  • Ever stop learning

The Validation Engine has 2 critical duties:

  1. All content goes through the “Validation Gateway”. Anything that is known to be blatantly false or disputed is flared as such, and a link to the Citation Experience (see below) is provided to explain why the flare was attached. No content is ever blocked, only flared.
  2. All posted content is actively monitored by the Validation Engine. If new information comes out over time that proves a shared news story to be false or disputed, it is flared as such, and a link to the Citation Experience (see below) is provided to explain why the flare was attached. I repeat: no content is ever blocked, only flared.
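
As a rough sketch of that gateway (assuming a hypothetical Post shape and validation call, not any product’s real API): when the engine returns a confident verdict, the post is annotated with a flare and a citation link; otherwise it publishes untouched. Nothing is ever blocked.

```typescript
// Hypothetical gateway: annotate, never block.
interface Post {
  id: string;
  body: string;
  flare?: { verdict: "disputed"; citationUrl: string };
}

async function validationGateway(
  post: Post,
  engine: { validate(body: string): Promise<{ verdict: "disputed"; citationUrl: string } | null> }
): Promise<Post> {
  const result = await engine.validate(post.body);
  if (result) {
    // Attach the flare and a link explaining why; the content itself still publishes.
    return { ...post, flare: result };
  }
  return post; // No confident verdict: leave the post untouched.
}
```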

Facebook’s new disputed content feature flow:

The Validation Engine is built by combining these 4 things:

  1. Independent third party review: use an external agency or group skilled in investigative journalism that has a library of resources to monitor trending content, review it and counter common myths, etc. (See International Fact Checking Network, Snopes.com, Les Decodeurs by Le Monde (proprietary), CrossCheck, FactCheck.org)
  2. In-house investigative journalism: use your own talent, or hire new talent, to monitor the content that is trending in your product, review and counter common myths, etc.
  3. Machine Learning, A.I. & Automation: Humans have always been the gold standard when it comes to fact checking; it’s one thing we can still hang over the robots’ heads. But machine learning is catching up quickly and should be used to help humans narrow the scope of their investigative and publishing responsibilities in the Validation Engine. Programmatically pull from existing reliable news sources, factual research databases, etc. to build a case against common myths, monitor trending content, and catch unreliable content. Note that this solution by itself is not enough [yet], because A.I. still struggles to fact-check on its own, but machine learning and automation can be heavily used to optimize Validation Engine processes. (See “Can Machine Learning Detect Fake News?”, “Is Fake News a Machine Learning Problem?”, Politifact API, Reuters News Tracer (proprietary algorithm), Flock Fake News Detector (proprietary algorithm), Automatic Deception Detection Methods (2015))
  4. Peer Reporting: allow community users to report a post that they believe to be unreliable. This method will help reveal some unreliable content, but users will also tend to use it to report content they simply disagree with, regardless of its reliability. Also, your peers are commonly people with views similar to yours, so they’re often not great critics (check out the echo chamber effect). Consequently: peer reporting cannot be used as the sole source of validation. It can only be used in addition to the other methods.

Snapchat has a rigorous review and approval process that “Discover” publishers must adhere to
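
Here’s one way those four inputs could be blended into a single confidence score. The weights and caps below are pure assumptions for illustration; the only hard rule is that peer reports alone can never trigger a flare.

```typescript
// Illustrative only: weights and thresholds are assumptions, not tested values.
interface ValidationSignals {
  thirdPartyDisputed?: boolean; // independent fact-checkers' verdict, if any
  inHouseDisputed?: boolean;    // your own investigative team's verdict, if any
  modelScore?: number;          // 0..1 likelihood of being unreliable, from ML
  peerReports: number;          // count of user reports
  totalViews: number;
}

function disputeConfidence(s: ValidationSignals): number {
  let confidence = 0;
  if (s.thirdPartyDisputed) confidence += 0.5;
  if (s.inHouseDisputed) confidence += 0.4;
  if (s.modelScore !== undefined) confidence += 0.3 * s.modelScore;
  // Peer reports only nudge the score: they can never flare content alone.
  const reportRate = s.totalViews > 0 ? s.peerReports / s.totalViews : 0;
  confidence += Math.min(reportRate * 2, 0.1);
  return Math.min(confidence, 1);
}
```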

Avoid polarizing terms when your validation engine flares content.

  • Don’t label content “false” or “fake.” Terminology like this risks further entrenching readers who may actually agree with the inaccurate content, and may make the user who originally posted the content feel attacked.
  • Use neutral terminology like “disputed” or metered visualizations that illustrate confidence in the content’s legitimacy.
  • As designers, we must also seriously consider our use of color, our use of icons, and the weight and tone they each communicate (e.g. red=serious problem, orange=hard warning, yellow=softer warning, green/blue=pleasant notification).

For example, Facebook has chosen to use the term “disputed” with a red tag and warning icon in their content-monitoring feature.
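
In practice, the flare could be driven by a small lookup that maps a verdict to neutral wording and an appropriate visual weight. The labels, colors, and icons below are assumptions, not a standard.

```typescript
// Assumed mapping: neutral wording, with color and icon carrying the severity.
type Verdict = "disputed" | "unverified" | "verified";

const cueFor: Record<Verdict, { label: string; color: string; icon: string }> = {
  disputed:   { label: "Disputed by fact-checkers", color: "red",    icon: "warning" },
  unverified: { label: "Not yet verified",          color: "yellow", icon: "info" },
  verified:   { label: "Corroborated by sources",   color: "blue",   icon: "check" },
};
```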

What if you can’t build enough of a case to prove a story is Fake News?

Don’t touch it. I repeat: if your validation engine is unsure, don’t touch it. There is such a thing as degree of certainty: only flare content as disputed if your validation engine has a high degree of certainty that it is faulty.
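
In code terms, this is just a guard on the confidence score before any flare is attached. The 0.9 cutoff below is an arbitrary stand-in for “high degree of certainty.”

```typescript
// Assumed threshold: when in doubt, do nothing.
const HIGH_CERTAINTY = 0.9;

function shouldFlare(confidence: number): boolean {
  // Below the bar? Don't touch it: no flare, no label, no collapse.
  return confidence >= HIGH_CERTAINTY;
}
```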

The goal of your Validation Engine is to inform about common myths and slow down potentially-viral Fake News stories.

Note: it’s not possible, and not even necessary, to catch all fake content postings. It’s better to have some fake content slip through the cracks than to have true content censored and the reliability of your product put at risk.

2) The Citation Experience

If a piece of content is flared as disputed, it’s important to let users know why it’s unreliable. The citation experience allows curious users to dig into the sources and reasoning backing up the flare. It’s a critical component to building reliability into your product’s validation system and informing your users.

Your citation experience should include the following things:

  • Proof: provide an answer to why the content is disputed. Proof can include a custom explanation but must always include sources from neutral news outlets, research, or studies.
  • [Humble] Education: help users do better next time. Fake News has become widespread because it preys on emotion, not intellectual sense. Education could be in the form of tips and tricks for checking sources online, explanation of common logical fallacies (like this one: correlation ≠ causation), links to common fact-checking websites, etc. Your citation experience should help to build a mutual understanding of why reliable information is important, why everyone is accountable to their own content, and how each user can improve their online honesty, integrity, and research techniques.
  • Recourse: provide an outlet for users to reach your customer service, dispute the flare, or privately message the poster directly.
  • About reliability scoring: Explain how your product determines disputed posts. Be transparent and thorough so that you inspire trust in your user.

Facebook’s “Citation Experience”. It’s pretty generic and could use some work in my opinion.
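
A citation payload covering those four pieces might look something like the sketch below. The field names are hypothetical, not a spec.

```typescript
// Hypothetical shape for what a Citation Experience screen renders.
interface Citation {
  proof: {
    explanation: string;        // why the content is disputed
    sources: string[];          // neutral outlets, research, or studies
  };
  education: {
    tips: string[];             // e.g. "correlation does not imply causation"
    factCheckingLinks: string[]; // common fact-checking sites
  };
  recourse: {
    supportUrl: string;         // contact customer service
    disputeFlareUrl: string;    // argue against the flare
    messagePosterUrl: string;   // private dialogue with the poster
  };
  scoringExplainerUrl: string;  // how reliability is determined, for transparency
}
```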

3) The Reliability Cues

Integrate reliability visually into your product by tying it to user profiles and published content. These are publicly visible cues that users can come to trust to inform them of the reliability of the content they are viewing.

Give all users a “Reliability Rating”

A rating based entirely on the factual accuracy of their past posts. This reliability rating is public, and thus creates an environment where people fight for the value of their reputation and start to feel a sense of ownership over the content they disseminate.
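
As a naive illustration, the rating could simply be the share of a user’s recent posts that were never flared as disputed (assuming you store that history). A real metric would need recency weighting, appeal handling, and more.

```typescript
// Naive illustration only: assumes each user's post history is stored with flare outcomes.
interface PostRecord {
  disputed: boolean;
  postedAt: Date;
}

function reliabilityRating(history: PostRecord[], windowDays = 365): number {
  const cutoff = Date.now() - windowDays * 24 * 60 * 60 * 1000;
  const recent = history.filter(p => p.postedAt.getTime() >= cutoff);
  if (recent.length === 0) return 1; // no evidence either way: don't penalize
  const clean = recent.filter(p => !p.disputed).length;
  return clean / recent.length; // 0..1, shown publicly on the profile
}
```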

Online bullying and soap-boxing are often side effects of people being able to hide behind their online profiles.

Online profiles often have no mechanism of accountability.

A Reliability Rating begins to help people understand the consequence of their actions online.

Our culture really hates labeling people (often for good reason), but we’ve reached a point where we need to be okay calling a liar a liar when it comes to objective facts. Labeling someone as a “reliable” or “unreliable” source may be a scary thought at first, but if it’s purely based on how factual their words are, should it be?

“Visual design and User Experience can be used as a powerful force to give people quick indications of the quality of what they’re reading and sharing.” ~ Jeremy Johnson

Give posts a “Reliability Rating”

For false content, this is the “disputed” flare we talked about previously. You could go so far as to provide a reliability rating (or flare) for proven-to-be-true content, or unproven-but-not-disproven content. The goal here is to accurately inform users about each piece of content they absorb and to gain trust.

Credit goes to Jeremy Johnson for these mockups

Flock.co has this feature in their team messaging app. They call their validation engine the “Fake News Detector,” and it works by “cross-referencing the URLs of links shared on Flock against a database of more than 600 verified fake news sources. Any fake news is immediately flagged with a highly visible icon and red bar alongside the preview of the URL. Using this tool, Flock users can easily identify fake news and refrain from sharing such content.”

The Fake News Detector (FND) flags unreliable content when shared on Flock (flock.co)

To take this feature one step further, completely unreliable content can be collapsed by default, to discourage knee-jerk likes as users scroll down the page and to prevent the snowballing virality that is so common with provocative fake content. Collapsed content takes a conscious effort to expand and acts as a gateway that alerts the user that they’re viewing blatantly false content.
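
A simplified sketch of both ideas, checking a shared link’s domain against a maintained list of known-unreliable sources and collapsing flagged previews by default, might look like this. (This is not Flock’s actual implementation, and the domains are made up.)

```typescript
// Sketch only: the domain list and collapse rule are assumptions.
const KNOWN_UNRELIABLE_DOMAINS = new Set(["example-fake-news.com", "totally-real-stories.net"]);

function checkSharedLink(url: string): { flagged: boolean; collapsed: boolean } {
  const host = new URL(url).hostname.replace(/^www\./, "");
  const flagged = KNOWN_UNRELIABLE_DOMAINS.has(host);
  // Collapse flagged previews by default so expanding or sharing takes a conscious, informed click.
  return { flagged, collapsed: flagged };
}
```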

4) The Private Feedback Mechanisms

It’s important to enable private dialogue between individuals. Conversation and debate are healthy ways to build understanding. Privacy is important here to prevent brigading and bullying.

Instead of only providing an option for a community member to “report inaccurate content”, present users with the opportunity to directly engage the offending user in dialogue via private messaging.

Facebook allows users to direct message each other about Fake Content

Additionally, allow content-posting users to file complaints with your validation engine and product support team if they feel like their content has been unjustly flared or disputed. Give posters the opportunity to argue their case.
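
Taken together, the private feedback surface boils down to a small set of actions, roughly the union sketched below. Every name here is illustrative.

```typescript
// Hypothetical set of private feedback actions: none of them are public comments.
type FeedbackAction =
  | { kind: "reportContent"; contentId: string; reason: string }    // to the validation engine
  | { kind: "messagePoster"; posterId: string; message: string }    // private dialogue
  | { kind: "disputeFlare"; contentId: string; argument: string }   // poster's recourse
  | { kind: "contactSupport"; contentId: string; message: string }; // product support team
```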

We’ve become a consumption-based culture that has a hard time identifying with “the other” (whatever that “other” might be). Encouraging engagement and meaningful dialogue is one step in the right direction toward building common ground and inspiring empathy in each other.

Ultimately, you can’t completely stop fake content

But your product can discourage it and inform busy users. As product owners, that’s really as far as we can take it at this point in time. You could even argue that’s the full extent of what we’re responsible for, and I may just agree with you.

Here’s the takeaway: your product doesn’t have to stop all fake content. You’ll die of anxiety before you can even get close to achieving this.

But you can’t not try.

Preventing a few of the worst posts from going viral is better than not catching anything at all. And from there you can only get better at it.

Designers today must take some level of accountability for the content being disseminated in their products.

“Even if it’s not your fault. It is your responsibility.”

~Terry Pratchett, Author

You’re responsible for walking the tightrope. Uphold freedom of speech. Avoid bias. Inform users. Don’t let lies fly unchecked.

Sources, Contributions, and Image Credits

I am a UX guy who works at Universal Mind — A Digital Experience Firm. Follow me on Twitter at @joesalowitz or visit my website at joesalowitz.com
