‘Contextualization Engines’ can fight misinformation without censorship

By Aviv Ovadya

Imagine that you’ve been forwarded a terrifying message in a group chat. Or that you’ve seen a post shared on Facebook that makes you furious at some news organization. But something seems a tiny bit fishy…

Option A: Without a contextualization engine

While you would like to know whether the claims are really true — and you may even “want” to look them up — you just don’t have time for that sort of thing. It’s easier to just go with the flow. It’s also a giant pain to copy and paste things or type out a bunch of search terms trying to figure out whether someone else is just confused — especially on a phone. So you don’t check.

Option B: With a (very basic) contextualization engine

You see something that looks fishy — and tap a button to ‘contextify’ it…

  1. If it finds no close enough matches, it warns the user and can suggest the most likely relevant keywords, which the user can then run through a more traditional search (with another tap) if they would like.
  2. It adds the media object to a triage queue for relevant organizations to potentially evaluate (e.g. fact-checkers).

The magnifying-glass feature on WhatsApp was a valuable step forward, but it currently doesn’t work in practice in many cases. It makes it easier to look up messages on Google, but keyword search breaks down for long messages, images, videos, and audio, and wherever there are data voids. We need more tools designed for contextualization.
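
To make this concrete, here is a minimal sketch in Python of what a very basic contextualization engine could look like. Everything in it (the class names, the similarity threshold, the tiny in-memory index of allowlisted sources, and the keyword heuristic) is a hypothetical stand-in; a real system would rely on media fingerprinting, a proper search index, and certification data from bodies like the IFCN.

```python
# A minimal, illustrative sketch of the "contextify" flow described above.
# All names, thresholds, and the tiny in-memory "index" are hypothetical;
# a real contextualization engine would sit on top of a proper search index,
# media fingerprinting, and a certified allowlist of sources.

from collections import Counter
from dataclasses import dataclass, field
from difflib import SequenceMatcher
import re


@dataclass
class ContextResult:
    matches: list             # (similarity, source, url) from allowlisted sources
    data_void: bool           # True if nothing close enough was found
    suggested_keywords: list  # fallback keywords for a traditional search
    queued_for_triage: bool   # True if sent to fact-checkers for review


@dataclass
class ContextualizationEngine:
    # (source_name, url, text) entries; in practice, an index restricted to
    # sources certified by recognized third parties such as the IFCN.
    allowlisted_index: list
    triage_queue: list = field(default_factory=list)
    match_threshold: float = 0.5  # hypothetical similarity cutoff

    def contextify(self, message: str) -> ContextResult:
        # 1. Look for close matches among authoritative sources only.
        matches = []
        for source, url, text in self.allowlisted_index:
            similarity = SequenceMatcher(None, message.lower(), text.lower()).ratio()
            if similarity >= self.match_threshold:
                matches.append((round(similarity, 2), source, url))
        matches.sort(reverse=True)
        if matches:
            return ContextResult(matches, False, [], False)

        # 2. No close match: warn about a possible data void, suggest keywords
        #    for a more traditional search (one more tap), and add the item to
        #    a triage queue for fact-checkers.
        self.triage_queue.append(message)
        return ContextResult([], True, self._suggest_keywords(message), True)

    def _suggest_keywords(self, message: str) -> list:
        # Crude keyword extraction: the most frequent longer words.
        words = re.findall(r"[a-zA-Z']{5,}", message.lower())
        return [word for word, _ in Counter(words).most_common(5)]


# Usage: the user taps "contextify" on a suspicious forwarded message.
engine = ContextualizationEngine(allowlisted_index=[
    ("Example Fact-Checker", "https://factchecker.example/claim-123",
     "The viral claim that the city water supply was contaminated is false."),
])
print(engine.contextify("BREAKING: the city water supply was contaminated last night!!"))
```

The key design choice is in step 1: the engine only ever surfaces matches from the allowlisted index, so ‘contextifying’ adds context from authoritative sources rather than re-ranking the open web.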

Why even the basic contextualization engine helps

Key Insights: Unlike a Google keyword search, ‘contextifying’ does several crucial things:

  • Focuses on authoritative sources — likely initially using whitelist certification through recognized 3rd parties such as the International Fact-Checking Network (IFCN), First Draft, News Guard, standards organizations, etc.
  • Warns about data voids — lets the user know if the system can’t find good information on the topic.
  • Supports the people doing deeper investigations — provides the human fact-checkers and other organizations with information about what is important to explore — and potentially revenue from web traffic in ways that are directly aligned with the users’ goals.

Contextualization systems can be even more helpful

This is just the beginning of the potential for contextualization engines and interfaces. A contextualization system might also support the remainder of the SIFT method:

  • Investigate the source (SIFT): If the contextualization system already has information on why a source might be considered authoritative, it can provide that information to the user — showing why they might trust it (e.g. this source is certified by IFCN).
  • Find better coverage (SIFT): Building on the ‘analyze’ component described earlier, a more fully featured contextualization engine would not only auto-generate audio and video transcripts from media, but also automatically interpret any imagery and captions in order to better understand the content and find contextually relevant sources.
  • Trace claims, quotes, and media to the original context (SIFT): Finally, the contextualization engine can do the tracing for the user. It can essentially scour the web for the original context of any content.
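
As a rough illustration, here is a hypothetical Python sketch of two of the ideas above: surfacing why a source might be trusted (‘Investigate the source’) and finding the earliest known appearance of a quote (‘Trace claims … to the original context’). The certification records, the mini-corpus, and the dates are all invented stand-ins for real certification data and a web-scale crawl.

```python
# A rough sketch of two SIFT-support features: surfacing trust signals for a
# source, and tracing a quote back to its earliest known appearance. The
# certification records, corpus, and dates are hypothetical stand-ins for real
# certification data and a real web-scale crawl.

from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Article:
    source: str
    url: str
    published: date
    text: str


# Hypothetical certification metadata (e.g. third-party fact-checking signatories).
CERTIFICATIONS = {
    "Example Fact-Checker": ["Certified fact-checking signatory (hypothetical)"],
}

# Hypothetical mini-corpus standing in for a crawl of the web.
CORPUS = [
    Article("Random Blog", "https://blog.example/repost", date(2021, 3, 9),
            "As they said, 'the dam has already failed' and nobody is reporting it."),
    Article("Example Fact-Checker", "https://factchecker.example/dam", date(2021, 3, 2),
            "Officials used the phrase 'the dam has already failed' in a 2019 drill "
            "scenario, not in a current emergency."),
]


def investigate_source(source: str) -> list:
    """Return whatever trust signals are on file for this source."""
    return CERTIFICATIONS.get(source, [])


def trace_quote(quote: str) -> Optional[Article]:
    """Find the earliest article in the corpus that contains the quote verbatim."""
    hits = [article for article in CORPUS if quote.lower() in article.text.lower()]
    return min(hits, key=lambda article: article.published) if hits else None


origin = trace_quote("the dam has already failed")
if origin:
    print(f"Earliest known context: {origin.source} ({origin.published}): {origin.url}")
    print("Trust signals:", investigate_source(origin.source))
```

The same pattern would extend to ‘Find better coverage’: once the engine has derived a transcript or caption from a piece of media, the tracing and matching steps can run over that derived text as well.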

The potential — and risks — of artificial intelligence advances

While recent advances in artificial intelligence make a ‘contextify button’ possible, imminent advances will also make contextualization systems critically important for addressing threats to democracy and financial systems. Deepfake videos, incredibly effective AI-optimized phishing attacks, and automated troll armies may become pervasive — and indistinguishable from the real thing to an ordinary person.

How can we make this happen?

A “contextify button” to push media to a contextualization engine could be built into everything — just a normal and expected part of the interfaces for viewing and sharing content.
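
On the client side, such an integration could be as thin as the function below: the share or long-press menu posts the selected media to a contextualization service and renders whatever context comes back. The endpoint URL and payload shape here are purely hypothetical.

```python
# A sketch of what a "contextify button" integration might look like from a
# client app's point of view. The endpoint URL and payload shape are hypothetical.

import json
import urllib.request


def contextify_button_pressed(media_type: str, content: str) -> dict:
    """Called by the UI when the user taps 'contextify' on a piece of media."""
    payload = json.dumps({"media_type": media_type, "content": content}).encode("utf-8")
    request = urllib.request.Request(
        "https://contextualization.example/api/v1/contextify",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # Expected response: matches from authoritative sources, a data-void
        # warning, and suggested keywords, as in the earlier sketches.
        return json.load(response)
```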