Evaluating Credibility on the Web With Annotation

March 20th, 2018

The Credibility Coalition (CredCo), a collaboration that includes Hypothesis, has just received significant new funding to support its work on a standardized framework anyone can use to generate and evaluate indicators about the credibility of online content like news.

CredCo’s new funding comes from major donors that seek to address the quality of information online, including Google News Lab, the Facebook Journalism Project, Craig Newmark Philanthropies, and others.

The coalition's goal is to make it easy for communities, publishers, content platforms, and the general public to create and access consistent, contextual information about credibility, helping people decide what content to consume, publish, or share.

The framework harnesses annotation to link credibility indicators to specific pieces of online content — not only to entire documents or pages, but even to paragraphs, sentences, or fragments. Annotation is used throughout: in the process of creating credibility indicators, to display results directly to readers, and to publish discoverable, structured credibility data for wider use. Ultimately, readers could interact with content using tools that harness credibility data from sources they trust, like specific news sources, fact-checking services, or communities of experts.

Formal standards for web annotation published by the W3C last year enable multiple tools to play roles in the CredCo framework, including not only Hypothesis for the publication of results, but also Meedan’s Check for collaborative verification, and Public Editor (a system developed by the Berkeley Institute for Data Science and GoodlyLabs, powered by TextThresher) for crowdsourced content analysis.
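To make this concrete, here is a minimal sketch of what a credibility annotation might look like under the W3C Web Annotation Data Model. The envelope (context, type, body, target, and selector) follows the published standard; the indicator text, creator, target URL, and quoted sentence are hypothetical examples, not part of the CredCo framework itself.

```python
import json

# A minimal W3C Web Annotation attaching a credibility indicator to a
# specific sentence in an article. The TextQuoteSelector anchors the
# annotation to an exact text fragment rather than the whole page.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    # Hypothetical annotating service:
    "creator": "https://example.org/fact-checking-service",
    "body": {
        "type": "TextualBody",
        "purpose": "assessing",
        # Hypothetical credibility indicator:
        "value": "Cites a named, verifiable source",
    },
    "target": {
        # Hypothetical article being annotated:
        "source": "https://example.org/news/article-123",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "Officials confirmed the figures on Tuesday.",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the selector targets an exact quotation, any standards-aware tool can resolve the same annotation back to the same passage, which is what lets outputs from different systems interoperate.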

Hypothesis celebrates the opportunity to continue collaborating to support standards-based, open annotation in this important work. “As we battle misinformation, we are creating many different systems that people and robots will use to check facts and classify statements. They all share a common pattern: reference to selections in web documents, and attachment of data to those selections,” as Jon Udell, Hypothesis Director of Integrations, put it in his earlier post. “The annotated web embodies that pattern. Systems that embrace it will tend to work well with one another. Their outputs will be available to mine, crosslink, and remix, and those activities will drive collective improvement.”
