Evaluating Credibility on the Web With Annotation

By Nate Angell | 20 March, 2018

The Credibility Coalition (CredCo), a collaboration including Hypothesis, just received significant new funding to support its work on a standardized framework anyone can use to generate and evaluate indicators about the credibility of online content like news.

CredCo’s new funding comes from major donors that seek to address the quality of information online, including Google News Lab, the Facebook Journalism Project, Craig Newmark Philanthropies, and others.

The goal of the coalition is to make it easy for communities, publishers, content platforms, and the general public to create and access consistent, contextual information about credibility, helping people make decisions about the content they might consume, publish, or share.

The framework harnesses annotation to link credibility indicators to specific pieces of online content — not only to entire documents or pages, but even to paragraphs, sentences, or fragments. Annotation is used throughout: in the process of creating credibility indicators, to display results directly to readers, and to publish discoverable, structured credibility data for wider use. Ultimately, readers could interact with content using tools that harness credibility data from sources they trust, like specific news sources, fact-checking services, or communities of experts.
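To make this concrete, the sketch below builds a minimal annotation following the W3C Web Annotation Data Model, the standard referenced later in this post. It attaches a credibility note to an exact sentence in a page using a `TextQuoteSelector`. The URL, quoted text, and indicator label are all hypothetical illustrations, not actual CredCo data or vocabulary.

```python
import json

# Hypothetical example: a W3C Web Annotation attaching a credibility
# indicator to a specific sentence, not just a whole page.
# All values (source URL, quoted text, indicator label) are illustrative.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "purpose": "assessing",
        "value": "credibility-indicator: cites-named-sources",
    },
    "target": {
        "source": "https://example.com/news/story",
        "selector": {
            # TextQuoteSelector anchors the annotation to an exact
            # text fragment, with surrounding context to disambiguate.
            "type": "TextQuoteSelector",
            "exact": "the study reported a sharp increase",
            "prefix": "According to researchers, ",
            "suffix": " over the previous year.",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the annotation is structured data in a shared format, any standards-aware tool, not just the one that created it, can discover it, display it alongside the quoted passage, or aggregate it with other credibility signals.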

Formal standards for web annotation published by the W3C last year enable multiple tools to play roles in the CredCo framework, including not only Hypothesis for the publication of results, but also Meedan’s Check for collaborative verification, and Public Editor (a system developed by the Berkeley Institute for Data Science and GoodlyLabs, powered by TextThresher) for crowdsourced content analysis.

Hypothesis celebrates the opportunity to continue collaborating to support standards-based, open annotation in this important work. “As we battle misinformation, we are creating many different systems that people and robots will use to check facts and classify statements. They all share a common pattern: reference to selections in web documents, and attachment of data to those selections,” as Jon Udell, Hypothesis Director of Integrations, put it in his earlier post. “The annotated web embodies that pattern. Systems that embrace it will tend to work well with one another. Their outputs will be available to mine, crosslink, and remix, and those activities will drive collective improvement.”
