Annotating All Knowledge, FAIRly
Join the conversation connecting FAIR data to digital annotation at the second annual Annotating All Knowledge Coalition face-to-face meeting, co-located in Berlin with FORCE2017.
Take a deep dive into open annotation 31 July–4 August, 2017: two intensive courses at the FORCE11 Scholarly Communications Summer Institute.
Originally posted at Pundit by Francesca Di Donato. The spread and public endorsement of data FAIRness have been rapid. The FAIR Data Principles were published in late 2014 and early 2015. […]
Hypothesis is enjoying robust use in the sciences: in STEM education (e.g., Science in the Classroom), as a tool for scientists to critique reporting of science in the popular press (e.g., Climate Feedback), for journal clubs, and by individual researchers engaging in public or private group discussions on scientific papers. Some of these uses are conversational, as Hypothesis originally envisioned: people ask questions, get answers, and make comments. Others are more formal and authoritative: experts extract structured knowledge from the literature, annotate gene sequences with biological information, or supply clarifying information to published works.
The Annotating All Knowledge Coalition was founded as a forum for accelerating the development of a pervasive, interoperable annotation layer across all scholarly works. Figuring out what, exactly, an interoperable annotation layer means was one of the coalition's first goals. We took the first steps towards defining what such a layer looks like and how it should operate at our face-to-face meetings at FORCE2016 and I Annotate. So what are the next steps?
Participants in both events felt strongly that the best way to move forward was to “just do it”: identify a use case where you need to share annotations across tools, content, platforms, and workflows.
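One natural foundation for that kind of sharing is the W3C Web Annotation Data Model, the standardized successor to Open Annotation. As a rough sketch of what an interoperable annotation looks like on the wire, the snippet below builds a minimal annotation in that model and serializes it as JSON-LD; the target document and quoted text are made-up examples, not output from any coalition member's tool.

```python
# A minimal sketch of an annotation in the W3C Web Annotation Data Model
# (https://www.w3.org/TR/annotation-model/). The target document and body
# text below are illustrative placeholders.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "This figure should also cite the replication study.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.org/article.html",
        # A TextQuoteSelector anchors the note to an exact passage,
        # so any conforming client can re-locate it in the document.
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "as shown in Figure 2",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because any tool that reads this model can anchor and display the same note, annotations created in one platform remain usable in another.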
You might think that neuroscientists already have enough brains, but apparently not. Over 100 neuroscientists attending the recent annual meeting of the Society for Neuroscience (SFN) took part in an annotation challenge: modifying scientific papers to add simple references that automatically generate and attach Hypothesis annotations filled with key related information. To sweeten the pot, our friends at GigaScience gave researchers who annotated their own papers their very own brain hats.
But handing out brains is not just a conference gimmick. Thanks to our colleagues at the Neuroscience Information Framework (NIF), Hypothesis was once again featured at SFN, the largest gathering of neuroscientists in the world, attended by well over 30,000 people in San Diego, Nov 12–16, 2016. The annotation challenge at SFN was a demonstration of a much larger collaboration with NIF: to increase rigor and reproducibility in neuroscience by using NIF's new SciBot service to annotate publications automatically with links to related materials and tools that researchers use in scientific studies.
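For a sense of what a service like SciBot does under the hood, here is a minimal sketch of creating a single annotation through the public Hypothesis API. It assumes you have a developer API token from your Hypothesis account; the target URL and the RRID it anchors to are placeholders, and the real SciBot pipeline adds identifier detection, resolver lookups, and batching on top of this basic call.

```python
# Minimal sketch: create one Hypothesis annotation via the public API.
# Assumes a valid developer token; the target URL and RRID below are
# illustrative placeholders, not real SciBot output.
import requests

API_URL = "https://api.hypothes.is/api/annotations"
TOKEN = "YOUR_API_TOKEN"  # from your Hypothesis developer settings

payload = {
    "uri": "https://example.org/paper.html",  # the page being annotated
    "text": "Resource identifier RRID:AB_000000: see the resolver for curated details.",
    "tags": ["RRID"],
    "target": [
        {
            "source": "https://example.org/paper.html",
            # Anchor the note to the exact identifier text in the article.
            "selector": [{"type": "TextQuoteSelector", "exact": "RRID:AB_000000"}],
        }
    ],
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Created annotation:", response.json()["id"])
```

Run against a real article, a script like this is how structured identifiers in a paper can be turned into annotations that link readers directly to related materials.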
I joined Hypothesis because I believe that an open annotation layer can serve as a dynamic, unifying technology for addressing some of the structural weaknesses in our current biomedical platforms.
At long last, I’m able to sit down and summarize my thoughts and experiences on Hypothesis at the Society for Neuroscience meeting in Chicago, Oct 17–21. First of all, a […]
Hypothesis is exploring the use of online annotation to provide review and enhancement of Wikipedia articles. The Neuroscience Wiki Project encourages the neuroscience community to improve the accuracy and robustness […]
For as long as we have produced scholarly works, we have annotated them. From scribbles in the margin, to underlines and highlights, to learned commentary providing additional information, academics routinely […]