OCLC has been publishing bibliographic linked data since 2012, and in that time we have released three major datasets as linked open data: WorldCat.org, WorldCat Works and OCLC Persons.
As we continue to work on creating and publishing linked data, a key goal is to demonstrate how this technology fits into library workflows and how it can add value for both librarians and users.
As part of this process, OCLC Research has drawn inspiration from recent work at Google Research to develop a Knowledge Vault pipeline of our own. The pipeline harvests, extracts, normalizes, scores/weights and synthesizes knowledge from authority files and bibliographic records, and eventually from resources across the Web such as Wikidata and user-contributed feedback.
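To make those stages concrete, here is a minimal sketch in Python of how such a pipeline might fit together. Everything in it is hypothetical for illustration (the toy records, the `Candidate` class, the crude normalization and scoring rules); it is a sketch under assumed inputs, not OCLC's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One extracted assertion: subject, predicate, object, plus provenance."""
    subject: str
    predicate: str
    obj: str
    source: str
    confidence: float = 0.0

# Toy input standing in for harvested authority and bibliographic records.
HARVESTED = [
    {"source": "authority_file", "text": "Twain, Mark, 1835-1910 | author of | Adventures of Huckleberry Finn"},
    {"source": "bib_record",     "text": "Twain, Mark | author of | Adventures of Huckleberry Finn"},
]

def extract(records):
    """Extraction: pull a candidate triple out of each harvested record."""
    for rec in records:
        s, p, o = (part.strip() for part in rec["text"].split("|"))
        yield Candidate(s, p, o, rec["source"])

def normalize(candidates):
    """Normalization: collapse variant name forms to one controlled form
    (a single hard-coded mapping here; real work would use authority data)."""
    controlled = {"Twain, Mark, 1835-1910": "Twain, Mark"}
    for c in candidates:
        c.subject = controlled.get(c.subject, c.subject)
        yield c

def score_and_synthesize(candidates):
    """Scoring/synthesis: merge duplicate assertions; confidence grows
    with each independent source that supports the same triple."""
    merged = {}
    for c in candidates:
        key = (c.subject, c.predicate, c.obj)
        if key in merged:
            merged[key].confidence = min(1.0, merged[key].confidence + 0.4)
        else:
            c.confidence = 0.4
            merged[key] = c
    return list(merged.values())

vault = score_and_synthesize(normalize(extract(HARVESTED)))
for triple in vault:
    print(triple)
```

Run as written, this yields a single merged triple (Twain, Mark | author of | Adventures of Huckleberry Finn) with a confidence of 0.8, because two independent sources agree on it.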
The Knowledge Vault will contain a set of vetted linked data triples that next-generation library services and applications can use to improve the end user’s discovery experience. The end goal of the Knowledge Vault work is to prototype and test new library data workflows and demonstrate how the resulting Knowledge Vault can help improve library services and applications.
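As one sketch of how a discovery application might consume the vault: assume each vetted triple carries a confidence score, and the application keeps only assertions above some cutoff before surfacing them, say on a knowledge card. The data, threshold and function below are invented for illustration and are not part of any real OCLC API.

```python
# Hypothetical vault contents: (subject, predicate, object, confidence).
VAULT = [
    ("Twain, Mark", "author of", "Adventures of Huckleberry Finn", 0.95),
    ("Twain, Mark", "author of", "A Horse's Tale", 0.40),
]

THRESHOLD = 0.7  # assumed cutoff; a real vetting policy would be richer

def vetted(vault, threshold=THRESHOLD):
    """Keep only assertions confident enough to show to end users."""
    return [t for t in vault if t[3] >= threshold]

for subject, predicate, obj, conf in vetted(VAULT):
    print(f"{subject} | {predicate} | {obj} (confidence {conf:.2f})")
```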
At the OCLC Research Update at ALA Midwinter in Boston, I gave a brief (15-minute) overview of our work on the pipeline to date and demoed an application that shows how it might work within a library discovery service. If you want to see just the demo, skip to the 10-minute mark.
We’re hopeful that in the future we’ll be able to build in more opportunities for user feedback loops. In this way, we can leverage the strength of the community to improve the quality and utility of our metadata as people use library discovery services.
Share your comments and questions on Twitter with hashtag #OCLCnext.