Liveblogging Peter McCracken’s update on KBART
OpenURL overview: it has evolved from "magic" to sausage making in how it is implemented and how information gets passed around. When the link resolver fails, it affects the user's perception of the tool.
bad data, bad formatting, lack of knowledge
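The "sausage making" is visible in the links themselves: an OpenURL is just citation metadata serialized into a query string, so bad data or bad formatting anywhere in the chain yields a broken link. A minimal sketch of building an OpenURL 1.0 (KEV) link using only Python's standard library — the resolver base URL is a made-up example, and the citation is invented for illustration:

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, **citation):
    """Serialize citation metadata into an OpenURL 1.0 KEV query string.

    resolver_base is the library's link-resolver endpoint (hypothetical here);
    citation keyword arguments become 'rft.' fields per NISO Z39.88-2004.
    """
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    for key, value in citation.items():
        params[f"rft.{key}"] = value
    return resolver_base + "?" + urlencode(params)

link = build_openurl(
    "https://resolver.example.edu/openurl",  # hypothetical resolver endpoint
    jtitle="Serials Review",
    issn="0098-7913",
    volume="33",
    spage="253",
    date="2007",
)
```

If the provider drops or garbles even one of these fields at the source, the resolver has nothing reliable to match against the knowledge base — which is exactly the failure mode being described.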
What is the measure of success? Better access, fewer false positives and negatives. The number of links should equal the number of access points available.
History of KBART: 2007 UKSG research report, led to collaboration between UKSG and NISO
Better data for everyone: providers, processors, presenters, users
Core working group + monitoring group. Anyone can join monitoring group.
Problems w/ OpenURL: the 3 main ones are inaccurate data leading to bad links, incorrect implementation, and lack of knowledge.
Lack of knowledge: some providers just don’t know about OpenURL – need education
Incorrect implementations: help providers determine what's working and what isn't; need more and better examples. Opportunity to standardize transfer of data. Adam Chandler's Cornell project to look at source OpenURLs.
Inaccurate data: what to do? Grade? Police? Shame? This is the biggest problem to solve – coverage data especially. Education on why it matters.
KBART deliverables: report on and provide guidance about these problems; offer best-practice guidelines for how to effectively transfer accurate data; build a better understanding of the supply chain.
How to deliver it: FTP, tab-delimited, separate files for each database, updated as often as necessary. Standardized file-name structure. Guidance on how to provide coverage and what info to include – defined fields. Defining how to represent certain kinds of data, e.g. embargo data. Much discussion about what to include vs. what to point to, e.g. with a DOI.
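To make the tab-delimited idea concrete, here is a sketch of one holdings row and a parser for it. The field names and the "R1Y" rolling-embargo notation are my assumptions about the direction under discussion, not the group's final recommendation:

```python
import csv
import io

# Illustrative field names (an assumption; the working group was still
# defining the final set at the time of this talk)
FIELDS = [
    "publication_title", "print_identifier", "online_identifier",
    "date_first_issue_online", "date_last_issue_online",
    "title_url", "embargo_info",
]

# One sample row: an empty last-issue date means coverage runs to present;
# "R1Y" is a hypothetical notation for a rolling one-year embargo.
row = ["Journal of Examples", "1234-5678", "8765-4321",
       "1995-01-01", "", "https://example.com/joe", "R1Y"]

sample_file = "\t".join(FIELDS) + "\n" + "\t".join(row) + "\n"

def parse_holdings(text):
    """Parse a tab-delimited title list into dicts keyed by field name."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

holdings = parse_holdings(sample_file)
```

The appeal of a format this simple is that every party in the supply chain – provider, knowledge-base processor, link-resolver vendor – can produce and consume it without special tooling.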
Error reporting – how? Link resolvers vs. content providers doing the correction. A public error-reporting db?
Education sections, FAQ, website – who would maintain them?
Next steps: library specific data, consortial package work, non-textual resources. standards?
(Had to leave this a couple minutes early.)