Biblio Citation Abstract
Beghtol, C. (2001).  Knowledge representation and organization in the Iter project: A Web-based digital library for scholars of the Middle Ages and Renaissance. Knowledge Organization. 28, 170–179.

The Iter Project (iter means path or journey in Latin) is an internationally supported non-profit research project created with the objective of providing electronic access to all kinds and formats of materials that relate to the Middle Ages and Renaissance (400-1700) and that were published between 1700 and the present. Knowledge representation and organization decisions for the Project were influenced by its potential international clientele of scholarly users, and these decisions illustrate the importance and efficacy of collaboration between specialized users and information professionals. The paper outlines the scholarly principles and information goals of the Project and describes in detail the methodology developed to provide reliable and consistent knowledge representation and organization for one component of the Project, the Iter Bibliography. Examples of fully catalogued records for the Iter Bibliography are included.

Bizer, C., Heath T., & Berners-Lee T. (2009).  Linked Data - The Story So Far. International Journal on Semantic Web and Information Systems. 5, 1–22.

The paper reviews the structure, activities, outputs, and applications of the Linked Data initiative. At its most basic level, Linked Data is "simply about using the Web to create typed links between data from different sources." Bizer, Heath, and Berners-Lee argue that "[T]he most visible example of adoption and application of the Linked Data principles has been the Linking Open Data project." The aim of this project is to identify existing data sets, convert them to RDF, and publish them on the web. One of the main benefits of RDF/Linked Data is that it allows clients to "navigate between data sources and to discover additional data." The authors address the various challenges facing Linked Data, such as user interaction paradigms, link maintenance, and application architecture. They conclude that the growing adoption of Linked Data principles is strengthening and revolutionizing the Web.
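
The "typed links" at the heart of Linked Data can be made concrete with a small sketch. The snippet below is not from the paper; the URIs and data are invented, and a real client would use an RDF library and HTTP dereferencing rather than in-memory lists. It shows how a shared URI lets a client navigate from one data source into another and discover additional data.

```python
# RDF-style triples: (subject, predicate, object). Shared URIs are the
# typed links that connect otherwise independent data sources.
dataset_a = [
    ("http://example.org/people/alice",
     "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/people/alice",
     "http://xmlns.com/foaf/0.1/based_near",
     "http://dbpedia.org/resource/Berlin"),
]

dataset_b = [
    ("http://dbpedia.org/resource/Berlin",
     "http://www.w3.org/2000/01/rdf-schema#label", "Berlin"),
    ("http://dbpedia.org/resource/Berlin",
     "http://dbpedia.org/ontology/country",
     "http://dbpedia.org/resource/Germany"),
]

def follow_links(start, graphs):
    """Follow URI-valued objects across merged graphs, collecting facts."""
    triples = [t for g in graphs for t in g]
    seen, frontier, facts = set(), [start], []
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        for s, p, o in triples:
            if s == uri:
                facts.append((s, p, o))
                if o.startswith("http://"):
                    frontier.append(o)  # a typed link into another source
    return facts

facts = follow_links("http://example.org/people/alice", [dataset_a, dataset_b])
# Starting from Alice, the link to DBpedia's Berlin resource lets the
# client discover additional data (Berlin's label and country) that was
# published by a different source.
```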

Crupi, G. (2013).  Beyond the Pillars of Hercules: Linked data and cultural heritage. JLIS. 4, 25.

The purpose of Linked Data is to develop a total data space (the data web) able to mutually connect and enrich shared databases. Libraries therefore have the opportunity to integrate the structured information of their catalogs with information from multiple other sources, and to make it more accessible by building on web standards. The ability to model data while keeping it accessible and preserving its context is proposed as a criterion for determining the quality of a library. The article deals with the essential articulation of the Semantic Web and its application in the universe of libraries, and with the opportunity to use shared languages, meta-languages, controlled vocabularies, and ontologies able to meet the need for automatic processing.

Besser, H. (2002).  The Next Stage: Moving from Isolated Digital Collections to Interoperable Digital Libraries. First Monday. 7.

This article addresses the development of the digital library. Howard Besser begins by comparing the values of the traditional library to early iterations of the digital library. Besser argues that the traditional library is defined by its service-oriented agenda and its promotion of ethics, two of the many qualities that were missing when the digital library emerged in the mid-1990s. Besser still views the digital library as being in its development stages, not nearly as fully formed as the traditional library. He identifies the movement towards user-centred information architecture and the development of metadata standards as two areas where the digital library must continue to grow. As a best practice, Besser argues that the digital library should be attuned to its users and uses, and should move from being isolated to being interoperable.

Renear, A. H., Wickett K. M., Urban R. J., Dubin D., & Shreeves S. L. (2008).  Collection/Item Metadata Relationships. Proceedings of the International Conference on Dublin Core and Metadata Applications.

This article addresses the problematic exclusion of collection-level metadata in the retrieval and study of archival objects. In 2007, this team of researchers secured a three-year IMLS grant to develop strategies to combat the issue of exclusion. This project consisted of three overlapping phases: "a logic-based framework of collection/item metadata relationships that classifies metadata into categories with associated rules for propagating or constraining information between collection and item levels; empirical studies to see if our conjectured taxonomy matches the understanding and behavior of metadata creators, metadata specification designers, and registry users; and, pilot applications using the relationship rules to support searching, browsing, and navigation of the DCC Registry." At the time of writing, three kinds of metadata relationships had been defined: attribute/value-propagation, value-propagation, and value-constraint. This article concludes with the team's suggestions for future research.
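
The three relationship types named above can be sketched informally. The rules and field names below are illustrative assumptions, not the project's actual logic-based framework; value-propagation (where a collection-level value informs a differently named item-level field) follows the same pattern as the first rule.

```python
# A hedged sketch of collection/item metadata relationships.
# All field names and rules here are invented for illustration.

collection = {
    "title": "Civil War Photographs",
    "rights": "Public domain",   # attribute and value both apply to items
    "coverage": (1861, 1865),    # constrains admissible item-level dates
}

items = [
    {"title": "Battlefield at dawn", "date": 1863},
    {"title": "Field hospital",      "date": 1864},
]

def attribute_value_propagation(collection, items, field):
    """Both the attribute and its value flow from collection to each item."""
    for item in items:
        item.setdefault(field, collection[field])

def value_constraint(collection, items, field, bounds_field):
    """Collection-level metadata constrains item-level values; return violators."""
    lo, hi = collection[bounds_field]
    return [item for item in items if not (lo <= item[field] <= hi)]

attribute_value_propagation(collection, items, "rights")
violations = value_constraint(collection, items, "date", "coverage")
# Every item now carries rights="Public domain"; `violations` lists items
# whose dates fall outside the collection's stated coverage (none here).
```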

Wheeles, D. (2010).  Testing NINES. Literary and Linguistic Computing. 25, 393–403.

This article chronicles the evolution of the NINES (Networked Infrastructure for Nineteenth-Century Electronic Scholarship) project. Written by past project manager Dana Wheeles, it examines NINES from the creation of Collex to the relaunch of a redesigned site built to accommodate its massive growth in users. Wheeles delves into the project's challenges with funding and formal testing, and details the various user experiments, such as eye-tracking and interface testing, that the project undertook to gather user feedback. Wheeles shows how decisions regarding the site design and layout of NINES are grounded in critical scholarship and research. To achieve the goal of becoming a site of excellent digital scholarship while maintaining a friendly environment, NINES implemented a private homepage feature that allows users to develop their own NINES identity. Wheeles emphasizes the project's commitment to peer review and site aggregation.

Newton, D., & Tellman J. (2010).  A Comparison of the Iter Bibliography and the International Medieval Bibliography. Reference & User Services Quarterly. 49, 265–277.

This article reports several comparative experiments on the usefulness, strengths, and materials of the Iter and International Medieval bibliographies. Newton and Tellman argue that, because no comparative research has been carried out, their exploration fills a gap in scholarship. They approach the two databases through three methods: an article search, a dissertation search, and a keyword search. Overall, the results showed that Iter performed better on both the article and dissertation searches. On the keyword search, the databases performed equally well, each excelling in different subject areas. The authors conclude that users should consult both resources for the best and fullest results.

Shreeves, S. L., Riley J., & Milewicz L. (2006).  Moving towards shareable metadata. First Monday. 11.

This article defines shareable metadata and discusses its benefits. Citing "protocols like the Open Archives Initiative Protocol for Metadata Harvesting (OAI–PMH) and common metadata encoding schemas such as Dublin Core (DC)," the authors argue that users win when metadata is shared, especially users who research interdisciplinary topics. Shreeves, Riley, and Milewicz contend that the full potential of metadata sharing has yet to be realized, and that inconsistent practices, insufficient information, missing contextual information, and lack of conformance to technical standards all contribute to this loss of value. They list the following qualities of shareable metadata: content optimized for sharing, consistent practices, coherent information, contextualization, communicability, and conformance to standards. In conclusion, the authors again emphasize the importance of consistent practices and critical engagement with metadata creation.
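
To make "shareable metadata" concrete, here is a minimal Dublin Core record serialized in the oai_dc XML format that OAI-PMH harvesters exchange. The record content is invented, and the serialization is simplified (a full oai_dc record also carries schema-location attributes).

```python
# Build a simplified oai_dc record with Python's standard library.
import xml.etree.ElementTree as ET

OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("oai_dc", OAI_DC)
ET.register_namespace("dc", DC)

record = ET.Element(f"{{{OAI_DC}}}dc")
for element, value in [
    ("title", "Testing NINES"),
    ("creator", "Wheeles, Dana"),
    ("date", "2010"),
    # Contextual information travels inside the record, so the item
    # stays interpretable outside its local collection.
    ("description", "Chronicles the evolution of the NINES project."),
]:
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

xml_text = ET.tostring(record, encoding="unicode")
```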

Halbert, M. (Submitted).  Integrating ETD Services into Campus Institutional Repository Infrastructures Using Fedora.

This article discusses how an Electronic Thesis and Dissertation (ETD) repository infrastructure can provide a comprehensive institutional repository framework using the Fedora software package. The shift to digitization has posed a challenge for research libraries: creating a comprehensible infrastructure to manage immense intellectual content in its different forms. Instead of approaching this process with a "Law of the Hammer" standardization approach that reduces all content to illogical structures, the team turned to Web 2.0 approaches that allow for standardization without compromising flexibility. After a long process of assessing which open source software package would best meet their needs, the Emory University team chose Fedora and implemented it alongside the Fez system, which successfully laid the foundation for subsequent systems to support many existing tools and models. Halbert argues that a systematic institutional repository framework is best built through a modular, standards-based approach guided by direct user feedback.

Lehmberg, T., Rehm G., Witt A., & Zimmermann F. (2008).  Digital Text Collections, Linguistic Research Data, and Mashups: Notes on the Legal Situation. Library Trends. 57, 52–71.

This article discusses the challenges of archiving and providing legal access to large corpus linguistics data sets. Given the heterogeneous and layered nature of these collections, the authors argue that the data is often rendered inaccessible by copyright law. Three case studies are presented to illustrate these challenges. The article suggests that copyright practice around corporate mash-up data, data generated by website users, would be a useful precedent in these cases, and points to Google Maps specifically as a useful example of a layered data collection.

Kobilarov, G., Scott T., Raimond Y., Oliver S., Sizemore C., Smethurst M., et al. (2009).  Media Meets Semantic Web – How the BBC Uses DBpedia and Linked Data to Make Connections. (Aroyo, L., Traverso P., Ciravegna F., Cimiano P., Heath T., Hyvönen E., et al., Eds.). The Semantic Web: Research and Applications. 723–737.

This article discusses "how the BBC is working to integrate data and linking documents across BBC domains by using Semantic Web technology, in particular Linked Data, MusicBrainz, and DBpedia." The BBC has been publishing its data through separate, stand-alone sites maintained by different teams, making it difficult to link and search across these domains. To improve the BBC's online presence, the BBC, Freie Universität Berlin, and Rattle Research are creating a common controlled vocabulary and interlinking the content to support contextual, semantic navigation and to help classify all BBC online content. Although only a small portion of the content has been updated, the team continues to work on this project and is confident that the BBC and its users will benefit greatly from this connected ecosystem of content.

Xu, C., Ouyang F., & Chu H. (2009).  The Academic Library Meets Web 2.0: Applications and Implications. The Journal of Academic Librarianship. 35, 324–331.

This article examines the implementation of Web 2.0 tools - blogs, instant messaging, information sharing, RSS feeds, social networking, and wikis - in 81 academic libraries in New York State. The authors begin with a literature review, which reveals that, while some publications have discussed Web 2.0 conceptually, very few articles have looked at actual implementation. The study of 81 libraries reveals very limited use of Web 2.0 technologies. The authors suggest that, moving forward, libraries should strive to be open, interactive, convergent, collaborative, and participatory by taking advantage of Web 2.0.

Palmer, C. L., Zavalina O. L., & Fenlon K. (2010).  Beyond size and search: Building contextual mass in digital aggregations for scholarly use. Proceedings of the American Society for Information Science and Technology. 47, 1–10.

This article examines the importance of federated collections as anchors in humanities research of the "Google age." The authors begin by identifying some of the potential problems of developing federated collections: retaining the identities of individual collections, uncovering connections within collections, and aggregating resources in a way that is useful to the user. The article examines the IMLS Digital Collections and Content (DCC) initiative, which has developed an aggregation strategy and is now the largest digital cultural heritage federation in the USA. The strategy is guided by the "principle of contextual mass": "collecting materials that work together as a system of sources, with meaningful interrelationships between different types of materials and subjects, to support research inquiry." Alongside developing a system for processing and identifying subject concentrations, the initiative wants to make these concentrations explicit to users. The authors argue that, unlike other federated systems, their strategy "uses collection description information to disclose content in a way that supports the scholarly practices of exploring and engaging with specialized materials within and across collections."

Han, M-J., Cho C., Cole T. W., & Jackson A. S. (2009).  Metadata for Special Collections in CONTENTdm: How to Improve Interoperability of Unique Fields Through OAI-PMH. Journal of Library Metadata. 9, 213–238.

This article explores 21 digital library collections from 11 institutions to interrogate how items in special collections are being described. The authors discuss the differences between custom metadata schemes and the OAI-PMH protocols. While the overall goal of shared metadata is interoperability with maximum context, richness of meaning sometimes loses its quality outside the local collection. In their experiment the authors found that 491 metadata fields were used across the collections, but 171 were locally defined and therefore neither shareable nor standardized. In conclusion, the authors argue that localized metadata is still a common practice but that institutions appear to be doing their best to abide by OAI-PMH standards. However, sometimes the advantages of a specifically defined local field outweigh the advantages of sharing generic information.
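
The interoperability problem the authors quantify (491 fields in use, of which 171 were locally defined and thus not shareable) can be sketched as a simple split of a record into standard and local fields. The field names and record below are invented for illustration, and the Dublin Core set is abbreviated.

```python
# A subset of the 15 Dublin Core elements, used as the "shareable" vocabulary.
DUBLIN_CORE = {"title", "creator", "subject", "description", "date",
               "type", "format", "identifier", "rights", "coverage"}

# A hypothetical special-collections record mixing standard and local fields.
local_record = {
    "title": "Broadside ballad, ca. 1870",
    "date": "1870",
    "ballad-tune": "Greensleeves",      # locally defined: rich, not shareable
    "printing-house": "Catnach Press",  # locally defined
}

# Split the record into what an OAI-PMH provider can expose as oai_dc
# and what stays meaningful only inside the local collection.
shareable = {k: v for k, v in local_record.items() if k in DUBLIN_CORE}
local_only = {k: v for k, v in local_record.items() if k not in DUBLIN_CORE}
```

The trade-off the authors describe is visible here: mapping `ballad-tune` down to a generic element such as `description` would make it harvestable, at the cost of the precise local meaning.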

Hyvönen, E., Mäkelä E., Salminen M., Valo A., Viljanen K., Saarela S., et al. (2005).  MuseumFinland—Finnish museums on the semantic web. Selected Papers from the 3rd International Semantic Web Conference (ISWC 2004). 3, 224–241.

This article explores how the Semantic Web can help build connections in diverse cultural collections, with MuseumFinland deployed as the case study. MuseumFinland boasts a collection of over 4,000 artefacts across 260 historical sites, roughly divided into seven ontologies. The main goal of MuseumFinland is "to provide the end-user with semantic association links relating collection contents with each other" through a single point of entry for searching. The design, creation, launch, and user engagement with MuseumFinland are discussed at length to showcase the "power of semantic web technologies to solving interoperability problems of heterogeneous museum collections when publishing them on the web."

Shreeves, S. L., Knutson E. M., Stvilia B., Palmer C. L., Twidale M. B., & Cole T. W. (2005).  Is 'Quality' Metadata 'Shareable' Metadata? The Implications of Local Metadata Practices for Federated Collections.

This article explores the characteristics of quality metadata and shareable metadata to investigate whether the two concepts are the same, linked, or mutually exclusive. Recent research has begun to develop a taxonomy of characteristics necessary for quality metadata; among the commonly noted characteristics are "completeness, accuracy, provenance, conformance to expectations, logical consistency and coherence, timeliness, and accessibility." For this study, the team analyzed four collections of metadata, each conforming to different standards, and "performed a descriptive statistical analysis of the use and frequency of Dublin Core elements for each collection." While Shreeves et al. admit that their study was limited, the results point to "at least two specific strategies": consistency allows normalization of metadata, and eliminating ambiguity helps metadata interpreters describe items specifically.
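
A "descriptive statistical analysis of the use and frequency of Dublin Core elements" can be sketched as a simple element count over a set of records. The records below are invented placeholders; the idea is only to show the kind of tabulation the quoted method describes.

```python
# Count how often each Dublin Core element appears across records.
from collections import Counter

records = [
    {"title": "...", "creator": "...", "date": "..."},
    {"title": "...", "date": "..."},
    {"title": "...", "creator": "...", "subject": "..."},
]

# Raw frequency of each element across all records.
usage = Counter(element for record in records for element in record)

# Completeness: the share of records that use each element, one of the
# quality characteristics the article lists.
completeness = {element: n / len(records) for element, n in usage.items()}
```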

Stephens, M., & Collins M. (2007).  Web 2.0, Library 2.0, and the Hyperlinked Library. Serials Review. 33, 253–256.

This article explores the concepts surrounding Web/Library 2.0. The publication begins by exploring the key values of the Web/Library 2.0 movements: conversation, community, participation, experience, and sharing. These values are further expanded upon and illustrated through the authors' descriptions of social tools that facilitate these objectives, including blogs, podcasts, wikis, and social networks. Stephens and Collins are clear that the Library 2.0 movement must critically consider the philosophy behind these tools in order to deploy them usefully. The article concludes by discussing the hyperlinked library initiative.

Tenopir, C., King D. W., Spencer J., & Wu L. (2009).  Variations in article seeking and reading patterns of academics: What makes a difference?. Library & Information Science Research. 31, 139–148.

This article explores differences in the information-seeking and reading behaviour of faculty in academic institutions. The authors conducted a self-reported web survey at seven American and Australian institutions. The 1,688 respondents were asked questions about a variety of demographic and contextual factors relating to their position as readers. The authors were interested in uncovering how academic articles fit into the process of scholarship today. The results indicated that subject discipline and work responsibility were determining factors in reading behaviour, while productivity, age, and purpose of reading had less of an effect.

Stvilia, B., & Jörgensen C. (2009).  User-generated collection-level metadata in an online photo-sharing system. Library & Information Science Research. 31, 54–65.

This article explores user-generated metadata for collections of photos on Flickr. The authors argue that collections are an important building block of information organization because they identify common characteristics among objects. In their analysis of photo collections on Flickr, the authors found that most of the data is unstructured. Photos are divided into types or sets and, in some instances, informal peer reviews of photos are conducted through online commenting. In light of their findings, the authors argue that metadata schemas must be developed to align with anticipated user needs and interests.

Stvilia, B., & Gasser L. (2008).  Value-based metadata quality assessment. Library & Information Science Research. 30, 67–74.

This article proposes a method for value-based assessment of metadata quality and construction of a baseline quality model. The method is illustrated on a large-scale, aggregated collection of simple Dublin Core metadata records. An analysis of the collection suggests that metadata providers and end users may have different value structures for the same metadata. To promote better use of the metadata collection, value models for metadata in the collection should be made transparent to end users, and end users should be allowed to participate in content creation and quality control processes.