* Written by Jenn Riley *
Article discussed: Greenberg, Jane. (2005). “Understanding Metadata and Metadata Schemes.” Cataloging & Classification Quarterly 40, no. 3/4: 17-36.
The discussion began with a general question: Does the MODAL framework appear to be a useful way of evaluating metadata schemas? The group on the whole thought it was, although some expressed concern that the language in the article was quite academic, which at times made it difficult for practicing librarians to follow the argument.
Participants appreciated the fact that some metadata schemas, such as TEI (p. 28 of the article), have as a stated principle the conversion of resources to newer communication formats. This principle is of great benefit and would be useful for other metadata schemas as well. Data formats will not stay static; our metadata must adapt its format over time to accommodate new ways of communicating.
Some participants noticed a contrast between the design of metadata schemas, which is based on experience and observation, and library cataloging rules, which are more formalized and change less frequently. This observation led to the question of whether cataloging rules should be more fluid. When the rules do change, the changes are based on experience. From an implementation point of view, however, it is difficult for both libraries and our users if the rules are constantly changing, and our legacy data is a very real consideration here. So how do we remain flexible and adaptable while staying consistent and accommodating legacy data?
The MODAL framework spoke to participants as an analysis tool, helping evaluate the fitness of a given schema for a given purpose. This gets us away from saying a metadata format is “bad”; rather, it lets us say, for example, that records using the Dublin Core Metadata Element Set are not well suited to handling FRBRized data.
The article’s methodology of bringing in Cutter’s objectives as an example of underlying objectives and principles sat well with the discussion group. One participant noted that not many current studies do this. These assumptions can help us focus our efforts. Follow-up work could compare Cutter’s objectives to different metadata formats.
Terminology issues were a hot topic of discussion at the session. Participants thought some kind of collaboratively developed metadata glossary would be a good idea. They felt it was important for librarians interested in metadata issues to learn new vocabularies: we need to read more, take in as much as possible, and make connections to what we already do. “Cardinality” was an example of an unfamiliar term; it brings in the familiar repeatable vs. not repeatable notion, but also covers required vs. not required. Domains do have specialized vocabularies, which serve as “rites of passage” into various professions. Metadata schemes all have context that assumes a specific knowledge base, and this article recognizes that. It would be nice if articles had glossaries, though.
Even with discussion, definitions of some terms did not establish a clear consensus. The term “granularity” was defined in the group as “refinement,” “the amount you want to analyze down to,” “extent of the description,” “specificity,” and “granular means you can slice in different ways.”
Participants appreciated the empirical focus of the article, saying that metadata schema design should be based on observation and experiment. It’s certainly a good thing for metadata to be practical and actually useful. To help decide which metadata schema to use, try out a couple of schemas and see how they work, rather than reasoning more abstractly. But community also needs to be considered as a factor. The MODAL framework is “multi-focal,” focusing first on one aspect and then moving to another. This helps implementers think, for example, about both the community and the data itself.
Participants noted two schools of thought for metadata design, a difference of orientation: a problem looking for a solution, as contrasted with a solution looking for a problem. Is there still room for cataloger judgment? Absolutely. Perhaps cataloger’s judgment is needed more in the application of a content standard than of a structure standard.
This distinction led participants to speculate whether the line between the two is blurring (although all recognized it has always been somewhat blurry). RDA especially seems to be trying to do both simultaneously. One participant noted that libraries seem to be moving to blur the two, while other communities are moving to separate them more.
Is terminology the only barrier to learning more about metadata? Some individuals learn better through theory and others through practice; all need a little of both. It really just takes time — remember what it was like to learn cataloging? Getting out of one’s comfort zone is difficult. It’s also difficult to be adventurous when there is less precedent to follow. It’s hard to learn many standards, and implementers don’t always know which one to use; when you have to learn many things, you learn each of them less well. We also have new objectives, including reaching new audiences and operating in additional systems. It would be helpful to identify models at other institutions where a technical services unit has made significant progress in these areas.
The group found Table 1, which outlines some typologies of metadata schemas, to be interesting, although the lines between the categories seem arbitrary at worst and murky at best. Over time the thinking in this area has gone from seven categories to four. Does this mean our community is looking for simplicity? Does this mean this environment is settling down? Maybe, but initiatives such as the DCMI Abstract Model seem to be going in the other direction.
The discussion moved relatively seamlessly from topic to topic, and featured a number of insightful comments, often from new participants. Both nitty-gritty and “big picture” issues were raised. Thanks to all who participated for an enlightening discussion.