Homepage Content Strategy

Georgy Cohen recently wrote an article on Meet Content about developing a strategy for homepage content. It is often argued that the homepage matters less today because users increasingly arrive at interior pages directly through search engines and links. While this may be true for some websites, it is definitely a myth for academic library homepages. A well-built academic library homepage makes a positive brand statement and efficiently guides users toward the content they need through consistent “information scent”. I think the following academic library homepages are noteworthy examples of well-organized content.

Harvard Libraries: This recently redesigned homepage puts the search tool front and center, but it also provides descriptions of library jargon and academic sources. Initially I didn’t know what HOLLIS was, but a quick description beneath the search box explained the resource. I was also drawn to the red icons in the resources sidebar on the right; they break up the text and draw attention to popular services.

Ithaca College Library: This homepage is one of my favorites because it is simple and efficient. The site uses only one drop-down menu, while the rest of the toolbar resembles a mobile layout, with key content, like books and articles, in large text. I was able to find the link to JSTOR in seconds.

Marygrove College Library: This is one of the few academic library homepages that uses drop-down menus efficiently. There are also only three columns of text, which cuts down on the unnecessary front-page content that can distract from the main toolbar.

Northeastern University Libraries: This homepage also has a toolbar with numerous drop-down menus, but each item in the drop-downs is paired with a one-sentence description. This is especially useful for new library users or those unfamiliar with library jargon.

Blacklight and Stemming

With the coming transition of the IUCAT public interface from the existing SirsiDynix OPAC to the new Blacklight discovery layer, there are a lot of exciting new features coming our way, including faceted searching, more relevant results, and an easier-to-use interface. Along with the change in the interface, we will see changes in how search works. One of these changes relates to truncation and word stemming.

Truncation is the ability to expand a keyword search to retrieve multiple forms of a word by using a specified symbol to replace a character or set of characters. The truncation symbol can typically be used anywhere in a term: at the end, at the beginning, or in the middle. For example, in the current IUCAT, a search for comput$ would find words such as computer, computers, computing, and computation. Truncation is a handy tool for broadening a search, and it is a common feature in most traditional OPACs and in many vendor databases. Blacklight, like other discovery layer interfaces such as VuFind, relies on a technique called word stemming rather than on truncation.
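
For illustration only, here is a rough sketch of what truncation amounts to: the wildcard is expanded into a pattern and matched against the terms in the index. (The function name and toy term list below are made up; this is not how the SirsiDynix engine is actually implemented.)

```python
import re

# Toy list of indexed terms; a real OPAC would match against its full term index.
index_terms = ["computer", "computers", "computing", "computation",
               "compute", "computational", "commuter", "company"]

def truncation_search(query, terms, wildcard="$"):
    """Expand the truncation symbol into a pattern and return every matching term."""
    # e.g. "comput$" becomes the regex ^comput.*$ (the wildcard stands for any characters)
    pattern = "^" + ".*".join(re.escape(part) for part in query.split(wildcard)) + "$"
    return [t for t in terms if re.match(pattern, t)]

print(truncation_search("comput$", index_terms))
# ['computer', 'computers', 'computing', 'computation', 'compute', 'computational']
```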

With word stemming, the catalog searches for the “root” of a word and returns results for all words sharing that stem. Rather than relying on the searcher to place a specific character to expand the search, as in truncation, word stemming automatically reduces each search term to its “root” and then returns results for all words associated with that stem. This is similar to how Google searches, so frequent Google users won’t notice much of a difference.

Because this is an automatic process, it is often difficult or impossible to predict the “stem” for any particular word. For example, knees has a stem of knee, but kneel has a stem of kneel, not knee. Similarly, “searching”, “search”, and “searches” all stem to “search”, but “searcher” does not; it stems to “searcher”.
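
If you want to get a feel for this behavior, the examples above can be reproduced with a Porter-style stemmer such as the one in Python’s NLTK. This is only an approximation: the stemmer actually configured in Blacklight’s underlying Solr index may behave a bit differently.

```python
# Approximation using NLTK's Porter stemmer; the stemmer configured in
# Blacklight's underlying Solr index may behave slightly differently.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["knees", "kneel", "searching", "search", "searches", "searcher"]:
    print(f"{word:>10} -> {stemmer.stem(word)}")

# Expected output (approximately):
#      knees -> knee
#      kneel -> kneel
#  searching -> search
#     search -> search
#   searches -> search
#   searcher -> searcher
```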

For searchers accustomed to truncation, some terms that a truncated search would have retrieved will not be retrieved using word stemming, because they do not share the same stem.

For many of our users, this change will not be apparent, but we hope this is a helpful explanation of this change for expert searchers accustomed to relying on truncation.

Visual browsing in a virtual world

Searching for books online cannot compare to the experience of getting lost in the stacks of your local library or bookstore. Browsing is one of the primary pleasures for all book-lovers. Finding the precise book you were looking for is great, but discovering something unexpected is often better. Whether for pleasure or research, browsing is one of the best ways to find new reading material. As books are moved out of sight in favor of computer stations and users become more and more reliant on online searching, it becomes increasingly necessary to recreate this real-world experience of browsing in the digital realm. Libraries are moving progressively toward visual searches and virtual shelf browsing in the ongoing crusade to bring readers and books together.

Virtual shelf browsing is by no means a new concept. LibraryThing, launched in 2005, is an online service that helps users catalog and browse their (and their friends’) books. The visual interface is intended to replicate the experience of browsing around for favorites or new finds: it presents items as a collection of book covers, much like the user would see when searching through her own personal library at home. Users can even upload different covers to enhance the sense of physicality.

[Image: Hobbit covers]

In 2008, Amazon Web Services launched Zoomii, an online book-browsing tool that allowed users to scroll through books by genre and zoom in or out on a particular section of the “bookshelf.” This recreated the process many people go through in a bookstore – zooming in on a favorite author, then zooming out to see what else might be of interest, then zooming in again when something catches their eye.

It might seem that with larger collections numbering in the millions, such a virtual browsing experience runs the risk of becoming taxing for those maintaining the system and overwhelming for those attempting to use it. In 2010 North Carolina State University (NCSU), boasting a collection of 4 million volumes, proved that theory wrong. It released Virtual Shelf Browse, open-source software that allows library patrons to browse the shelves around a selected book or call number. Try it out yourself in the NCSU library catalog: search for a book, select a record, then click the “Browse Shelf” button on the right-hand side of the record to scroll through the collection by call number.

[Image: NCSU book browse]
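
The core idea behind this kind of feature is simple: sort the collection by a normalized, sortable form of the call number, then show the records on either side of the selected one. Here is a minimal sketch with made-up call numbers and placeholder titles, assuming the normalization has already been done (which is the genuinely hard part in practice):

```python
import bisect

# Toy shelf list of (sortable call number, title) pairs. Real systems first
# normalize call numbers so that string sorting matches shelf order.
shelf = sorted([
    ("QA 076.76 H94 K78 2014", "Title A"),
    ("QA 076.76 H94 N49 2000", "Title B"),
    ("QA 076.9 U83 K78 2006", "Title C"),
    ("QA 076.9 U83 N54 2000", "Title D"),
    ("QA 076.9 U83 N67 2013", "Title E"),
])

def browse_shelf(call_number, shelf, window=2):
    """Return the items shelved immediately around the given call number."""
    keys = [cn for cn, _ in shelf]
    i = bisect.bisect_left(keys, call_number)
    return shelf[max(0, i - window): i + window + 1]

for cn, title in browse_shelf("QA 076.9 U83 K78 2006", shelf):
    print(cn, "-", title)
```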

Strict call number browsing is not the only way to give patrons that same experience of discovery. OneSearch@IU presents materials found in IUCAT in a more approachable way. Each record displays a book cover, when available, to draw the user in visually. At the bottom of the record are the “Similar Books” and “Other Books by this Author” options with user-friendly scroll bars that offer patrons another way to explore the collection and unearth new reads.

[Image: Browse IUCAT]

Discovery: questions & answers

As many of you may know, DUX has been working for nearly a year to implement EBSCO Discovery Service (EDS), and we’re happy to be launching it for the fall semester as a new, improved OneSearch@IU.

Let’s start at the very beginning – a very good place to start, as they say. We’ve asked and answered a lot of questions throughout this process, and this post will focus on a few that I think are most fundamental.

What is a discovery tool, anyway?

Here’s one way of thinking about it: a discovery tool integrates a collection of disparate data sources so that search results are presented as a single, merged set.

How is this different from federated search?

I’m so glad you asked! It’s true that federated search products allow a single query to be delivered simultaneously to multiple information resources, then collect those results and display them as a single set. To accomplish this, the tool must generally rely on “translators”, which enable communication with the varied sources with varying levels of success; the ability to include content in the search also depends on the existence of a translator. In contrast, a discovery tool relies on a unified index created by bringing together data from a wide array of publishers, vendors, and other sources (including library catalogs and institutional repositories) into a single integrated set. This results in improved relevancy ranking and the ability to broaden the scope of searches to include local and subscribed content, and both print and digital materials, from an array of disciplines.
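
To make the contrast concrete, here is a toy sketch (none of the names or scoring reflect EDS’s actual implementation): federated search sends the query out to each source at search time and stitches the separately ranked lists together, while a discovery tool searches one index that was merged ahead of time and ranked as a single set.

```python
# Toy contrast between federated search and a discovery tool's unified index.
# Nothing here reflects EDS's actual implementation; names and scoring are made up.

class Source:
    """Stands in for one database or catalog with its own records and its own ranking."""
    def __init__(self, name, records):
        self.name, self.records = name, records

    def search(self, query):
        # Each source matches and ranks only its own records, in isolation.
        return [r for r in self.records if query in r.lower()]

def federated_search(query, sources):
    """Query every source at search time and stitch the separate result lists together."""
    results = []
    for source in sources:
        results.extend(source.search(query))  # merged after the fact, no shared ranking
    return results

def discovery_search(query, unified_index):
    """Search one index built ahead of time, so all results share one relevancy ranking."""
    matches = [r for r in unified_index if query in r.lower()]
    return sorted(matches, key=lambda r: r.lower().count(query), reverse=True)

sources = [
    Source("Catalog", ["Introduction to information retrieval",
                       "Retrieval systems: a handbook"]),
    Source("Articles", ["Relevance ranking in retrieval",
                        "Retrieval, retrieval, retrieval: a survey"]),
]
unified_index = [record for s in sources for record in s.records]  # built in advance

print(federated_search("retrieval", sources))
print(discovery_search("retrieval", unified_index))
```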

This is better how?

While not exactly apples to apples, it’s a whole lot closer – one big set, indexed “all of a piece”, improves relevancy across the board and increases the precision of the results returned. Catalog records may be bananas, but it’s a lot easier to properly weight the distribution of bananas and apples if you can put them in a single barrel, then teach the system to recognize them and sort accordingly. (Actually, I think I know what’s bananas – and it’s this illustration.) Also, the discovery tool typically presents an attractive interface designed to meet user expectations for ease of use, sharing, and other functions common to commercial sites such as Amazon or Google.

What does EDS include?

A quick answer to that question is: IUCAT records, all EBSCO content, and content from a large number of other vendors & sources (including Wilson, JSTOR, Elsevier, GPO, HathiTrust, Sage, MUSE, Web of Science, Wiley-Blackwell, Alexander Street Press, and others).

Who’s going to use this? Are we aiming this at undergraduates?

Clearly, this sort of tool is likely to appeal to undergraduates with its single search box, interdisciplinary coverage, lots of full text, and easy export/print/share capabilities. I’d venture to propose that those same features might find fans amongst other user groups. I don’t think it’s going out on a limb to say that while the ways, or the reasons, that graduate students, faculty, and researchers might use this tool may differ from those of undergraduates, there are plenty of use cases for those groups too. Personally, I’ve found it very helpful for getting a quick survey of what we have on a topic, whether for myself or at the reference desk – I like being able to easily retrieve articles, books from the collection, and other items with a single search.

Searching for answers

And now for the exciting conclusion … this post is a continuation of last week’s post on search behaviors, inspired by Jakob Nielsen’s recent article.

The problem, simply stated: for early adult users in particular, there are lots of places to search and far too many results – how are they supposed to choose well?

There is a long, distinguished list of brighter minds than mine who have addressed this problem. Nevertheless, here are some of my thoughts on how to make progress:

Information literacy (or fluency, if you prefer). As an academic library, does not nearly everything we do begin and end with teaching? It’s so easy to agree with Nielsen about teaching people to fish: we know that so many of them are figuratively standing in the middle of the creek making a grab, and they’re getting hungry. Thank you, and keep fighting the good fight, instruction librarians everywhere. [Here’s a special shout-out to the good folks of our Teaching & Learning department.]

Specifically, it’s a high priority for DUX to enhance our current class pages so that they better meet the needs of our teaching librarians and our teaching faculty as they work together to support and facilitate student learning at all levels. For other ideas related to this, see point three below.

Better discovery. First, if we want civilians to use library search interfaces – voluntarily and joyfully, anyway – they need to be much, much more like Google or Amazon. Rest assured, I too have a deep and abiding love for the power of peer review, scholarly content, controlled vocabularies, indexing, and their noble brethren. (Please don’t run me out of town on a rail!) But, really – who wouldn’t prefer a friendlier, more responsive IUCAT, for example? In a world where quality content and fantastic interfaces co-exist happily, even experts will love being able to do what they need to do more efficiently and more easily. There’s a lot of power in leveraging our end-users’ existing mental models, particularly as a starting point for novices. Once we hook that unsophisticated user with some positive experiences, she’ll be more ready for us when we roll out the specialized resources and advanced functionality that information professionals know and love.

Second, if, as Nielsen said, people are treating search engines like ‘answer engines,’ then we are uniquely positioned to load our discovery resources with good answers … in a ‘chocolate is good for you’ way, not in a ‘here’s a bran muffin for Halloween because it’s healthy, never mind that kid over there with the king-size candy bar’ way. Up to now, I’m guessing, the complex trajectory from identified information need (AKA assignment?) to PDF-in-hand has felt more like the latter than the former.

Bringing this back to IUB: EBSCO Discovery Service (EDS) is one obvious way to reach the “early adult” population Project Information Literacy talks about, and we at DUX have been working towards implementing this resource, checking and double-checking how catalog records display in the interface, which features to enable and which to switch off, and thinking a lot about how best to integrate its results into the Resource Gateway. Look for big action on this front very soon – like, this summer.

EDS isn’t the only thing, though – the integration of a discovery layer as the public interface for IUCAT is going to be a huge step forward in this area, and a system-wide task force is working away to evaluate the two candidate applications, VuFind (example: Mirlyn [Michigan]) and Blacklight (example: Searchworks [Stanford]). If all goes to plan, we should all be basking in a new OPAC as soon as next June.

Contextualizing information. The world isn’t simple. Neither are library websites – and across our profession, we are engaging with the hard work of eliminating unnecessary institutional complication from the inherent complexity of scholarly information and the research process.

Let’s frame the user’s experience in a way that helps them process what they see … and let’s do it invisibly and automagically, whenever possible. In some cases this is going to mean beginning by presenting fewer choices, and trusting our users to dig deeper to more comprehensive listings when they are ready. This idea can be hard for us to accept – but careful curation is everything. Imagine a huge empty wall in a museum: first, fill it with paintings; then, picture it with only three. What does this say about focus of attention?

In other cases, it’s going to mean finding ways to dynamically deliver relevant help. A project near and dear to my heart, and one that has a high profile on the DUX radar, is the development of a system that will allow us to do just this across our website and within IUCAT, too. We do a good job of embedding mechanisms for feedback (IM, email), and we can continue to seek opportunities to expand that support as vendors enable this functionality within their interfaces and as we update and redesign our mobile presence.

Rendering the intricacies of our many-faceted collections, services, and resources into something that’s simple enough for a novice, but powerful enough for an expert, might be one of the very hardest – and most worthwhile – things we could ever do. Now, I’m going to wrap up this post so I can flee the building before everyone reads what I said about Google …

More food for thought:

  • A great article from A List Apart: You Can Get There From Here: Websites for Learners
  • Some comments on mental models from Nielsen
  • A nice brief excerpt from an interview with usability expert Don Norman
  • Steve Krug on How We Use the Web, from Don’t Make Me Think
  • Again, Project Information Literacy

When Not to Google

You’re familiar with Google, of course – as are the faculty and students that you work with. You probably know of one or two others – Bing, perhaps. If you’ve been around for a while, you probably remember some of the earlier search engines, like AltaVista and Yahoo (both of which are still around). But have you ever heard of DuckDuckGo or Blekko? Check out this interesting rundown of a few current (non-Google) search engines – how they work and what they do best – from Lifehacker.

Seek, and keep on seeking …

In his latest Alertbox column, usability guru Jakob Nielsen tells a sad tale of search behavior:

Incompetent Research Skills Curb Users’ Problem Solving

I only wish that the results he reports seemed less obvious, but they felt distressingly familiar – the topic of a thousand conference presentations, committee agendas, casual conversations with colleagues, and internal dialogues across libraryland.

Some highlights, or low points, depending on how you want to look at it:

  • By and large, people aren’t very good at searching, and they don’t course-correct well;
  • They will type into any box they can find;
  • A lot of the stuff that’s out there to be found is junk;
  • While technology is making this a little better, none of this is improving fast enough.

So what do we do about it? Nielsen suggests “more education” and better interfaces, and who am I to disagree with that! (Although the fact that he doesn’t once mention the existence of an entire profession of trained searchers and information specialists in reference to the dilemma he presents is slightly deflating. I see yet another call for more and better library PR.)

Of course there’s other, more library-focused research. If you haven’t been reading the very interesting reports published by the Project Information Literacy researchers: yes, they are long, but yes, they are worth it. To quickly sum up: Project Information Literacy, based out of the University of Washington’s iSchool, has been studying how students (early adults, so primarily undergraduates) do research, using a variety of methodologies at a wide array of institutions nationwide. While their results show that students do turn first to course readings for assignment-based research, they have done some work on how students look for non-academic information that echoes Nielsen’s findings: when left to themselves, students aren’t sure how to process what they find.

In the interests of being a bit more specific about actions we might take, I’ll share some ideas of mine … next week! Same bat time, same bat channel: see you there!