Barcodes & QR codes … a quick scan

One of the nifty things about having a tiny computer (that is, your smartphone) on your person all the time? You have a whole new way to interact with the objects around you. The buzz on QR codes (sometimes called 2-D codes) has steadily grown for the last couple of years — after all, once something’s featured on primetime television, you know it’s catching on.

Libraries and higher education have been busy building services with these technologies as well – here are just a few examples.

Ryerson University Libraries put QR codes in their catalog records to give users another quick way to pull up an item’s bibliographic information (title, author, etc.) and its location on their mobile devices. Then they took it one step further and developed their own mobile application for scanning the QR codes as well as barcodes. Read the write-up in a recent issue of the Code4Lib Journal.
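If you’re curious what generating one of these codes actually involves, here’s a minimal sketch in Python using the third-party qrcode package. The record fields and pipe-delimited payload below are illustrative assumptions, not Ryerson’s actual format; a real system might simply encode a permalink to the catalog record.

```python
# Minimal sketch: pack basic bibliographic and location info into a QR code.
# Requires the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

record = {
    "title": "Don't Make Me Think",
    "author": "Steve Krug",
    "call_number": "TK5105.888 .K78 2006",   # illustrative call number
    "location": "Main Library - Stacks",     # illustrative location
}

# Encode the record as a simple pipe-delimited string; encoding a permalink
# to the catalog record would work just as well.
payload = " | ".join(f"{key}: {value}" for key, value in record.items())

img = qrcode.make(payload)            # returns a PIL image
img.save("catalog_record_qr.png")     # scannable with any QR reader app
```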

At the University of Waterloo, some students developed a mobile app called QuickCite, which produces formatted citations (MLA, APA, Chicago) from scanning a barcode … and they’re selling it for the low, low price of ninety-nine cents.
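The heavy lifting in an app like that is the metadata lookup behind the barcode, but the formatting step itself is simple. Here’s a minimal sketch (not QuickCite’s actual code, just an illustration) of turning ISBN-derived metadata into an APA-style string:

```python
# Minimal sketch: format book metadata (as you might get back from an ISBN
# lookup service) into an APA-style citation string. Illustrative only.
def apa_citation(author_last, author_first, year, title, publisher):
    return f"{author_last}, {author_first[0]}. ({year}). {title}. {publisher}."

print(apa_citation("Krug", "Steve", 2006, "Don't make me think", "New Riders"))
# -> Krug, S. (2006). Don't make me think. New Riders.
```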

The action isn’t all in Canada, though – at Miami University in Oxford, Ohio, computer science professor Bo Brinkmann [together with the Miami University Augmented Reality Research Group (MU ARRG!)] has been working on a prototype shelf-reading system powered by QR codes. The Android app uses augmented reality to scan a shelf, identify out-of-order items by their spine codes, and even calculate the fewest moves needed to put them back in order. Awesome! Together with two librarians, he gave a presentation at ACRL in Philadelphia last month, and I was really impressed by what I saw (read a write-up of the session).
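That “fewest moves” piece is a classic little algorithm: every book that isn’t part of the longest already-in-order run has to be pulled and reshelved. I don’t know exactly how the MU ARRG! prototype computes it, but a minimal sketch of that standard approach looks like this:

```python
# Minimal sketch: fewest items to pull and reshelve to sort a shelf.
# Every item outside the longest increasing subsequence (LIS) must move.
from bisect import bisect_left

def min_moves_to_sort(ranks):
    """ranks[i] is the correct shelf position of the i-th item as scanned."""
    tails = []  # tails[k] = smallest possible tail of an increasing run of length k+1
    for r in ranks:
        pos = bisect_left(tails, r)
        if pos == len(tails):
            tails.append(r)
        else:
            tails[pos] = r
    return len(ranks) - len(tails)

# Shelf scanned left to right: two items are out of place.
print(min_moves_to_sort([1, 3, 2, 5, 4]))  # -> 2
```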

Have you seen any cool library applications for QR codes or barcode scanners? Feel free to share in the comments.

Searching for answers

And now for the exciting conclusion … this post is a continuation of last week’s post on search behaviors, inspired by Jakob Nielsen’s recent article.

The problem, simply stated: for early adult users in particular, there are lots of places to search and far too many results, with no obvious way to choose well.

There is a long, distinguished list of brighter minds than mine who have addressed this problem. Nevertheless, here are some of my thoughts on how to make progress:

Information literacy (or fluency, if you prefer). As an academic library, does not nearly everything we do begin and end with teaching? It’s so easy to agree with Nielsen about teaching people to fish: we know that so many of them are figuratively standing in the middle of the creek making a grab, and they’re getting hungry. Thank you, and keep fighting the good fight, instruction librarians everywhere. [Here’s a special shout-out to the good folks of our Teaching & Learning department.]

Specifically, it’s a high priority for DUX to enhance our current class pages so that they better meet the needs of our teaching librarians and our teaching faculty as they work together to support and facilitate student learning at all levels. For other ideas related to this, see point three below.

Better discovery. First, if we want civilians to use library search interfaces – voluntarily and joyfully, anyway – they need to be much, much more like Google or Amazon. Rest assured, I too have a deep and abiding love for the power of peer review, scholarly content, controlled vocabularies, indexing, and their noble brethren. (Please don’t run me out of town on a rail!) But, really – who wouldn’t prefer a friendlier, more responsive IUCAT, for example? In a world where quality content and fantastic interfaces co-exist happily, even experts will love being able to do what they need to do more efficiently and more easily. There’s a lot of power in leveraging our end-users’ existing mental models, particularly as a starting point for novices. Once we hook that unsophisticated user with some positive experiences, she’ll be more ready for us when we roll out the specialized resources and advanced functionality that information professionals know and love.

Second, if, as Nielsen said, people are treating search engines like ‘answer engines,’ then we are uniquely positioned to load our discovery resources with good answers … in a ‘chocolate is good for you’ way, not in a ‘here’s a bran muffin for Halloween because it’s healthy, never mind that kid over there with the king-size candy bar’ way. Up to now, I’m guessing the complex trajectory from identified information need (AKA assignment?) to PDF-in-hand feels more like the latter than the former.

Bringing this back to IUB: EBSCO Discovery Service (EDS) is one obvious way to reach the “early adult” population Project Information Literacy talks about, and we at DUX have been working towards implementing this resource: checking and double-checking how catalog records display in the interface, deciding which features to enable and which to switch off, and thinking a lot about how best to integrate its results into the Resource Gateway. Look for big action on this front very soon – like, this summer.

EDS isn’t the only thing, though – the integration of a discovery layer as the public interface for IUCAT is going to be a huge step forward in this area, and a system-wide task force is working away to evaluate the two candidate applications, VuFind (example: Mirlyn [Michigan]) and Blacklight (example: Searchworks [Stanford]). If all goes to plan, we should all be basking in a new OPAC as soon as next June.

Contextualizing information. The world isn’t simple. Neither are library websites – and across our profession, we are engaged in the hard work of stripping unnecessary institutional complication away from the inherent complexity of scholarly information and the research process.

Let’s frame the user’s experience in a way that helps them process what they see … and let’s do it invisibly and automagically whenever possible. In some cases this is going to mean starting by presenting fewer choices, and trusting our users to dig deeper into more comprehensive listings when they are ready. This idea can be hard for us to accept – but careful curation is everything. Imagine a huge empty wall in a museum: first, picture it crowded with paintings; then, picture it with only three. Which arrangement focuses your attention?

In other cases, it’s going to mean finding ways to deliver relevant help dynamically. A project near and dear to my heart, and one with a high profile on the DUX radar, is the development of a system that will let us do just that across our website and within IUCAT, too. We already do a good job of embedding feedback mechanisms (IM, email), and we can keep looking for opportunities to expand as vendors enable this functionality within their interfaces and as we update and redesign our mobile presence.
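As a rough illustration of the idea (the paths and messages below are made up, not our actual system), dynamic help can be as simple as matching the page a user is on to the most relevant help snippet or contact point:

```python
# Minimal sketch: serve a context-appropriate help message based on the page
# a user is viewing. Paths and messages are illustrative placeholders.
import re

HELP_RULES = [
    (r"^/catalog/record/", "Need this item? See the request and delivery options, or IM a librarian."),
    (r"^/databases/",      "Not sure which database fits your topic? Try the subject guides."),
    (r"^/search\?",        "Too many results? Use the format and date filters to narrow things down."),
]

def help_for(path):
    """Return the first help message whose pattern matches the current path."""
    for pattern, message in HELP_RULES:
        if re.match(pattern, path):
            return message
    return "Questions? Ask a librarian via IM or email."

print(help_for("/catalog/record/12345"))
```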

Rendering the intricacies of our many-faceted collections, services and resources into something that’s simple enough for a novice, but powerful enough for an expert, might be one of the very hardest – and most worthwhile – things we could ever do. Now, I’m going to wrap up this post so I can flee the building before everyone reads what I said about Google …

More food for thought
A great article from A List Apart: You Can Get There From Here: Websites for Learners
Some comments on mental models from Nielsen
A nice brief excerpt from an interview with usability expert Don Norman
Steve Krug on How We Use the Web from Don’t Make Me Think
Again, Project Information Literacy

Linked Data Resources and Webinars

“Linked Data” describes the methods used to structure and interlink data so that they become more useful. Tim Berners-Lee, the inventor of the World Wide Web, gave a TED Talk on linked data in 2009 in which he made the case for Linked Data as an essential building block in the development of “Web 3.0”, or the Semantic Web. Although the Semantic Web was introduced conceptually nearly 10 years ago, it has only recently begun to gain visibility outside of web science research communities. Below I have listed a few upcoming webinars (some free, some for a fee) that will cover introductory aspects of Linked Data principles, as well as a recently released, free e-book that covers many of the basics of Semantic Web technologies in excellent detail and plain language.
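To make the idea a bit more concrete, here’s a minimal sketch using Python’s rdflib package: describe a resource as a handful of RDF triples, then link it out to another dataset’s identifier, which is the basic move Linked Data is built on. The example.org URI is a placeholder, not a real identifier.

```python
# Minimal sketch with rdflib (pip install rdflib): describe a resource as RDF
# triples and link it to another dataset, the core move of Linked Data.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

DC = Namespace("http://purl.org/dc/elements/1.1/")
BIBO = Namespace("http://purl.org/ontology/bibo/")

# Placeholder URI for the e-book listed below; not a real identifier.
book = URIRef("http://example.org/books/linked-data-evolving-the-web")

g = Graph()
g.add((book, RDF.type, BIBO.Book))
g.add((book, DC.title, Literal("Linked Data: Evolving the Web into a Global Data Space")))
# The "linked" part: point at a URI from someone else's dataset.
g.add((book, DC.subject, URIRef("http://dbpedia.org/resource/Linked_data")))

print(g.serialize(format="turtle"))
```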

Book: Linked Data: Evolving the Web into a Global Data Space

ASIST Webinars: http://www.asis.org/Conferences/webinars/2011/linked-data.html (March 9 and 15)

ALCTS Webinar: http://www.ala.org/ala/mgrps/divs/alcts/confevents/upcoming/webinar/cat/031611.cfm (March 16)

Wrap-up: Google Analytics webinar series

We certainly enjoyed the recent webinar series on Google Analytics, Library Analytics: Inspiring Positive Action through Web User Data (an ALA TechSource webinar/workshop), and we hope that you did too. If you missed the sessions the first time around, we do have access to the archives, so give us a yell if you’d like to see them.

We also wanted to collect some information here, for easy access. Enjoy!

Session 1: The Basics of Turning Numbers into Action
Continuing the Conversation: ALA TechSource blog post with slides, additional resource links, and content

Session 2: How Libraries Analyze and Act
Continuing the Conversation: ALA TechSource blog post with slides, additional resource links, and content

The presenters provided the following list of recommended readings:
Wikipedia Entry: Web Analytics
“About Us” Page, Web Analytics Association
Measuring Website Usage with Google Analytics, Part I
Measuring Website Usage (from http://coi.gov.uk/guidance.php?page=229)
Library Analytics (Part 1)

Arendt, Julie, and Cassie Wagner. 2010. “Beyond Description: Converting Web Site Usage Statistics into Concrete Site Improvement Ideas.” Journal of Web Librarianship 4(1): 37–54.
Black, Elizabeth L. 2009. “Web Analytics: A Picture of the Academic Library Web Site User.” Journal of Web Librarianship 3(1): 3–14.
Waisberg, Daniel, and Avinash Kaushik. 2009. “Web Analytics 2.0: Empowering Customer Centricity.” SEMJ.org 2(1).

You may also be interested in this recent interview with the presenters, “Paul Signorelli and Char Booth Discuss the Role of Web Analytics in the Library.”

ALA Midwinter

This past weekend, I attended my first Midwinter conference in San Diego. Over the summer, I attended ALA Annual in Washington, D.C., which was, quite frankly, overwhelming. The sheer number of people at the convention center and at programs made it difficult to navigate the conference and take part in (or even find) programs that I thought would be interesting and educational. Midwinter, however, had an entirely different feel, and I think I took away more from this conference than from previous conferences I have attended.

Part of the reason I was at Midwinter was to begin my participation in the Emerging Leaders program. For those unfamiliar with it, the program is designed to encourage younger and new-to-the-field librarians to become involved in ALA and to develop leadership skills and opportunities. ELs are divided into groups to work on a project that runs from now until the Annual conference in June. My project, “Money Smart Week,” focuses on developing financial literacy and is aimed at libraries. The Chicago Federal Reserve is in the leadership role for the project and is collaborating with ALA to promote programming and outreach at all libraries, although public libraries are the main target. Last year, Naperville Public Library and another library in Wisconsin (the name escapes me now) ran a similar program with enormous success. This year, several states are holding a statewide Money Smart Week; Indiana’s will be in October. The current goal, however, is to have libraries nationwide participate in the program in April of 2012. (For now, states hold Money Smart Week at various times; Nebraska is having theirs in November.)

Our EL team has several tasks. The first is to assist with the relaunch of the website dedicated to Money Smart Week 2012: filling in holes in the current site, updating information, and adding new resources as necessary. We will also be creating a survey and a way of evaluating the programs that take place in 2011, in the hopes that the results will be used when planning the 2012 events.

I think the most interesting part of the project will be creating the evaluation method, and I know I will draw on my experience in DUX when designing the survey (or whatever method we use). I have had the opportunity to administer usability tests, and to be a “guinea pig” on a number of occasions. Although this will be a different type of evaluation, I still think it will be an interesting comparison to what we do here in DUX: understanding how people approach our website, what they gain from it, and how we can make it better.

In an attempt to become better acquainted with all the different committees, roundtables, and divisions that ALA encompasses, I attended a number of committee meetings and discussions on Saturday. During one of these sessions, I heard someone say “I worry about the digital divide-the division between rich and poor, those who can afford iPads and iPhones and the data plans, and those who cannot.” I found this extremely interesting, because so often we hear of “digital natives”-those who have always experienced the internet and know how to use computers and the like, but far less frequently is the idea of those who are not exposed to technology because of a social class difference explored-particularly in an academic setting. The exchange reminded me of a similar experience I observed over the summer. There was an X153 class that came for instruction. X153 is class for incoming freshman that are often the first generation of college students, and may be unfamiliar with newer technologies-such as Twitter. One IA used Twitter as a learning tool, which, while an excellent idea, is difficult for students who have never heard of Twitter, don’t have an account, and don’t have any idea how to use it. However, would these students have been exposed to Twitter if the IA not used it during class that day? Or was there a better platform for the IA to use Twitter? As an IA and a DUX-or, I often find myself wanting to show students I teach new technologies or ways of searching that they may be unfamiliar with-but in 50 minutes, how do I bridge that gap? I’m hopeful that my time at IU, combined with my new experiences with ALA and as an EL, will help me understand the best way to approach such a difficult topic.