Written by Anne-Marie Viola, Metadata and Cataloging Specialist
Last week ICFA announced the public launch of its online catalog, AtoM@DO, an implementation of ICA-AtoM, the archival collection management system sponsored by the International Council on Archives. And already the site has seen 340 visits!
Following a soft launch in December, during which the system was made available within the Dumbarton Oaks community, ICFA undertook two weeks of usability testing last month in preparation for the public launch. Although everyone was eager to finalize our content and make the site public, we decided to delay the launch for testing after two members of the team found themselves debating how titles should be formulated: specifically, which elements should be included and in what order. In addition, we had concerns about how the search engine handled diacritics – terms with accent marks aren't findable unless the query uses the correct character – and we wanted to understand how big a problem this would be for our multilingual scholars.
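To make the diacritics issue concrete: a search index can "fold" accents so that an unaccented query still matches an accented term. The snippet below is a minimal Python sketch of that general technique – it is not how AtoM's actual search engine works, and the place name is just an illustration:

```python
import unicodedata

def fold_diacritics(text: str) -> str:
    """Strip combining accent marks so accented and plain spellings match."""
    # NFD decomposition splits each character into a base letter plus
    # combining marks; dropping the marks leaves the plain form.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# Indexing and querying on the folded form makes both spellings findable:
assert fold_diacritics("Göreme") == fold_diacritics("Goreme")  # both "Goreme"
```

Without folding of this kind, a query typed without the accent simply misses the accented term, which is the behavior we were worried about for our multilingual scholars.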
Usability testing can be performed as a first step to identify problem areas on an existing website and/or as a way to confirm the usability of the site after design changes. It can be done with any number of users (even one may be helpful), but is most beneficial when performed with a variety of user types, including both new and returning users.
A usability study tests the effectiveness, efficiency, and satisfaction with which users accomplish identified goals on a given website. This involves observing typical users as they attempt to complete a set of common tasks on the site in question.
Planning our tests began with a general discussion of Goals; the team compiled the following list of what we wanted to achieve through testing:
- Determine how users access the site and identify the primary method (e.g., direct type-in, intranet link, or email announcement)
- Test efficacy of homepage text
- Identify search engine problems, specifically regarding diacritics
- Understand how users search by name
- Assess impact of inconsistency in title format and elements
- Confirm if users understand menu labels
- Determine if users understand how to navigate within a collection/use AtoM’s Context Tree functionality
- Determine if users understand descriptive elements in ISAD(G)
- Determine if users know where to look for access restrictions
- Determine if users know where to find the preferred citation for a collection
- Determine if users understand how to request material for research
- Determine how users prefer to get help
Our initial meeting also involved identification of AtoM@DO's target Users, a discussion of user profiles representing each group, and then development of a list of common tasks each user would need to perform in using the site. The first obvious group of users was our own department, as administrators. A handful of us had been building the site and using the system for the last few months, but we recognized that other ICFA staff would need to be able to assist researchers and that there would be future hires as well. Second, we identified our target end user: researchers. As we talked about who this user group included, we identified two distinct subgroups – fellows and readers – and characterized their differences (e.g., familiarity with Dumbarton Oaks, physical access to ICFA and its staff, expectation of online resources, etc.). Lastly, we added non-departmental staff – like our Byzantine Reference Librarian and our museum staff – who may use AtoM@DO for reference or research purposes.
In assembling profiles of each of these personas, we considered familiarity with our collections and Dumbarton Oaks, general expectations and comfort with web-based tools, typical search preferences, research expertise, experience with archival collections, and foreign language fluency.
Between these two lists – of Goals and Users – we came up with the third: Tasks. We identified both general tasks any user would need to accomplish to use the site (e.g., understand to which collection a result belongs, identify its repository, request material, etc.), as well as tasks specific to each user group (e.g., researchers’ need to determine availability for reproduction and staff’s need to identify collection scope). After compiling the list of tasks, I developed a scenario for each one, i.e., a situation that would create the need for a user to accomplish the specified task. I also noted how we would know if a task had been completed successfully and the various methods by which it could be achieved. In the end, we had a list of 15 tasks.
One example task: Context tree navigation
Task: Select a record from the authority list or search results and tell me to which collection it belongs.
Success: User selects record and then notes context tree menu and names collection.
Main: User clicks on collection-level record.
Alt: User selects child record, hovers over collection title in context tree menu and reads aloud; or user reads collection name from search results.
Scenario: Researcher has identified research materials of interest and wants to know more about the collection to which they belong.
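For anyone assembling a similar task list, the card above maps naturally onto a small record. Here's a hypothetical sketch – the structure and field names are ours for illustration, not anything AtoM or the usability literature prescribes:

```python
from dataclasses import dataclass

@dataclass
class UsabilityTask:
    """One task card: what the participant is asked, and how we judge it."""
    name: str          # short label for the task
    task: str          # the instruction given to the participant
    scenario: str      # the situation that motivates the task
    success: str       # what counts as successful completion
    paths: list[str]   # main and alternate routes to completion

context_tree = UsabilityTask(
    name="Context tree navigation",
    task="Select a record and tell me to which collection it belongs.",
    scenario="Researcher has identified materials of interest and wants "
             "to know more about the collection to which they belong.",
    success="User selects a record, notes the context tree menu, and "
            "names the collection.",
    paths=[
        "Main: click the collection-level record",
        "Alt: hover over the collection title in the context tree, or "
        "read the collection name from the search results",
    ],
)
```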
Now that we had our list of tasks to test, we needed to recruit participants. As access to the site was still limited to on-campus users, we decided to limit our testing to staff and fellows. We solicited participation via a brief email describing the test, which also included a link to a 10-question survey that we built through Survey Monkey to confirm that respondents fit the profile of one of our user personas. As an incentive to sign up, we also offered a $5 Starbucks gift certificate in appreciation of participants' time and feedback. In the end, we received nearly a dozen responses, of which we scheduled eight: four fellows and four staff.
With the participants lined up, all that was left was to prepare for the actual testing. I drafted a script that included each of the tasks, along with an explanation of why we were testing and how the test would be conducted, borrowing language from Steve Krug's "Rocket Surgery Made Easy". The script ensures that every participant is directed to complete each task in an identical fashion. I also created task cards – printouts of each task for users to reference during the test – and, because we had decided to record our sessions, a consent form for participants to sign.
Since we would be testing users in my office on my Mac, we elected to use Silverback to record each session. Silverback (Mac only) captures both what's happening on the screen and the user's face and audio. The software is easy to install, simple to use, and, best of all, offers a free 30-day trial. With the help of an observer to mark tasks, I was able to go back after each session and easily note how long each participant took to complete each task.
In the first week, we tested five participants. I limited testing to one participant per day whenever possible, so that I could transcribe each session and write down my thoughts before tackling the next one. For each session, I recorded time on task, whether or not the participant was successful, and any suggestions or other feedback offered. Within the first few sessions, it was immediately apparent where we had problems. By Friday, I had each session transcribed and a list of our findings for the team to review. Interestingly, our youngest participant was "most successful".
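Tabulating the sessions required nothing fancy. A few lines of Python along these lines show the kind of per-task summary we assembled (the numbers below are made up for illustration, not our actual results):

```python
from statistics import mean

# (participant, task, seconds on task, succeeded) -- illustrative only.
results = [
    ("P1", "Count collections", 40, True),
    ("P2", "Count collections", 120, False),
    ("P3", "Count collections", 95, False),
]

def summarize(results: list, task: str) -> None:
    """Print mean time on task and success rate for one task."""
    times = [sec for _, t, sec, _ in results if t == task]
    passed = [ok for _, t, _, ok in results if t == task]
    print(f"{task}: mean {mean(times):.0f}s on task, "
          f"{sum(passed)}/{len(passed)} succeeded")

summarize(results, "Count collections")
# Count collections: mean 85s on task, 1/3 succeeded
```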
We sorted the problems we noted into three types: problems we could address immediately, problems we could attempt to address through instructions, and problems we would discuss with the developers. Those we attempted to resolve with an immediate in-house fix mostly involved labeling or other issues with language on the site.
PROBLEM: In five tests, only one participant was able to identify how many collections are represented in AtoM@DO in less than a minute and a half.
SOLUTION: Change Browse menu option from “Archival records” to “Collections”
Other immediate changes made before we commenced testing in Week 2 included repointing the links on each repository's About page – which had led out to the Dumbarton Oaks website – to AtoM@DO's own repository records, so as to direct users further into the system rather than away from it, and revising the homepage text to make it less promotional and more instructive. This included moving two key pieces of text to where we had observed users looking for information.
Week 2 involved three more tests, each of which confirmed both the efficacy of our changes and the unresolved findings of the first week. At the conclusion of testing, we regrouped and decided on two courses of action before going public with the site. First, based on feedback from a number of participants who were surprised to find only 10 collections rather than records for all of ICFA's holdings, we decided to create collection-level records for as many collections as we could, even those not yet fully cataloged. This meant creating 27 additional records with ISAD(G)'s minimal-level description. Second, to address the problems that could not be resolved with labeling changes, we created an AtoM@DO-specific Help page, to which we pointed the system's Help link and which we also made available through our website. This also enabled us to include screenshots and other helpful images.
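For context, ISAD(G) designates six elements as essential for a minimal description: reference code, title, creator, date(s), extent, and level of description. A collection-level stub built around them might look like this sketch (the field names and values are illustrative placeholders, not an actual ICFA record):

```python
# ISAD(G)'s six essential elements, as a minimal collection-level stub.
# All values below are placeholders for illustration.
minimal_record = {
    "reference_code": "EX-0001",                  # unique identifier
    "title": "Example Fieldwork Photographs",
    "creator": "Example, A. (photographer)",
    "dates": "1950-1970",
    "extent": "3 boxes of photographic prints",
    "level_of_description": "Collection",
}
```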
Check back next week for a list of the bigger problems that we identified and the feedback we are preparing for the developers, Artefactual Systems. We have collaborated with this team of archivists and programmers previously to sponsor the development of functionality enhancements to ICA-AtoM and hope that our findings can contribute to future development.