The Next Mirlyn: More Than Just a Fresh Coat of Paint

U-M Library website from seven years ago

The U-M Library website from seven years ago shares a key trait with today's website: a lot of information is presented all at once, with little guidance for users.

This post is not going to go over what the next version of Mirlyn will look like. Instead, this post takes a look at how the next generation of U-M Library search will work. Before any changes are made to the current websites, potential revisions will go through a lengthy review process and trial period with Library staff and members of the U-M community.

Mirlyn, Search Tools, ArticlesPlus, etc. all serve to give our users access to the Library's diverse range of materials and the ability to pinpoint the items they want. Mirlyn gives users a way to search through the immense library catalog, Search Tools aggregates many digital sources, and ArticlesPlus gives users the ability to search through articles from over 100,000 journals.

Though some of these websites are not terribly old in library terms, the World Wide Web has matured and these websites have fallen behind. They are not as easy to use, featureful, or well designed as they could be. On and off for the past few years, University of Michigan Library IT (LIT) has been investigating what the next generation of search websites can be. Included in this has been an exploration of a new kind of search website.

A screenshot of a plain text website that only has two sentences.
The Library and the World Wide Web have come a long way since this, one of the earliest versions of the Library home page, from 1996.

Out Of Many, One

Currently we have many websites for many services. This makes sense since each service has special features and unique requirements. However, from the perspective of a new user this makes things confusing. Each search website is a new interface and URL to learn. Even if they learn how to create and filter a search for a book on Mirlyn, they have to learn a completely new interface when they want to find an article on ArticlesPlus. On the main public website we offer an aggregated search in the header, but if you want more results or want to refine results you are sent to a different interface or a completely different website.

However, though the specifics vary, the gestalt of all these search interfaces is the same. There is the ability to type a search term into one or more fields (keywords, title, author, etc.), a variety of facets to filter those results (publication date, format, language, etc.), as well as summarized and full views of individual records. Instead of focusing on the differences, LIT is investigating taking many of our interfaces—including Mirlyn, Search Tools and ArticlesPlus—and putting them all under one roof with a more unified interface.

With this unified search website a user would only have to go to one URL, select the search they want to do, and perform it. When they learn how to perform a search and filter results on one source they will have learned it for all of them. If they decide they want to find an article instead of a book, the article search will be immediately available and easy to switch to. There will be minor variations between the specific searches when necessary. For example, Mirlyn results often have a call number and a location, while ArticlesPlus results have a link to the article. However, with a unified design scheme, the differences should seem natural for users.

One Ring To Rule Them All...

The first question is how to create this unified search website when Mirlyn, Search Tools, and ArticlesPlus all search against different resources, each of which works in a very different way. Mirlyn talks to Solr (an indexing application running on our servers), while ArticlesPlus talks to Summon (a 3rd party resource we can run queries against).

LIT is piggybacking off the work of Columbia University Libraries by starting with the backend to their unified search website—called Spectrum. Inside Spectrum, we define a way to translate between a generic-looking search and the details of how to run a search against a specific backend. The interfaces you build just have to interact with the generic search, and the translation layer handles the generation of the real search results.

The advantage of having this level of indirection is the people building the interfaces don't have to worry about how the search actually happens. If ProQuest—the third party vendor that provides Summon—decides to completely redesign how their API works, we don't have to overhaul the whole website, we only have to update the translation layer. (An API is a standardized way for two or more programs to talk to one another.)
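The translation-layer idea can be sketched in a few lines. This is a hypothetical illustration, not Spectrum's actual code: the adapter names, query shapes, and parameter names are assumptions, loosely modeled on how Solr-style and Summon-style APIs accept queries.

```javascript
// Sketch of a translation layer: every interface builds the same generic
// query, and a per-backend adapter translates it into that backend's format.
// All names here are illustrative, not Spectrum's real API.

// Generic query shape shared by every search interface
function genericQuery(keywords, filters = {}, page = 1) {
  return { keywords, filters, page };
}

// Adapter for a Solr-style backend (what Mirlyn might talk to)
const solrAdapter = {
  translate(q) {
    return {
      q: q.keywords,
      fq: Object.entries(q.filters).map(([field, value]) => `${field}:${value}`),
      start: (q.page - 1) * 20, // Solr paginates by record offset
      rows: 20,
    };
  },
};

// Adapter for a Summon-style backend (what ArticlesPlus might talk to)
const summonAdapter = {
  translate(q) {
    return {
      "s.q": q.keywords,
      "s.fvf": Object.entries(q.filters).map(([field, value]) => `${field},${value}`),
      "s.pn": q.page, // Summon paginates by page number
    };
  },
};

// The interface never sees these differences -- it only builds generic queries.
const query = genericQuery("great lakes ecology", { format: "Article" }, 2);
console.log(solrAdapter.translate(query));
console.log(summonAdapter.translate(query));
```

If a vendor redesigns its API, only the matching adapter's `translate` function changes; every interface built on the generic query keeps working untouched.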

It is also worth noting that Spectrum is built on top of Blacklight, which is the same search interface used by the Hydra stack that the Library will be using on other projects like the upcoming research data repository - Deep Blue Data.

Single Page Application

Step two of building a unified search website is giving it an interface.

Currently, almost all the work of generating the search interfaces happens on the server. You hit search and the search parameters are sent to the server, which doesn't just run the search: it also organizes the results, re-renders the entire interface, and sends the newly rendered page to you. When you request the next page, the server goes through that whole process again just to generate the next page. Each time you do this, the browser does a full page refresh.

This isn't terribly efficient since a lot of the interface remains the same. Instead of doing this, the next generation of search will be what is known as a "Single Page Application". When you first load the website, it will also load a JavaScript program as part of the web page. This JavaScript application knows how to talk to the Spectrum application running on the server. When you perform a search, instead of refreshing the entire page the JavaScript will ask the server for matching search results. Once it receives them, the JavaScript will render them onto the screen.
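In outline, that flow looks something like the sketch below. The endpoint path, response shape, and element id are assumptions for illustration; the real Spectrum API will differ.

```javascript
// Minimal sketch of the single-page search flow: fetch results as data,
// then render them in place -- no full page reload.
// Endpoint, response shape, and element id are hypothetical.

// Pure helper: turn result records into display strings
function formatResults(results) {
  return results.map((record) => `${record.title} (${record.format})`);
}

// Ask the server for matching results only, then render them
async function runSearch(keywords, page = 1) {
  const url = `/spectrum/search?q=${encodeURIComponent(keywords)}&page=${page}`;
  const response = await fetch(url);
  const data = await response.json();

  const list = document.getElementById("results");
  list.innerHTML = formatResults(data.results)
    .map((line) => `<li>${line}</li>`)
    .join("");
}
```

Because only the result data crosses the wire, the surrounding interface (header, facets, search box) never needs to be re-sent or redrawn.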

Better, Stronger, Faster Than Before

As far as any user is concerned, it's just an ordinary website. There will be links and buttons to perform searches, the browser's back button will work as expected, and you can bookmark whatever page you are on to go back to in the future. That being said, there are a number of advantages to having a single page search application.

The JavaScript application can load multiple pages of results all at once, so the first page will take the same amount of time to load, but hitting the "next page" link will instantly render the next page of results because the application already got that page from the server. We can also load full, or nearly full, records with the search results. This means that full record pages would load almost instantaneously. When there is too much information in a record to preload it, the application can render a partial record page instantly along with a loading icon, then display the rest of the information as soon as it arrives from the server.
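The prefetching idea above can be sketched as a small page cache. This is a hypothetical illustration; `searchBackend` stands in for a real API call to the server.

```javascript
// Sketch of result-page prefetching: cache each page of results and fetch
// the following page in the background, so "next page" renders instantly.
// searchBackend is a stand-in for a real (async) call to the search API.
function makePager(searchBackend) {
  const cache = new Map(); // page number -> promise of results

  async function getPage(n) {
    if (!cache.has(n)) {
      cache.set(n, searchBackend(n)); // store the promise itself
    }
    const page = await cache.get(n);

    // Prefetch the following page in the background
    if (!cache.has(n + 1)) {
      cache.set(n + 1, searchBackend(n + 1));
    }
    return page;
  }

  return { getPage };
}
```

Caching the promise (rather than the resolved results) means a page that is still in flight is never requested twice, even if the user clicks "next page" before the prefetch finishes.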

Having a single page application also means that, unlike the current search websites, every new tab you open will be completely independent. Currently, if you select items across multiple pages on Mirlyn or ArticlesPlus, those selections can carry over into other tabs that you have open; with a single page application, each tab keeps its own state.

Into The Future

Though this may seem like a more complicated way to make a search website, it removes a lot of redundant work. Instead of building a new website for every kind of search we want to make, we can just add a translation layer for Spectrum and then specify any minor changes to the generic search interface that would make the new interface better. With a sturdy foundation we can more quickly develop and try out new features, and refine them over time.

This also gives us a lot more flexibility to change things in the future. Because the single page application and Spectrum only talk to one another with a generic search API, we can completely redesign or even replace Spectrum without having to modify the interface. As long as the new backend talks using the API, the single page application wouldn't know that anything has changed. Similarly, when we want to redesign the search interface, we don't have to recreate how searches are performed because all those functions are located in another application.

As a side benefit, other websites can talk to this generic search API. Whenever we want to embed Library search into another application we don't have to reinvent the wheel, we can just ask for results from Spectrum via the same API that the unified search interface uses.

All of this will hopefully make it easier and potentially faster to improve our search website well into the future.