ELUNA 2019

Notes from Ex Libris Users of North America 2019

Atlanta, GA

Narrative

From Tuesday, April 30 through Saturday, May 4, I attended the 2019 meeting and conference of Ex Libris Users of North America (ELUNA), held at the Hilton Hotel in Atlanta, Georgia. I did so in order to present some co-authored scholarship that I have been leading with colleagues in the CSU, and to ensure that I was aware of the most up-to-date information about the Primo discovery software. Below are my session notes, which vary in length depending upon how informative I found each session. ELUNA is (theoretically) organized in 'tracks' so that sessions pertaining to the same or similar products are spread out over the course of each day, and people interested in only one aspect of Ex Libris' products, such as Primo, can absorb all the relevant content; however, the scheduling of the Primo track was definitely sub-optimal this year, so I was forced to make some hard choices about which sessions to attend. Fortunately, I was able to download all the relevant slideshow files for later review. They are located at: X:\Gabriel\ELUNA2019\ELUNA_Primo_Content\

If you are interested, find my (often cryptic) notes below with the detailed session descriptions. Let me know if you have any questions.

TUESDAY, APRIL 30

6:00PM - 9:00PM OPENING RECEPTION – GEORGIA AQUARIUM

Nice aquarium, larger than ours in Long Beach, at least prior to the recent renovations there. My co-presenters and I had not had the opportunity to practice our presentation prior to the conference, so after hitting the open bar we went over our slides.

WEDNESDAY, MAY 1

9:00AM - 9:20AM - ELUNA 2019 OPENING SESSION

The opening session focused on big data and AI, how they are spreading rapidly, and how some libraries are already adopting them. Nothing groundbreaking, but everyone should be paying attention to this trend.

9:20AM - 10:20AM – EX LIBRIS COMPANY UPDATE AND STRATEGY

They think Alma has collected enough data that they can now use it to improve the product based on user behavior. To that end they have introduced an artificial intelligence assistant called "DARA". More on that later.

Lots of hype about the 4th industrial revolution, plus shout-outs to campuses that are full-stack Ex Libris product users. Announced a new "Provider Zone", similar to the Community Zone, where publishers can improve their metadata in the Central Index and in basic bib records.

Pledges to increase transparency in their R&D process. They have hired 20 new people in their "customer success" department, which handles library-reported problems. On average, 30% of each new Alma and Primo release consists of improvements that come from the Ex Libris Idea Exchange. It really is important for people to vote on there. https://ideas.exlibrisgroup.com/

Coming soon: "App Center", a collection of extensions and on-platform apps that can be easily added to Alma/Primo, with easier implementation than the existing Developers Network. Introduced yet another new product in their never-ending quest to monopolize all things related to libraries: Rialto, which basically does the same things as GOBI.

10:45AM - 11:30AM - INSIGHT OR APOPHENIA? THE PITFALLS OF DATA FOR DATA’S SAKE

Google's deep-learning image search can now basically find pictures of anything in anything; neat for art, but a bad way to go about data analysis.

Big data is actually a giant mess; there is lots of implicit bias in data collection that can be very hard to detect if you didn't collect the data yourself, which is the situation libraries are in almost all the time. Many algorithms have high false-positive and false-negative rates, but we often focus on accuracy alone and so have a false sense of how well things based on big data actually work.

Garbage in, garbage out: you can't mine gold out of crappy data. E.g., the Primo Analytics sign-in field is a binary variable, when actually people will do things in a session before they sign in, so calculations based on it will be inaccurate.

Data collection should be intentional ("we need this for X purpose"); don't try to hoover up everything, because you will probably do it poorly and won't be able to get the insights that you want.

We should apply GDPR principles to everyone. Personally identifiable information is a hot potato, risky in the event of a hack, so we should collect with DATA MINIMIZATION in mind. In line with GDPR, we must be transparent: we need an easily found privacy policy that lists all the data collected. As libraries we should be better than Silicon Valley et al.

No amount of data will get you out of talking to people; data never speak for themselves. Self-reported data is well known to be somewhat or very inaccurate, so you can't rely on it alone. You MUST use mixed methods to accurately understand the world. A la Marie Kondo, ask: does this data spark insight? If it doesn't and contains possible PII, then get rid of it.

Q&A
Talking to people can be hard; what does she recommend? Guerrilla usability testing, mini-ethnographic observation, and pop-up focus groups.

11:45AM - 12:30PM - FINDING YOUR WAY THROUGH DATA WITH VALUE STREAMS

What is a value stream? A term from lean methodology: a focus on creating more "value" with the same or fewer resources. The value stream represents all the things we do to create value for our users.

At Loyola Chicago they have one instance of Alma/Primo but four libraries/campuses, which requires a lot of coordination and training.
They did a user roles (i.e. permissions) project to help staff understand their roles in serving end users. They determined the types of profiles they needed to create based on position descriptions, then deleted all current roles and reassigned them based on their analysis of the workflows and position descriptions. This project has streamlined onboarding of new staff (which happened recently for them), and they also discovered a lot of role/permission misalignment in existing staff capabilities.

Note: Role Area and Role Parameter are not queryable in Alma Analytics

The dilemma: have lots of very specific profiles, or fewer but more inclusive and capable ones? They went with the latter option. Is it actually necessary to tailor permissions so granularly and securely? They think the small risk of giving a couple of people additional capabilities as part of a role template is outweighed by the cost of granularly adjusting permissions for each person. Pretty disappointed in this session since it was billed as having relevance to Primo but didn't.

Q&A

  • Did they get any pushback from staff who now had more permissions and capabilities, worried that they might be asked to do more work, possibly outside their job descriptions? Not yet. They have a very collaborative environment and staff are not afraid to talk to management.

1:30PM - 2:15PM - IF YOU BUILD IT, WILL THEY COME?: NATURAL EXPERIMENTS WITH SPRINGSHARE’S PROACTIVE CHAT REFERENCE

My session. It was well-attended.

2:30PM - 3:15PM - BEST PRACTICES FOR SUCCESS WITH DISCOVERY

ExL advice for maintaining e-resources: monitor the listservs, check the Knowledge Center when there is an issue, review configuration options, and share information widely internally. Use the proper escalation path (found in the slides) to bump Salesforce cases up if you are not getting the support you need.
Noted that the resource recommender has many open fields for customization; you can recommend basically anything, and we don't use this at LB nearly as much as we could. Noted that the bX recommendation system uses global data from all Primo users, with no data source customization.

All workflows should be documented in case someone is hit by a bus or leaves - note: I have not done this at CSULB.

3:45PM - 4:30PM – EMPOWERING THE LIBRARY

DARA is an AI designed to help libraries find better workflows that might not be obvious to them.
Yet another new product is on the way! It is unnamed but concerns resource sharing and apparently will compete with ILLiad/Tipasa.

Big deal: the Summon Index and the Primo Central Index will be merging. This will affect the amount of content available to us and the material types. Details here: https://knowledge.exlibrisgroup.com/Primo/Knowledge_Articles/The_Ex_Libris_Central_Discovery_Index_(CDI)_%E2%80%93_An_Overview We will be moved to the new metadata stream in the first half of 2020.

4:45PM - 5:30PM - SPRINGSHARE INTEGRATION WITH EX LIBRIS TOOLS

All public-facing LibApps products can integrate with Alma/Primo in some fashion.

How to get LibGuides into Primo: DO NOT use the method recommended by Northwestern librarians on the Ex Libris Developers Network. Instead, go to LibGuides > Tools > Data Export and use the OAI link. With the LG OAI link, go to the Primo Back Office (PBO) and use the OAI splitter, then use pipes to pull the content. Three pipes: full harvest, delete, and update (example request URLs below). Use the scheduler to run them automatically. Some normalization may be required.
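For reference, an OAI-PMH harvest is just a parameterized HTTP request; something like the following is what the full-harvest and update pipes would pull (the domain and /oai path are my placeholders, not confirmed LibGuides URLs; the delete pipe watches for records returned with a deleted status):

    # Full harvest: everything exposed by the LibGuides OAI endpoint
    https://YOURSCHOOL.libguides.com/oai?verb=ListRecords&metadataPrefix=oai_dc

    # Update: only records changed since a given date (the scheduler supplies it)
    https://YOURSCHOOL.libguides.com/oai?verb=ListRecords&metadataPrefix=oai_dc&from=2019-05-01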
Recommended to hide the display of the publication date of all LibApps content in Primo, since it grabs the originally published date, not the most recent update date.

If you assign different dc.type values to the LG content, e.g. course guide or research guide, then that is what displays as "material type" in Primo; a sample record is sketched below.
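For illustration, a minimal oai_dc record of the sort the pipe harvests might look like this (the title and URL are hypothetical; the dc:type value is what surfaces as the material type):

    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Citing Sources: APA Style</dc:title>
      <dc:identifier>https://YOURSCHOOL.libguides.com/apa</dc:identifier>
      <dc:type>Research Guide</dc:type> <!-- displays as "material type" in Primo -->
    </oai_dc:dc>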

The other method to get LG content into Primo is the resource recommender. You can do either of these for e-reserves, and can do it for the Databases A-Z records.

LibAnswers doesn't have OAI support but does have an API. Lots of libraries are using a dedicated LibAnswers queue to handle Primo problem reports; contact Ryan McNally at Northeastern University for details.

LibInsight is COUNTER 5 and SUSHI compliant, and can ingest Alma circulation and e-resource usage data.

THURSDAY, MAY 2

9:00AM - 9:45AM - SEAMLESS REQUESTING FROM PRIMO TO ILLIAD: A 5-SECOND DEMO

With APIs there is actually no reason to use webforms to talk to ILLiad anymore. They are using the TAGS section in Primo to build their customization; TAGS is a Primo directive that appears on every type and view of Primo pages.

Design principles: if you make requesting easier, people will do it more; all the data needed to place requests is in Primo already; it needs to meet accessibility requirements; and keeping users in the same application reduces cognitive load and improves UX. They knew from Google Analytics that after people place requests on the ILLiad form, they usually don't go back to Primo. But they want people to stay in Primo and continue to discover things. Impressive demo.

From UX studies we know that motion on a page equates to work happening for end users, so they wanted something to refresh.
They have a script that populates a database table sitting between Primo and ILLiad, so that requests from Primo go to the intermediate table and then get sent on to ILLiad at 5-minute intervals. This allows requests to be placed from Primo even if ILLiad is down/offline. The end user always sees a message that the request is placed (the request will actually be placed a bit later) unless the intermediate database goes down, which it hasn't.
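A minimal Node.js sketch of that store-and-forward pattern as I understood it; every name here (table, columns, ILLiad endpoint, API key) is my own placeholder, not their code, and their actual implementation is in the GitHub repo linked below:

    // Store-and-forward queue between Primo and ILLiad (illustrative sketch).
    const Database = require('better-sqlite3');
    const db = new Database('requests.db');

    db.exec(`CREATE TABLE IF NOT EXISTS pending_requests (
      id       INTEGER PRIMARY KEY AUTOINCREMENT,
      username TEXT NOT NULL,
      metadata TEXT NOT NULL, -- citation fields captured from the Primo record
      created  TEXT DEFAULT CURRENT_TIMESTAMP
    )`);

    // Called when a user submits the request form in Primo. Only our own
    // table is touched, so the user can be told "request placed" even if
    // ILLiad itself happens to be down at that moment.
    function acceptRequest(username, metadata) {
      db.prepare('INSERT INTO pending_requests (username, metadata) VALUES (?, ?)')
        .run(username, JSON.stringify(metadata));
    }

    // Every 5 minutes, forward queued rows to the ILLiad API (placeholder
    // URL/key) and delete the ones that go through; failed rows stay queued
    // and are retried on the next pass.
    async function drainQueue() {
      for (const row of db.prepare('SELECT * FROM pending_requests').all()) {
        const res = await fetch('https://ILLIAD.EXAMPLE.EDU/ILLiadWebPlatform/Transaction', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json', ApiKey: 'PLACEHOLDER' },
          body: row.metadata,
        });
        if (res.ok) db.prepare('DELETE FROM pending_requests WHERE id = ?').run(row.id);
      }
    }
    setInterval(drainQueue, 5 * 60 * 1000);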

Since implementing this new system, their ILLiad requests have increased 130%. They now have a little problem with being overloaded with returns but that is a good problem to have.

The code cannot accommodate multi-volume items; they have not figured out a way to deal with that.
https://github.com/tnslibraries/primo-explore-illiadTabs

Q&A

  • Do all requests go into ILLiad? Yes.
  • What about new users getting ILLiad accounts? They have a patron load into ILLiad.
  • Is their ILLiad locally hosted or cloud and does it matter? It doesn’t matter as long as the ILLiad version has the API.
  • Can this system accommodate users who aren’t in the LDAP/SSO system? NO, to use this, everyone must be in one centralized authentication system.
  • How long do they keep the data in that middle level database table? They clear it out at the beginning of every semester.
  • Where do patrons go to manage the requests? Right now they have to go into ILLiad to cancel requests. But there is a plugin developed by Orbis Cascade (MyILL) that lets the MyAccount area in Primo talk to ILLiad and that is the next step in this project.

10:00AM - 10:45AM - INCREASING SEARCH RESULTS: USING ZERO SEARCH RESULTS & JAVASCRIPT TO REFRAME SEARCHES

Last year at ELUNA 2018, people at WSU noted there were some errors in the zero-search-results data from Primo Analytics. These presenters pressed on, did a literature review, and dove into the data to categorize zero-result searches.

They found that they were getting a lot of database-name searches in Primo, so they turned on the resource recommender.

There are searches that show up in PA that look good but returned nothing; was PCI down? Was Alma offline? There is no way to know given the PA data, because there are no full timestamps, only a date indicator.

Many libraries have reported that when they moved from old Primo to the NUI, the raw number of zero-search-results counts declined dramatically. No one has been able to really explain this; Ex Libris has been asked and doesn't have an explanation. At UCO they saw a big drop moving to the NUI, and an even further decline in zero-search-results queries after turning on the resource recommender.

Categories of zero search hits:

  • Boolean errors
  • spelling/typographical errors
  • nonsense queries
  • library services queries

They came up with the idea of using JavaScript to reformat queries so that they would get results, e.g. if someone searched a DOI URL, strip out everything but the actual DOI (see the sketch below). The code they used is up online. With their new JS implementation, which is on their homepage, not inside Primo, they did see a further decline in the number of zero search results. Future plans: parse ISBNs, handle punctuation, and implement within Primo proper.
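A minimal sketch of the DOI case (the regex and function are mine, not their published code):

    // If a query looks like a DOI URL, reduce it to the bare DOI so the
    // search can match it; otherwise return the query unchanged.
    function reframeQuery(query) {
      // DOIs begin with "10.", a 4-9 digit registrant code, and a suffix.
      const doi = query.match(/\b10\.\d{4,9}\/\S+/);
      return doi ? doi[0] : query;
    }

    // reframeQuery('https://doi.org/10.1000/xyz123') -> '10.1000/xyz123'
    // reframeQuery('climate change')                 -> 'climate change'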

11:15AM - 12:00PM - ARTIFICIAL INTELLIGENCE: IS IT A REAL THING FOR LIBRARIES?

We are definitely in a hype cycle for AI now. What even is AI? At present, machine learning. How can machine learning be brought into libraries? The standards for content delivery are now set by Silicon Valley, Amazon, etc. We might not like it, but that is just the world we live in now, and libraries need to keep up.

ExL ran a survey about tech adoption and found customers are already thinking ahead, though only a minority thought machine learning would be implemented in their library in the next 10 years. Big data: one of the big benefits of moving to the cloud is that ExL can aggregate libraries' data, which was previously siloed and stored locally, and mine it for insights across all their customers. ExL anonymizes all data, but even so there are clear trends that can be seen, and they are already using this to power the bX recommender; see the ExL whitepaper: http://pages.exlibrisgroup.com/alma-ai-whitepaper-download

The new AI tool is DARA, the Data Analysis Recommendation Assistant, meant to speed up work in Alma and reduce repetitive tasks. DARA is not trying to do anything that couldn't already be done by people manually, but it lowers the bar: it brings superuser knowledge to anyone and does it with far fewer clicks. Through machine-learning deduping, DARA can tell when certain libraries are not doing things as efficiently as others.

Note: make sure we are doing "real-time acquisitions".

DARA's recommendations are available in Alma in a task-list format; they only display to users whose permissions/roles are high enough to actually implement the recommended change. Coming DARA recommendations: if no item is available, prompt for a resource sharing request; in cataloging, locate "high quality" records for copy cataloging; generate high-demand lists automatically.
ExL admits it still has a long way to go. Nothing they are doing is "AI" yet, just machine learning for deduplication purposes plus basic statistics and logic to determine the applicability of recommendations.

Q&A

  • Will they be charging more for Alma AI? No, it will all be bundled in.
  • Will they do AI stuff in Primo? DARA just applies to Alma, Esploro, Rialto, and the behind-the-scenes products; there are machine-learning improvements planned for Primo once the new Central Discovery Index goes into production, to create relationships between records.

12:15PM - 1:00PM - WHAT ANALYTICS TELL US ABOUT FACET USE

At CU Boulder they took away the 'discipline' facets in Summon, since their UX testing showed that people confused them with subject headings; now they just use LCSH. Comparing OBI (the default ExL Analytics) with Google Analytics, there are pretty big discrepancies... which to trust? As percentages, there aren't big differences in facet usage between on campus and off campus, indicating that the librarians aren't really skewing the data at CU. 'More' is the third most used facet. They can see a three-step UX problem where people select a facet but then don't click Apply.

At UMich they just use Summon for article discovery. They changed "Facets" to "Filters", supported by much anecdotal evidence and some studies. They too use Google Analytics for tracking. Order of filter groups: pub date, format, subject, language, based on frequency of usage. Not taking into account advanced-search Boolean and pre-filtering, only 3.4% of searches in the article discovery area used filters of any kind; reportedly very low compared to other places. Philosophical question: is filter use good or bad? If relevancy ranking were amazing, then filters would be unnecessary except for the broadest searches.

At Yale they use Blacklight powered by the Summon API. They have an intermediate bento result for the initial query; people then need to make an additional click to see more than the select results that appear in the bento. They also use Google Analytics. They implemented a pre-filter applying "scholarly" to all article searches (you need to actively remove the filter after searching in order to see non-peer-reviewed content). Did this change behavior and how people used facets? Since they use the API, they can't tell from the OBI stats, and there was no data in GA to support the idea that this pre-filter change affected facet usage. It appears that people will basically use whatever.

2:00PM - 2:30PM - LEARNING, RESEARCH, AND CAMPUS SOLUTIONS

Naked sales pitch for various products. Barf.

2:30PM - 3:30PM – CUSTOMER SUCCESS

Did not attend.

4:00PM - 4:45PM - ADD INSTITUTIONAL REPOSITORY CONTENT TO PRIMO: A HOW-TO GUIDE

First, determine how the IR can export metadata. Create a scope; it should be "collection". Use the PBO Views wizard to add the scope to the appropriate view.
Create norm rules (definitely use a template, e.g. Dublin Core). Create the pipe, then deploy all.
Check the results: look at the status and view the output even if it didn't get flagged for errors. After you get the data harvested and normalized correctly, schedule the pipe to pull regularly from the IR.

Various crosswalks between metadata standards may be required; these can be solved with norm rules. Making norm rules is an iterative process: just keep tweaking (harvesting, publishing, etc.) until you get it right. See the presentation slides for the nitty-gritty details.

Pro tip: norm things into the appropriate case (upper/lower). The error messages are notoriously unhelpful... good luck! Getting IR data into Primo usually exposes problems with the IR data; take this as an opportunity to improve things in the IR!

Q&A

  • Where are the norm rule templates? There are some included in the OOTB PBO
  • When should we reharvest? Only if you need to pull everything in again; the normal pipe will do the work most of the time.

5:00PM - 5:45PM – CALIFORNIA USER GROUP MEETING (ECAUG)

California is the state with the most Ex Libris customers, and we also serve the most students; both of these "mosts" by quite a lot compared with other states. This is a very recent development; it was not the case two years ago. If we can get our act together, we would have massive voting clout in the NERS and Idea Exchange processes on issues that affect all of us. (It is not obvious that there are CA-specific issues around which to rally, though...)

FRIDAY, MAY 3

9:00AM - 9:45AM - UNDERGRADUATES AND DISCOVERY

Consider how librarians think vs. how undergrads think about various scenarios - we are very different and it can be hard to “unlearn/unknow” things.

Lots of literature and experience support the assertion that undergrads' use of online search tools is almost entirely driven by class assignments. More and more, the expectation is one of efficiency: undergrads don't think they have the time to take deep dives, and when you add library anxiety to this mix there are reasons for a pessimistic outlook. The good news is that there is research demonstrating that if students can find things in the library that meet their needs, they do prefer those sources over others.

Recommended strategies:

  • teach them when to use discovery and when to use specialized databases
  • in instructional sessions have students compare search tools
  • have students compare Google/Bing to discovery; internal ExL research and other studies show that basically no one uses facets (filters) unless they are taught to do so, after which they use them a lot
  • activity: have students imagine they have to make a search engine then brainstorm how relevancy ranking works
  • activity: evaluation jigsaw to discuss results found in discovery
  • explore critical information literacy
  • do citation trail activities to have students understand citation and intellectual debts

9:55AM - 10:40AM – DIY STACK MAPS IN PRIMO

Always good to seek more input - physical wayfinding signage is incredibly important, not all fixes are catalog-based.

Problems with the OOTB RTA display: the library name is superfluous if you aren't in a consortium, and in Primo classic there is no physical location information in the brief display. At Lawrence they have a very idiosyncratic physical layout of collections. Just displaying the floor number (which our Long Beach NUI Primo locations do) is a huge improvement.

They had nice existing floor-plan maps posted physically around the library and PDFs of them online. The maps, based on blueprints, were already sufficiently detailed.

The library name is a required parameter in the PNX and RTA data, so if you don't want it to show to end users you need to use simple display:none CSS, e.g. the snippet below.
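Something like this in the view's custom CSS would do it (the selector is my guess; inspect your own view's markup for the real class name):

    /* Hide the superfluous library name in the holdings/RTA display.
       Selector is hypothetical; check your Primo markup. */
    .library-name {
      display: none;
    }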

In the PBO, the "map" parameter is designed to accept a URL (though this is not obvious from the directions and documentation), and what displays to the end user in Primo is 'Locate'. At Lawrence they had various location cleanup tasks to do as part of this project; not applicable to Long Beach. Configuration in Alma: https://knowledge.exlibrisgroup.com/Alma/Product_Documentation/010Alma_Online_Help_(English)/060Alma-Primo_Integration/060Configuring_Alma_Delivery_System/205Configuring_the_Template_for_the_Location_Map_Link

11:00AM - 11:45AM - EX LIBRIS MANAGEMENT Q&A

Nothing of consequence was asked nor were any answers more than vague corporate-speak.

11:55AM - 12:40PM - IMPROVING THE USER EXPERIENCE IN PRIMO BY OPTIMIZING DATA IN ALMA

UT Dallas had many problems with their migration. They discovered a lot of problems with their MARC/bib records after migration; many small and detailed fixes were required.
Basically the message here is that MARC record quality still matters a lot; only so much can be fixed by changing the normalization rules if the underlying data is inaccurate or incomplete. We ignore and underfund cataloging at our own peril.

1:30PM - 2:15PM - STUCK IN THE MIDDLE WITH YOU (YOUR OPEN URLS)

Their fulfillment unit wanted a "one-button" request solution; they tried to get ExL to build this, to no avail.
To approximate a one-stop shop, they hid all request options in Primo via CSS and forced all request traffic through ILLiad, putting a request link on every physical resource (some links show up where they don't want them, but this is the only way to get the one-stop shop they want).
There were various metadata problems that they "solved" using a man-in-the-middle script that lives on their servers, corrects known problems, and cross-references for more metadata to enrich the request before it gets sent to ILLiad.

Various changes in Alma and Primo, coupled with the move to the NUI, meant that they needed to revisit their script and talk to all staff involved to see what people still wanted. They ended up rewriting the man-in-the-middle script; see the slides for details.
https://github.com/vculibraries/

Unfortunately, we can’t do this at LB because it would cut out CSU+ and Link+.

2:30PM - 3:15PM - MAKING PRACTICAL DECISIONS WITH PRIMO ANALYTICS

Most usability studies on Primo are about the Classic UI, and none of the very few NUI studies mention Primo Analytics. At NDSU and Loyola Chicago they have just focused on basic counts and usage, since that is what is easiest to get out of PA.

Note: the single-letter searches that show up in the PA queries list come from the journal A-Z list; very few people know this.

They recommend that resource recommender tags be updated to reflect the most popular queries that might not be showing the desired results. At Loyola they have triggers for the most-used newspapers and journals to point users to the databases with either the most recent or the most comprehensive content. Maintenance of the resource recommender is a group effort: the experience of R&I librarians who work with the students explains a lot of the queries, and the triggers need to be revisited periodically; they meet quarterly at Loyola.

Loyola had Classic UI and analytics data from before move to new Primo so they compared classic and NUI post-switch to see how the NUI affects behavior. No big changes…

Usability testing supplements PA data and helps you understand it better but it sometimes doesn’t tell the same “story” as the data.

They use Newspapers Search, and in their usability tests no one could correctly search it. Newspapers Search analytics will be available at the end of May.

Problems with PA:

  • inflated popular search totals; SF case 00552481
  • inconsistent user group categorization, SF case 00628836;
  • inconsistent action totals based on where the date column is included in the query/report

Side note: what the fuck, how much are we paying for this?

  • University of New South Wales in AUS has over 20 open cases with ExL about problems in PA.

Supposedly ExL is going to make PA a focus area of development - bottom line is that it needs to be better.

3:30PM - 4:00PM – ELUNA 2019 CLOSING SESSION

Business report: ELUNA has grown tremendously in the past 5 years, which has created unique challenges for the organization and steering committee. ELUNA has revised and clarified its mission statement slightly. 2018 financial summary: ended the year with $315k in the bank. The majority of income comes from the conference, and the majority of expenditures are on the conference.

Added over 200 libraries from 2018 to 2019.

Next year: Los Angeles! (Looks like U$C has their greased palms all over this because no one from the CalState LA basin campuses knew about it.) May 6 - 8