
Time | Item | Who | Notes
10:07 AM | Blacklight Sprint Update | Emily

Just finished a sprint - sprints are 2-week development cycles

  • Focused on search functionality - both basic and advanced searching
  • Making customizations and changes to the boosting structure in Solr, the underlying index that Blacklight displays content from (a sketch of what such boost parameters look like follows below)
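
To make the boosting discussion concrete, here is a minimal sketch of a Solr eDisMax query with per-field boosts, the general mechanism this kind of tuning uses. The core name ("catalog"), field names, and boost values are all hypothetical; the actual boosting structure lives in the Blacklight/Solr configuration.

```python
import requests

# Hypothetical Solr core name; a real Blacklight install points at its own core.
SOLR_URL = "http://localhost:8983/solr/catalog/select"

def search(query, rows=10):
    """Run an eDisMax query with per-field boosts.

    Field names and boost values are illustrative only; tuning these
    weights is the kind of change the sprint is making.
    """
    params = {
        "q": query,
        "defType": "edismax",
        # Weight title matches above author matches, and both above full text.
        "qf": "title_tsim^100 author_tsim^50 all_text_timv^10",
        # Phrase boost: reward documents whose title contains the whole query as a phrase.
        "pf": "title_tsim^500",
        "rows": rows,
        "wt": "json",
    }
    resp = requests.get(SOLR_URL, params=params)
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

for doc in search("maus"):
    print(doc.get("id"), doc.get("title_tsim"))
```

Raising the phrase boost (pf) on the title field is one common way to push known-item matches toward the top of the results.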
10:10 AM | Test Queries for Relevancy | Emily et al.

Listing test queries (/wiki/spaces/SAD/pages/87326817) and noting the expected results

  • Can then hand off to developers who can modify boost settings so that those results get to the top
  • Example: known-item searching, which is not working well in Primo
    • Complicated by different versions, adaptations, editions, etc.
  • First step is to document the different types of queries and the results that we expect (see the test-harness sketch after this list)
  • What are some examples of conflicting results?
    • For some items, there might be multiple records for different versions (e.g., physical, electronic, video)
    • Journals - most have a primary match, but there are variations on the name (e.g., Journal of Physical Review A, Journal of Physical Review B)
    • Author search - does the patron want books by that author or about that author?
    • Title search - patron does a search on "Maus" as a title, but there are a lot of authors whose last names contain "Maus"
    • What would we expect when entering a search for "Philadelphia"?
    • Emily will put together a small team to review these queries
  • Do we want test queries that we know will be successful? Should we focus on more streamlined queries, or on outliers?
  • Would it also be helpful to think about what metadata in the record could help the patron drill down? How many facets away?
    • Are there ways other than ranking that might be helpful to patrons
  • We also have Summon raw search data that we can use to draft some queries
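
As a sketch of how the documented queries could be exercised automatically, the snippet below runs each test query against Solr and checks whether the expected record appears in the top results. The endpoint, query strings, and record IDs are placeholders; the real cases would come from the test-queries wiki page.

```python
import requests

SOLR_URL = "http://localhost:8983/solr/catalog/select"  # hypothetical endpoint

# Hypothetical test cases mapping a query to the record ID we expect near
# the top; the real list would come from the documented test queries.
TEST_QUERIES = {
    "maus": "9912345678",               # known-item title search
    "physical review a": "9911111111",  # journal with near-duplicate titles
}

TOP_N = 5  # how deep in the results we are willing to look

def top_ids(query, n=TOP_N):
    params = {"q": query, "rows": n, "fl": "id", "wt": "json"}
    resp = requests.get(SOLR_URL, params=params)
    resp.raise_for_status()
    return [doc["id"] for doc in resp.json()["response"]["docs"]]

for query, expected in TEST_QUERIES.items():
    ids = top_ids(query)
    status = "PASS" if expected in ids else "FAIL"
    print(f"{status}  {query!r}: expected {expected}, got {ids}")
```

A harness like this lets developers re-run the whole query set after each boost change and see immediately which expectations regressed.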
10:20 AM | Primo Analytics | Emily

Primo Analytics tool - similar to the Alma Analytics tool

  • Popular search report - top queries in the system - some might be librarians/library staff running tests
  • Sample of the available data:
    • In Analytics, you can see what facets are being used
    • Can also get user group if the person is logged in
    • All search data is anonymized
    • Could possibly exclude the library staff user group from the logged-in searches (a filtering sketch follows this list)
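
One way the staff-exclusion idea could work on an exported report: filter rows by user group before counting top queries. This is a sketch only; the file name, column names, and group label are assumptions about what a Primo Analytics CSV export might contain.

```python
import csv
from collections import Counter

# Assumed column names and group label for a Primo Analytics CSV export;
# the actual export headers may differ.
USER_GROUP_COL = "User Group"
QUERY_COL = "Search Query"
STAFF_GROUPS = {"Library Staff"}

def top_patron_queries(path, n=20):
    """Count the most frequent queries, skipping staff user groups.

    Rows with no user group (anonymous searches) are kept, since only
    logged-in searches carry a group label.
    """
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get(USER_GROUP_COL, "") in STAFF_GROUPS:
                continue
            counts[row[QUERY_COL].strip().lower()] += 1
    return counts.most_common(n)

for query, count in top_patron_queries("primo_popular_searches.csv"):
    print(f"{count:6d}  {query}")
```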
10:35 AM | User Survey | Cynthia et al.

Put the questions into SurveyMonkey

Question of "Collections" - we highlight SCRC, but we have other collections as well

  • Should we make this broader?
  • Listed Special Collections Research Center
  • We're already making a lot of assumptions about what we think of as important.

Help with Research

  • This is too broad - could cover contacting a librarian, research guides, libchat
  • We can ask the testers how they interpret "Help with Research"
  • Split this into two items: "Contact a librarian" and "Research Guides"

Testing the survey

  • Will there be a debriefing with the testers to ask if these options make sense?
  • We talked about testing with student workers and we can still do that
  • Instead of using a ranking, we can use a rating scale - this might be less confusing to users
  • Survey is available here: https://www.surveymonkey.com/r/W3FVC5P

Added/modified questions:

  • Added: Last time you visited the library website, what did you do while you were there?
  • Added: Please enter your email address if you are interested in participating in user testing with Temple University Libraries.
  • Modified: The list of options is now a rating scale, not a ranking
  • Modified: The free-form question about other features or services: reworded so that it is not a yes/no question

Action items