I was presenting in the first set of parallel sessions, which was great as it meant my nerves didn’t have a chance to catch hold and it gave me a way in to network with strangers (not my strong point) for the rest of the conference. I was presenting in the Richard Burton Lecture Theatre (intimidating or what!?) with Irma Klerings from Cochrane Austria. Irma and colleagues looked at the impact of searching fewer databases in rapid reviews and compared the conclusions reached by rapid and full systematic reviews. Her three conclusions were:
- If decision makers are willing to accept less certainty and a small risk for opposite conclusions, some abbreviated searches are viable options for rapid evidence synthesis.
- Decisions demanding high certainty require comprehensive searches.
- The impact of abbreviated searches depends on the type of intervention, the “size” of the topic, and the definition of “changed conclusion”.
She also found that limiting the number of databases was more suitable for reviews of pharmacological interventions, while public health topics were more liable to change conclusions if fewer databases were searched – very pertinent information for us at the School.
As was also drawn out in the Q&A afterwards…
Many of the references indexed in the databases but which were not retrieved by the search did not have abstracts — i.e. they are there, but the information that allows us to find them is not. Discoverability a problem. #EAHIL2018
— Librarian Errant (@librarianerrant) July 11, 2018
Her study is just published, see Nussbaumer-Streit, B., et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol 2018; 102: 1-11. doi: 10.1016/j.jclinepi.2018.05.022.
Then it was my turn. I presented on work Kim Coles and I did, looking at the quality and reporting standards of systematic reviews published by School authors.
@falkie71 about the quality of SRs of the publications of the @lshtm #EAHIL2018
— Alicia Fátima Gómez (@fagomsan) July 11, 2018
What Alicia doesn’t mention is that our work was inspired by work she presented at EAHIL 2016. It was great to meet her and compare notes.
As you can see from Amanda’s tweet, the stage and my slides were huge. The lecture theatre was also really full, which was terrifying and pleasing in equal measure.
Presenter @falkie71 hoped to undertake research finding her institution conducted systematic reviews to a higher standard than the average but was a little disappointed with the results! 🙊
Thank you so much for humbly sharing your experiences, Jane!#EAHIL2018 pic.twitter.com/Gra0UL4hhh
— Amanda Wanner (@v_woolf) July 11, 2018
Another advantage of being in the first parallel session of the day is that you can claim the conference’s first cat photo.
#EAHIL2018 first cat photo of the conference go Jane
— Jess E-C (@twh1976) July 11, 2018
As the third presenter pulled out due to illness, we ended up having 30 mins of Q&A after my session, which prompted a lively discussion about how we, as librarians, could improve quality and reporting of search strategies – in the venue and also on twitter.
To my mind, is it a fundamental problem with perception: they just don’t perceive searching as science? #EAHIL2018
— Librarian Errant (@librarianerrant) July 11, 2018
Agree. The issue is not the reporting of the search, but that the search process is not robust or rigorous – and researchers don’t recognise it’s a problem #EAHIL2018 https://t.co/tIfkMT86TA
— Paul Cannon (@pcann_LIS) July 11, 2018
It’s all about rigour. Researchers wouldn’t accept less than rigorous methods in lab/field work, why then accept less than that when undertaking a systematic review search? #EAHIL2018
— Paul Cannon (@pcann_LIS) July 11, 2018
Some interesting points raised in @falkie71 presentation and in q’s: there is a real confusion among researchers about how truncation works, even after receiving instruction and guidance. What in particular is making this so complicated to understand?#eahil2018
— Amanda Wanner (@v_woolf) July 11, 2018
A peer-reviewed librarian kite mark for systematic review search strategies – I like that idea! #EAHIL2018
— Paul Cannon (@pcann_LIS) July 11, 2018
Thank you to everyone who asked questions or spoke to me afterwards; I’m pleased so many delegates found my presentation interesting.
Thank you to everyone who came to my talk at #EAHIL2018 It was great to see a packed lecture theatre and so many people willing to be involved in improving #sysrev literature searches
— Jane Falconer (@falkie71) July 11, 2018
After lunch I listened to Caroline de Brun from Public Health England talk about information needs of health professionals working in humanitarian settings. She carried out work with HIFA (Health Information For All), which is published on the HIFA website (click on the publications tab). Ruth managed to capture a useful summary slide.
Some interesting observations on evidence in the humanitarian sector #EAHIL2018 pic.twitter.com/TKIRYvS7PL
— Ruth Jenkins (@Kangarooth) July 11, 2018
This will provide useful background to the ongoing discussions we are having in the Library with Public Health England around how we can support the UK Rapid Response Team. Caroline also mentioned a resource which was new to me: Medbox. I’ll have to check it out and maybe add it to our list of databases and literature sources.
Next up was Sarah Young from Carnegie Mellon University, talking about how she supports systematic review capacity building in LMICs.
Welcome to Sarah Young, her first @EAHIL conference #EAHIL2018 pic.twitter.com/YB0WM4g168
— Mala Mann (@SysReviews) July 11, 2018
Although her work was centred on LMICs, she made the good point that resource-limited settings can exist in any country if a researcher doesn’t have access to a well-stocked library. Researchers in LMICs often need to write their own systematic reviews, as most existing reviews are not suitable for their situation, although Sarah highlighted that Cochrane and the EPPI-Centre are now including low-resource settings in their reviews, which is excellent news.
#EAHIL2018 some initiatives for capacity building for #sysrev in LMIC pic.twitter.com/Os0yMstK0M
— Jane Falconer (@falkie71) July 11, 2018
Sarah and colleagues visited researchers in Africa (sorry, I didn’t note down where) and conducted face-to-face training over five afternoons. Not surprisingly, they found this wasn’t long enough.
Interesting slide from Sarah Young #EAHIL2018 #systematicreviews pic.twitter.com/eLlyYH85zf
— Mala Mann (@SysReviews) July 11, 2018
She ended her talk by calling for a mentorship scheme for librarians in LMICs, something I’m interested in setting up in the UK.
Sarah Young advocating for mentorship in #sysrev librarianship, particularly for librarians in LMICs #EAHIL2018
— Jane Falconer (@falkie71) July 11, 2018
At the end of the day I gatecrashed the Public Health Special Interest Group meeting where I discovered PubMed is going through a complete overhaul of the backend.
New Pubmed 2.0 coming out. Pubmed labs is place to play with it and send feedback. Not all functionality there yet, but @nlm_news keen to get feedback from #medlibs #eahil2018
— Jane Falconer (@falkie71) July 11, 2018
#eahil2018 pubmed is changing, have a look at beta versionhttps://t.co/noq7rHYp3p feedback to them
— Alison Bethel (@AlisonBethel) July 11, 2018
Day two started with a wellbeing morning. I opted to visit Cardiff University special collections and was fascinated by an old European atlas – in particular the map of Scotland.
Saw an old European atlas (can’t remember the date) but it shows us highlanders are hardy folk – no shoes and semi-naked in the north of Scotland. Putting Orkney in a box is also nothing new. #eahil2018 #rarebooks pic.twitter.com/AUVRHll9yr
— Jane Falconer (@falkie71) July 12, 2018
But no rest for the wicked, after lunch we were back to the conference proper.
Alicia Gomez-Sanchez and Rebeca Isabel-Gomez gave a presentation called ‘Rapid Reviews drive us crazy!’ They tried to find out whether there was any consensus in recommendations on how to do one. As you can see from the photo in the tweet below, the answer was ‘no’.
Table shows lack of consistency from guideline producers for conducting #RapidReviews #Rapidreviewsdriveuscrazy #EAHIL2018 pic.twitter.com/G0UH8W9xWD
— Morwenna Rogers (@Morwenna73) July 12, 2018
Their conclusions show that this field is a bit of a methodological mess.
Conclusions and reflections of the current state of #rr standards #EAHIL2018 pic.twitter.com/ATe2GMB2Gq
— JolandaE (@jolanda_hains) July 12, 2018
Next up were Floriane Muller and Pablo Iriarte, who tried to determine how much of PubMed was held in full text by their library and how much was available open access. To do this, they downloaded PubMed – yes, you read that right – and used PMIDs to compare their local holdings and open access holdings.
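The comparison they describe boils down to set intersection over PMIDs. Here’s a minimal sketch of the idea using made-up PMIDs and variable names (the real study worked over the full PubMed download, not five records):

```python
# Hypothetical PMID sets; in the real study these came from a full
# PubMed download, the library's holdings data, and open access lists.
pubmed_pmids = {"100", "101", "102", "103", "104"}
local_fulltext = {"100", "102"}
open_access = {"100", "103"}

# Records indexed in PubMed that are held locally or available OA.
in_local = pubmed_pmids & local_fulltext
in_oa = pubmed_pmids & open_access

print(f"{len(in_local) / len(pubmed_pmids):.1%} held locally")   # 40.0% held locally
print(f"{len(in_oa) / len(pubmed_pmids):.1%} open access")       # 40.0% open access
```

The same intersection logic scales to millions of PMIDs, which is presumably why downloading the whole database and matching identifiers locally was a practical approach.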
Anybody interested to benchmark? @pablog_ch @Flor__Mu #EAHIL2018 pic.twitter.com/b7P7LAEWkX
— JolandaE (@jolanda_hains) July 12, 2018
#eahil2018 25.2 percent of content are available in open access in Pubmed. Embargo dates impact on this
— Jess E-C (@twh1976) July 12, 2018
To be honest, both were a higher proportion than I thought.
The final presentation of the session was Andrew Booth asking ‘How many search results are enough… and what can we do about it?’
I found this really thought provoking as it tied into questions I was asking in my presentation.
.@AndrewB007h highlighting the requirements of efficiency for searching not just completeness. #EAHIL2018
— Jane Falconer (@falkie71) July 12, 2018
His argument was that as well as being complete, we need to be efficient, and he came up with his own metric – number needed to read – to measure this. It is the total number of items screened divided by the number included in the review. He argued that as there are no benchmarks for searching, we all do it slightly differently; therefore the individual or institution carrying out the search has as big an impact on the results retrieved as the topic does.
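The metric itself is simple arithmetic. A quick sketch, using hypothetical screening numbers for illustration:

```python
def number_needed_to_read(total_screened: int, included: int) -> float:
    """Number needed to read: total items screened divided by the
    number of items included in the final review. Lower is more
    efficient searching."""
    if included == 0:
        raise ValueError("a review must include at least one record")
    return total_screened / included

# Hypothetical example: a team screens 4,500 records and includes 30.
print(number_needed_to_read(4500, 30))  # 150.0
```

So a review that screens 4,500 records to include 30 has read 150 items for every one that made it in; the variation Booth reported suggests different teams land at wildly different points on that scale.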
.@AndrewB007h argues that the person/institution doing the review is as important tothe results as the topic. No consistency amongst us all. #EAHIL2018
— Jane Falconer (@falkie71) July 12, 2018
He found huge variation in the number needed to screen, from 3306 (!!!) to 7, and that systematic reviews with an information professional doing the searching typically end up with three times more results to screen.
#EAHIL2018 @AndrewB007h pic.twitter.com/OxRBB37aim
— JolandaE (@jolanda_hains) July 12, 2018
@AndrewB007h used a small sample of only 5 articles per institution, so the numbers could be a bit skewed #EAHIL2018 pic.twitter.com/JXMkpwk4bA
— JolandaE (@jolanda_hains) July 12, 2018
Unsurprisingly to me, he found that public health reviews had many more references to screen, and a higher than average number needed to read. Put that together with Irma’s finding from day one, that public health reviews are more likely to draw inaccurate conclusions if a rapid review methodology is used, and we can see where there may be issues.
His conclusions give food for thought.
Implications: no broad agreement on norms @AndrewB007h #EAHIL2018 pic.twitter.com/Y8Eul3gm87
— JolandaE (@jolanda_hains) July 12, 2018
To take to the gala dinner tonight: @AndrewB007h #EAHIL2018 pic.twitter.com/qeFzsEG9CA
— JolandaE (@jolanda_hains) July 12, 2018
Note that, if it’s filled in correctly, the PRISMA diagram should document this process.
Friday started with a morning Wikipedia editathon. This is something I’ve always fancied having a go at, but I’d been a bit hesitant to try. It’s really easy! I’m thinking of putting together something at LSHTM now.
Here’s Ruth from Edinburgh Uni telling us why they decided to introduce it there.
“We all know our students are using Wikipedia & we’re all using it ourselves so I think being familiar with it is really important and a really important part of information literacy”#EdinburghUni librarian @kangarooth is running a Wikipedia editathon at #EAHIL2018 this morning. pic.twitter.com/ZKgOdFdBZX
— Ewan McAndrew (@emcandre) July 13, 2018
And here’s me having a go.
Big thanks to all the attendees at our #EAHIL2018 Wikipedia micro-editathon workshop.
14 editors, making 39 edits, on 11 articles 🤗 hope you’re feeling inspired to run your own events in your institutions and countries! pic.twitter.com/Of9MTAy6NI— Ruth Jenkins (@Kangarooth) July 13, 2018
Finally, I went to hear Anne Brice talk about Knowledge Management in Global & Disaster Health. This is an area we’re increasingly involved in, as noted above, so I found this really interesting. Sendai has now made it onto my ‘to read’ list.
.@annebriceuk asking what #medlibs are doing to support #Sendai #EAHIL2018 pic.twitter.com/2F0HBCWPe9
— Jane Falconer (@falkie71) July 13, 2018
See https://t.co/AswKIoTAew for more info #eahil2018
— Jane Falconer (@falkie71) July 13, 2018
The conference ended with the announcement that my presentation won Best Presentation for a first time attendee.
Best 1st time attendee oral presentation:⁰Quality & reporting of literature search strategies in systematic reviews published by London School of Hygiene & Tropical Medicine affiliated authors: an assessment using PRISMA, AMSTAR & PRESS criteria
Falconer J, Coles K— EAHIL (@EAHIL) July 13, 2018
I was a bit flabbergasted and rather chuffed. Thank you to the EAHIL organisers for voting for me and for the prize.
The best winners :) and a moment for a photographer! #eahil2018 pic.twitter.com/guF2gJMT70
— Paulina Milewska (@MilewskaPaula) July 13, 2018
As I tweeted at the end…
Thanks to everyone at #eahil2018 for making me feel so welcome, I had a great time. But now I’m knackered #tired
— Jane Falconer (@falkie71) July 13, 2018
A round-up of blog posts is being published and will be added to as they become available.
#EAHIL2018 Blog posts round-up!https://t.co/FfKhGqLpq9
Thanks to blog posts shared by delegates, we can catch up on parallel sessions or workshops we missed & get new insights.
As we find more posts, we will update the list & if you don’t see yours here, please tell us!— EAHIL (@EAHIL) July 23, 2018
All the slides etc will be available on the EAHIL website next week.