Conclusion: Innovations by libraries during the early stages of the pandemic are having a long-term impact on library culture and the delivery of services. Even as libraries returned to in-person services, elements of telecommuting have persisted.
Results: A total of 209 reviews were found and analyzed. Of these, 28% had a librarian co-author, 41% named a librarian in the acknowledgements section, and 78% mentioned the contribution of a librarian within the body of the review. However, mentions of a librarian within the review were often generic (“a librarian”) and in 31% of all reviews analyzed no librarian was specified by name. In 9% of the reviews, there was no reference to a librarian found at all.
Conclusions: Even among this set of reviews, where librarian involvement was specified at the protocol level, librarians’ contributions were often described with minimal, or even no, language in the final published review. Much room for improvement appears to remain in terms of how librarians’ work is documented.
Two health sciences librarians created search strategies for these questions and searched eleven databases. Both the librarians and the six participants evaluated the search results using a rubric based on PICO to assess the extent of alignment between the librarians' and requestors' relevance judgments. Intervention, Outcome, and Assessment Method were the most frequent bases for relevance assessments by both librarians and participants. The librarians were more restrictive in all of their assessments except in a preliminary search yielding twelve citations without abstracts. The study's results could be used to identify effective techniques for reference interviewing, selecting databases, and weeding search results.
Conclusion: The deliberate inclusion of a health sciences librarian in the doctor of pharmacy curriculum can benefit faculty and students. Opportunities for collaboration exist throughout the curriculum, such as providing instruction in database use and supporting the research activities of both faculty and student pharmacists.
The FAC (Focus, Amplify, Compose) rubric for assessing medical students' question formulation skills normally accompanies our Evidence Based Practice (EBP) training. The combined training and assessment rubric has improved student scores significantly. How much does the rubric itself contribute to improved student scores? This study sought to measure student improvement using the rubric either with or without a linked 25-minute training session. To read the full article, choose Open Athens "Institutional Login" and search for "Midlands Partnership".
Discussion
Libraries should consider buying quick-reference and large, heavy textbooks as ebooks, and pocket-sized or shorter single-topic titles in print.
Although search engines sometimes highlight specific search results relevant to health, many resources remain underpromoted.5 AI assistants may have a greater responsibility to provide actionable information, given their single-response design. Partnerships between public health agencies and AI companies must be established to promote public health resources with demonstrated effectiveness. For instance, public health agencies could disseminate a database of recommended resources, especially since AI companies potentially lack the subject matter expertise to make these recommendations themselves, and these resources could be incorporated into fine-tuning responses to public health questions. New regulations could encourage adoption of government-recommended resources, such as limiting liability for AI companies that implement these recommendations, since they may not be protected by 47 US Code § 230.
A growing body of research demonstrates that adapting the popular entertainment activity “escape rooms” for educational purposes as an innovative teaching method can improve the learning experience. Escape rooms promote teamwork, encourage analytical thinking, and improve problem solving. Despite the increasing development and use of escape rooms in health sciences programs and academic libraries, there is little literature on the use of this method in health sciences libraries with health professions students.
"[R]ecently I’ve enjoyed developing our Health and Wellbeing collection, creating some additional resources in the form of wellbeing bags for staff to borrow." This is a short mention in this blogpost - just wondered if it's something we could think about?
Conclusions: For SRs on SMT, we recommend the combination suggested by the Cochrane Handbook (Cochrane Library, MEDLINE/PubMed, and Embase), supplemented by PEDro and the Index to Chiropractic Literature. Google Scholar might additionally be used as a tool for searching gray literature and for quality assurance.
ChatGPT provides different answers to similar questions depending on the prompts, and patients may not have the expertise in prompting ChatGPT to elicit the best answer. (Prompting large language models has been shown to be a skill that can improve.) Of greater concern, ChatGPT fails to provide sources or references for its answers. At present, ChatGPT cannot be relied upon to address patient questions; in the future, ChatGPT will improve. Today, AI requires physician expertise to interpret AI answers for patients.
A scoping review to determine how health service librarians instruct practicing clinicians and health sciences faculty in support of their continuing education.
We examined how feelings shape people’s organizing and deleting practices, focusing on four affective aspects: anxiety, self-efficacy, belonging, and loss of control. We hypothesized that these affective aspects would predict the extent to which people utilize organizing and deleting practices. Access via CILIP subscription
On 1 August, Dutch publishing giant Elsevier released a ChatGPT-like artificial-intelligence (AI) interface for some users of its Scopus database, and British firm Digital Science announced a closed trial of an AI large language model (LLM) assistant for its Dimensions database. Meanwhile, US firm Clarivate says it’s working on bringing LLMs to its Web of Science database.
This article provides a brief overview of the capabilities of ChatGPT for medical writing and its implications for academic integrity. It lists AI generative tools, describes their common uses in medical writing, and lists AI-generated text detection tools. It also offers recommendations for policymakers, information professionals, and medical faculty on the constructive use of AI generative tools and related technology, and highlights the role of health sciences librarians and educators in discouraging students from generating text through ChatGPT in their academic work.
Conclusion
Unexpectedly, Grammarly was the most effective of the tools at detecting plagiarism in AI-generated articles. This could be because the different tools draw on diverse data sources. The finding highlights the potential for lower-cost plagiarism detection tools to be used by researchers.
Short, pithy, and practical article about the uses, and pitfalls, of AI. It includes some helpful suggestions about how to start using it, and some of the issues to look out for.
This study aimed to examine health information seeking attitudes and behaviors in an academic-based employee wellness program before and after health literacy workshops were developed and facilitated by an academic health sciences librarian.
To read the full article, choose Open Athens “Institutional Login” and search for “Midlands Partnership”.