Patient-facing vaccination literature had a Flesch Reading Ease score of 58.4 and a Flesch–Kincaid Grade Level of 8.1, compared with poorer readability scores of 30.7 and 12.6, respectively, for healthcare professional literature. MMR scientific abstracts had the poorest readability (24.0 and 14.8, respectively). Sentence structure was also considered: better readability was correlated with significantly fewer words per sentence and fewer syllables per word.
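As a reminder of how these two metrics are computed, here is a rough sketch of the published Flesch formulas. This is illustrative only: the syllable counter below is a naive vowel-group heuristic of my own, and the study will have used validated tooling.

```python
import re

def count_syllables(word):
    # Naive heuristic: count vowel groups; real tools use pronunciation dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # discount a silent final 'e'
    return max(n, 1)

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch Reading Ease (higher = easier)
    fkgl = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid Grade Level (lower = easier)
    return fre, fkgl
```

Both formulas depend only on words per sentence and syllables per word, which is why the study's correlation with those two quantities follows directly from the metrics' definitions.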
Useful section on the most common reasons why searches were rejected - could form part of a checklist for when we're doing peer review. Added to the Evidence Wiki.
Dear Editor, The collection and analysis of data from open sources have undergone a revolution in recent decades. While it used to be challenging to obtain sufficient information on a particular subject, nowadays the real challenge lies in sorting through the overwhelming amount of available information.
"Our approach to AI should first and foremost be positive, optimistic and professional, guided by our ethics and commitment to empowering our users. We can and must take a lead in defining a benign and beneficial future role for AI in the lives of the communities we serve."
This study seeks to understand the information needs of school nurses by conducting a needs assessment survey within the state of Illinois. A survey was disseminated through three statewide professional listservs to determine the types of care-related questions school nurses ask as part of their regular duties and which resources they use to answer those questions. School nurses’ information needs vary widely, and they rely on numerous sources to answer clinical questions. They are responsible for the well-being of hundreds to thousands of children. While they are comfortable searching for information, they are motivated to further develop research skills.
In November, we held our inaugural gathering, welcoming 20 colleagues from various NHS trusts. Included as a reminder / inspiration in case anyone from our team is going to this, or would consider going.
Conclusion: The results of this study show heightened complexity in ChatGPT-generated SCI texts, surpassing optimal health communication readability. ChatGPT currently cannot substitute comprehensive medical consultations. Enhancing text quality could be attainable through dependence on credible sources, the establishment of a scientific board, and collaboration with expert teams. Addressing these concerns could improve text accessibility, empowering patients and facilitating informed decision-making in SCI.
In summary, despite errors and miss rates with the current platform, systematic literature search using AI appears very promising, eliminating hours of human labor while improving search quality. As AI technology continuously evolves, efforts to refine and improve AI-based literature search platforms should be continued.
Results: The 100 systematic review articles contained 453 database searches. Only 22 (4.9%) database searches reported all six PRISMA-S items. Forty-seven (10.4%) database searches could be reproduced within 10% of the number of results from the original search; 6 searches differed by more than 1000% between the originally reported number of results and the reproduction. Only one systematic review article provided the necessary search details to be fully reproducible.
Results
We included 79 studies and identified themes, including question realism, answer reliability, answer utility, clinical specialism, systems, usability, and evaluation methods. Clinicians’ questions used to train and evaluate QA systems were restricted to certain sources, types and complexity levels. No system communicated confidence levels in the answers or sources. Many studies suffered from high risks of bias and applicability concerns. Only 8 studies completely satisfied any criterion for clinical utility, and only 7 reported user evaluations. Most systems were built with limited input from clinicians.
Discussion
While machine learning methods have led to increased accuracy, most studies imperfectly reflected real-world healthcare information needs. Key research priorities include developing more realistic healthcare QA datasets and considering the reliability of answer sources, rather than merely focusing on accuracy.
A perceived lasting legacy of the Covid-19 pandemic is that more information literacy instruction now happens online than before the pandemic. This includes ongoing adoption of synchronous instruction in course-based and co-curricular contexts, and sustained integration of asynchronous learning resources, either standalone or as fundamental elements of a growing, more modular and scalable approach to information literacy instruction. At the same time, in-person information literacy instruction has by no means been forgotten: all OCUL libraries offered a majority of instruction this way by Fall 2022, once pandemic restrictions eased. Nevertheless, the pandemic has left lasting changes in how librarians teach, and in the collaborative partnerships shaping this instruction, which increasingly draws on a broader range of modalities to offer students a more flexible learning environment.
Conclusion: Innovations by libraries during the early stages of the pandemic are having a long-term impact on library culture and the delivery of services. Even as libraries returned to in-person services, elements of telecommuting,
Results: A total of 209 reviews were found and analyzed. Of these, 28% had a librarian co-author, 41% named a librarian in the acknowledgements section, and 78% mentioned the contribution of a librarian within the body of the review. However, mentions of a librarian within the review were often generic (“a librarian”), and in 31% of all reviews analyzed no librarian was specified by name. In 9% of the reviews, no reference to a librarian was found at all.
Conclusions: Even among this set of reviews, where librarian involvement was specified at the protocol level, librarians’ contributions were often described with minimal, or even no, language in the final published review. Much room for improvement appears to remain in terms of how librarians’ work is documented.
Two health sciences librarians created search strategies for these questions and searched eleven databases. Both the librarians and the six participants evaluated the search results using a rubric based on PICO to assess the extent of alignment between the librarians’ and requestors’ relevance judgments. Intervention, Outcome, and Assessment Method constituted the most frequent bases for assessments of relevance by both librarians and participants. The librarians were more restrictive in all of their assessments except in a preliminary search yielding twelve citations without abstracts. The study’s results could be used to identify effective techniques for reference interviewing, selecting databases, and weeding search results.
Conclusion: The deliberate inclusion of a health sciences librarian into the doctor of pharmacy curriculum can benefit faculty and students. Opportunities for collaboration are available throughout the curriculum, such as providing instruction for database utilization and supporting the research activities of both faculty and student pharmacists.
The FAC (Focus, Amplify, Compose) rubric for assessing medical students’ question formulation skills normally accompanies our Evidence Based Practice (EBP) training. The combined training and assessment rubric has improved student scores significantly. How much does the rubric itself contribute to improved student scores? This study sought to measure student improvement using the rubric either with or without a linked 25-minute training session. To read the full article, choose Open Athens “Institutional Login” and search for “Midlands Partnership”.
Discussion
Libraries should consider buying quick-reference and large, heavy textbooks as ebooks, and pocket-sized or shorter, single-topic titles in print.
Although search engines sometimes highlight specific search results relevant to health, many resources remain underpromoted.5 AI assistants may bear a greater responsibility to provide actionable information, given their single-response design. Partnerships between public health agencies and AI companies must be established to promote public health resources with demonstrated effectiveness. For instance, public health agencies could disseminate a database of recommended resources, especially since AI companies potentially lack the subject matter expertise to make these recommendations, and these resources could be incorporated into fine-tuning responses to public health questions. New regulations, such as limiting liability for AI companies that implement these recommendations (they may not be protected by 47 US Code § 230), could encourage AI companies to adopt government-recommended resources.
A growing body of research demonstrates that adapting the popular entertainment activity “escape rooms” for educational purposes as an innovative teaching method can improve the learning experience. Escape rooms promote teamwork, encourage analytical thinking, and improve problem solving. Despite the increasing development and use of escape rooms in health sciences programs and academic libraries, there is little literature on the use of this method in health sciences libraries with health professions students.
"[R]ecently I’ve enjoyed developing our Health and Wellbeing collection, creating some additional resources in the form of wellbeing bags for staff to borrow." This is a short mention in this blogpost - just wondered if it's something we could think about?
Conclusions: For SRs on SMT, we recommend using the combination suggested by the Cochrane Handbook of Cochrane Library, MEDLINE/PubMed, Embase, and in addition, PEDro and Index to Chiropractic Literature. Google Scholar might be used additionally as a tool for searching gray literature and quality assurance.
ChatGPT provides different answers to similar questions depending on the prompts, and patients may not have the expertise in prompting ChatGPT to elicit the best answer. (Prompting large language models has been shown to be a skill that can be improved.) Of greater concern, ChatGPT fails to provide sources or references for its answers. At present, ChatGPT cannot be relied upon to address patient questions; it will improve in the future. For now, physician expertise is required to interpret AI answers for patients.
A scoping review to determine how health service librarians instruct practicing clinicians and health sciences faculty in support of their continuing education.
We examined how feelings shape people’s organizing and deleting practices, focusing on four affective aspects: anxiety, self-efficacy, belonging, and loss of control. We hypothesized that these affective aspects would predict the extent to which people utilize organizing and deleting practices. Access via CILIP subscription
On 1 August, Dutch publishing giant Elsevier released a ChatGPT-like artificial-intelligence (AI) interface for some users of its Scopus database, and British firm Digital Science announced a closed trial of an AI large language model (LLM) assistant for its Dimensions database. Meanwhile, US firm Clarivate says it’s working on bringing LLMs to its Web of Science database.
This article provides a brief overview of the capabilities of ChatGPT for medical writing and its implications for academic integrity. It lists AI generative tools and their common uses in medical writing, along with tools for detecting AI-generated text. It offers recommendations for policymakers, information professionals, and medical faculty on the constructive use of AI generative tools and related technology, and highlights the role of health sciences librarians and educators in deterring students from submitting ChatGPT-generated text as their own academic work.
Conclusion
Grammarly was unexpectedly the most effective of the tools at detecting plagiarism in AI-generated articles. This may be because the different software tools draw on diverse data sources. It highlights the potential for lower-cost plagiarism detection tools to be used by researchers.
Short, pithy, and practical article about the uses, and pitfalls, of AI. It includes some helpful suggestions about how to start using it, and some of the issues to look out for.
This article describes how the library evidence team became part of a wider board project to develop a governance system for Apps. It also describes how the skills of librarians can be developed to work in this area and raise the profile of the team within the board.