Artificial Intelligence (AI) Questions and Answers

The ISMPP Artificial Intelligence (AI) Task Force provides answers to questions received from ISMPP members. Below are questions and answers from the AI plenary session and workshops at the ISMPP 20th Annual Meeting, April 28-May 1, 2024, and the public ISMPP U webinar, "The AI Evolution: Medical Communications in the Digital Age," held on June 12, 2024. Email [email protected] to submit your questions related to AI in medical publications/communications.

AI in Medical Publications/Communications Development

Q: What is the best way for small medical communication groups to begin learning how to incorporate AI to write first drafts?
A: We suggest starting slowly. Try AI on discrete manuscript or abstract sections, such as background, introduction (non-proprietary information for "external" AI), methods, or other sections (with "internal" or private AI systems). Experiment with AI prompting to generate summaries of published literature, and play with the output enough to understand its nuances before attempting a full manuscript. Practice on published manuscripts.

Q: Is there any validity to the idea that you could rewrite a manuscript that was drafted with AI to the point that it was free of AI influence?
A: No. If you significantly rewrote a draft from any source, you would disclose the original source, regardless of whether it was human or AI.

Q: Have you received any feedback on a document created using AI from the end user/audience (i.e., publishers, patients, regulators)?
A: ISMPP member research has compared the readability of AI- and human-generated documents. Most platforms can generate output comparable to that of humans, but AI still makes mistakes and therefore necessitates a human in the loop.

Q: How can we use AI day-to-day without negatively affecting data privacy or security? What are the safeguards for protecting patient privacy or proprietary data when using an AI model?
A: Ensure you understand the copyright, data privacy, and security aspects of the AI system you are using, as well as your company's AI policies and the data privacy considerations of any parties contributing information that will be processed by the AI.

Q: If your source uses AI, should it be reported? For example, using the ConcertAI database to write an RWE paper.
A: Yes. In this example, the use of ConcertAI as a data source should be described in the methods.

Q: We're struggling with copyright challenges in using AI to write summaries of open access and published data. Is the workaround partnerships with the journal? Are there other things we're missing?
A: When summarizing content, the same copyright rules apply to the accountable human, regardless of whether the summary was written entirely by a human or with AI assistance.

Q: Is it safe to enter an unpublished manuscript into an AI tool for creating a plain language summary (PLS)?
A: It is safest to keep unpublished manuscripts within the firewalled systems of your company. While some public AI systems permit restricting the system from training on user data, the risk of leaking non-public data is higher with these systems than with those behind your company's firewall.

Q: Could you please provide a guide on how to develop a prompt for PLS publication?
A: Different prompting strategies (simple prompts, iterative prompts, conversational prompting, and chain-of-thought/tree-of-thought approaches) may be better suited to different tasks. Conversational prompting may be a useful approach for people who are not confident with prompt generation. Consider identifying your target audience in your prompt; however, different platforms will require different prompts and strategies. Refining your prompt using an LLM or a prompt-writing tool may also be helpful. The supporting information of this publication contains two example prompts for PLS generation: https://onlinelibrary.wiley.com/doi/10.1002/cesm.12041
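To make the conversational approach concrete, below is a minimal sketch of an iterative PLS prompt sent through the OpenAI Python client. The model name, audience description, and prompt wording are illustrative assumptions, not ISMPP recommendations; the same pattern applies to other LLM platforms.

```python
# Minimal sketch: conversational/iterative PLS drafting with an LLM API.
# Assumptions (not from the source): the OpenAI Python client (openai>=1.0),
# the "gpt-4o" model name, and all prompt wording below.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Identify the target audience in the prompt, as suggested above.
system_msg = (
    "You are a medical writer producing plain language summaries (PLS) "
    "for patients with no medical training. Write at a 6th-8th grade "
    "reading level and avoid jargon."
)

abstract = "..."  # a published abstract; never paste non-public data into a public tool

messages = [
    {"role": "system", "content": system_msg},
    {"role": "user", "content": f"Summarize this abstract as a 150-word PLS:\n{abstract}"},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
draft = reply.choices[0].message.content
print(draft)

# Conversational prompting: continue the dialogue to refine the draft
# instead of starting over with a new, longer prompt.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Shorten any sentence over 20 words and define any remaining technical terms."},
]
refined = client.chat.completions.create(model="gpt-4o", messages=messages)
print(refined.choices[0].message.content)
```

As with any AI-generated draft, a human must review and verify the output before it goes anywhere near a submission.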
Q: If we create PLS for everything using AI, might this not add to the ever-growing flood of scientific data? Is there a need for greater curation too? Or some form of patient lay synthesis of multiple publications?
A: This is a strategic decision to make with the authors. Does the creation of a PLS meet the needs of your audience? Tools like AI make these things easier and faster to create, but that doesn't mean you should create them simply because you can.

Q: How can we measure the efficacy of AI-driven solutions in improving medical communication outcomes?
A: Use measures such as time saved on specific tasks, readability scores, error rates, and accuracy. As more use cases are discovered, their key performance measures will become apparent; they will be project and situation specific.
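As one example of such a measure, the sketch below compares readability scores for two drafts. It assumes the third-party Python package textstat (not mentioned in the source); the sample texts are placeholders.

```python
# Minimal sketch: comparing readability of AI- vs human-generated drafts.
# Assumes the third-party "textstat" package (pip install textstat);
# the sample texts and the comparison framing are illustrative only.
import textstat

drafts = {
    "human_pls": "Patients in the study took the medicine once a day for six months.",
    "ai_pls": "In this trial, participants received a once-daily dose over a six-month period.",
}

for label, text in drafts.items():
    # Higher Flesch Reading Ease = easier to read (60-70 is roughly plain English).
    ease = textstat.flesch_reading_ease(text)
    # Flesch-Kincaid grade approximates the US school grade level required.
    grade = textstat.flesch_kincaid_grade(text)
    print(f"{label}: reading ease = {ease:.1f}, grade level = {grade:.1f}")
```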
Q: Assuming AI will take over publications and med comms for the most part at some point in the future, how do you see the role of med comms stakeholders (humans) changing?
A: The key is to focus on adoption and the advantages it brings. Many tasks and drafts will be completed faster, comments will be consolidated more quickly, and figures and tables will be generated and formatted instantly, meaning publications can reach HCPs and patients more rapidly. However, humans still need to be in the loop for accountability and quality, requiring a focus on interpretation and discussion in context. While AI is likely to play a significant role in insight generation and content development, other areas for AI use are less clear, such as scientific platform generation, integrated evidence planning, strategic planning, and omnichannel orchestration. Thus, the assumption that AI will "take over" med comms may not reflect its ultimate role.

AI Disclosure

Q: ICMJE/WAME guidelines ask for full disclosure (and sometimes publication in the manuscript itself) of all prompts and outputs generated. Personally, I would look mad if I wrote out my full conversations with a chatbot! Does the AI Task Force think these guidelines, whilst encouraging transparency, are also themselves barriers to medical writers experimenting with and using large language models (LLMs)?
A: We don't believe this is a barrier. The AI Task Force interprets the ICMJE requirement to disclose prompts as reporting the prompts used when applying AI to data collection, analysis, or figure generation. Generally, if AI generated new content, it is best practice to keep (and report if required) the prompts used to generate that content, as sketched below. This reinforces our call to practice with AI before using it on submitted drafts.
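One way to make that record-keeping painless is a simple local audit log of every prompt and output, appended as you work. The sketch below is one possible approach; the file name, record fields, and helper function are illustrative assumptions, not an ICMJE or ISMPP specification.

```python
# Minimal sketch: keeping a local audit trail of AI prompts and outputs
# so they can be reported if a journal requires it. The file name,
# record fields, and helper function are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_disclosure_log.jsonl")  # one JSON record per line

def log_ai_use(tool: str, purpose: str, prompt: str, output: str) -> None:
    """Append a timestamped prompt/output record for disclosure purposes."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,        # e.g., model or platform name and version
        "purpose": purpose,  # e.g., "figure generation", "literature search"
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a prompt used to generate new content.
log_ai_use(
    tool="example-llm-v1",
    purpose="first draft of methods summary",
    prompt="Summarize the statistical methods below in 100 words: ...",
    output="(model output here)",
)
```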
Q: In terms of disclosure, where wording states "use of AI should be disclosed," where do you draw the line on "use" of AI? Content generation is an obvious yes, but what if AI is just used to prompt ideas, or to summarize a disease area for background knowledge?
A: When AI generates content included in the publication, it is important to disclose this. If AI is used for a brainstorm, but no content from the AI (edited or not) is included in the publication, this might not need to be disclosed. For example, if you use AI simply to educate yourself, disclosure might not be needed; however, if AI is used as part of the research methodology, such as generating a list of publications for a review, then it would be important to disclose its use per ICMJE guidance.

Q: Some journals allow the use of AI in publication development if appropriately disclosed; are reviewers made aware of the use of AI in the publications they are reviewing? If so, do you think that disclosure biases reviews?
A: Reviewers *should* be made aware of AI use in publications they are reviewing, as it is a recommended disclosure per journal requirements and ICMJE guidance.

General AI Learning & Queries

Q: Other than ChatGPT, Gemini, etc., are there any other AI tools you may recommend?
A: The LLM Overview posted on the ISMPP AI web page provides an overview, in Excel format, of various large language models (LLMs) available as of August 2024.

Q: Will the AI Lexicon be available online for the public or just ISMPP members once it is complete?
A: The AI Lexicon is posted on the ISMPP website and is available to the public.

Q: What is the best resource for learning how to create AI prompts?
A: The Prompt Engineering Guide (https://www.promptingguide.ai/) or the Gemini for Google Workspace Prompt Guide (https://inthecloud.withgoogle.com/gemini-for-google-workspace-prompt-guide/dl-cd.html).

Q: How do we encourage professionals to learn about AI and how it can benefit their outputs?
A: Experiment with AI in a safe environment. Have AI open at every opportunity and have fun with it to see how it might or might not benefit you. ISMPP offers considerable educational opportunities about AI and has more planned for its members.

Q: Everyone wants to explore AI, but trepidation still persists. What can we do to encourage more test-and-learn scenarios?
A: Practice and experiment with AI continuously, and remember to "keep a human in the loop" to understand AI's limitations.

Q: Do you have any suggestions on how to increase AI adoption (change of mindset) within an organization?

Q: How can we mitigate the risk of biases from the underlying data used to train AI models?
A: Test the AI output and remember that final accountability for content lies with the human authors. Also, be aware of how specific you are with your prompt; more detail may help mitigate bias, but you still need to verify the output.

Q: Is it really possible to get to zero bias when bias is inherent in the training of LLMs?
A: Because AI is trained on human activity, which inherently has bias, it isn't possible to eliminate bias completely. However, we can reduce or explain bias by ensuring appropriate training sources are in place and disclosed. As long as humans are involved, whether in writing, software development, or interpreting the results, bias will exist. Responsible AI will have built-in tools that attempt to address or reduce bias.

Q: I work for a large pharma company with its own AI tools and high restrictions on the use of public tools; however, we've seen an increase in agency partners using tools like Zoom's AI tool, Read.ai, etc., and sometimes they turn them on before we (the client) are made aware. Any best practices for addressing incongruency between agency practices and client guardrails?
A: Using AI during an online call is analogous to recording the call; therefore, it is best practice to gain acknowledgement before using AI during a call. If you are not comfortable with AI being used during a call, feel free to ask for it to be turned off.

Q: The opposite of the question above: I work for an agency and would never use AI without talking to clients first, but do you have any advice on how to approach those conversations? It may require escalation to procurement teams to grant that permission, and they may demand reduced prices, which is another massive barrier.
A: On an individual basis, discuss the concerns and benefits of using AI: confidentiality, privacy, and speed or efficiency gains. Discuss the value of using AI and how to mitigate the risks of its use. AI is an evolving landscape, and we are still evaluating how its use translates into time and cost savings.

Q: How will you distinguish between "artificial" and "augmented" intelligence? Does AI always mean artificial intelligence? Will AI always be taken to refer to artificial? Will you have to speak/spell out "augmented" every time?
A: Artificial intelligence refers to the type of intelligence produced by machines. While there are multiple definitions, it generally mimics or mirrors human thinking to produce similar outputs. Augmented intelligence, as defined by Gartner, represents humans collaborating with AI to produce synergistic outcomes (1+1=3). The official ISMPP guidance is that when AI is used to generate medical publications, it should take the form of augmented intelligence. AI refers to artificial intelligence; when referring to augmented intelligence, it should be spelled out.