Are you struggling to balance high-quality medical education content creation with increasing demands for varied formats and accessibility?
As CME/CE professionals face mounting pressure to create more content across multiple platforms while maintaining scientific accuracy, traditional writing approaches are becoming unsustainable. Learn how leading scientific publisher Springer Nature is revolutionizing content development through AI integration, offering valuable lessons for medical education professionals seeking to enhance their content creation process.
Listen to discover:
Transform your CME/CE process by learning how scientific publishing's "human-AI handshake" approach can help you create more impactful educational content in less time.
07:44 AI aids in content conversion and audience targeting.
10:36 Encourage AI use transparently for in-house tools.
12:22 Exploring AI support for writing and social media.
17:08 Free research roundups offer interdisciplinary literature overview.
20:57 Novels prioritize style, factual books prioritize clarity.
23:29 Developed LLM prompting expertise with skilled writers.
25:19 Balancing human interaction with scalable automation.
We’ve built a brand new program for you that prioritizes foundational skills, implementation support, and connection. If you’re starting or growing a CME writing business in 2025, this is everything you need to hit the ground running. We’ve never offered anything like it and won’t offer it again for at least another year.
For early access to the bonus package that includes the program + a bunch of bonuses + free 1:1 coaching sessions, click here to get on the waitlist ✨.
There’s a limited quantity of this offer available, so this week we’re sharing it only with those who are on our secret list.
Is the Human-AI Handshake the Future of CME?
[00:00:00] If you're a CME writer or medical educator, you're probably feeling the pressure. Your stakeholders want more content, faster delivery and greater accessibility. They want research summaries, social media posts, slide decks, and multiple versions for different learner levels. And every time you sit down to adapt that complex clinical content for a new format or audience, you think there has to be a better way than starting from scratch each time.
[00:00:29] Well, today's episode might just change how you approach content development. We're pulling back the curtain on how one of the world's largest scientific publishers is using AI to transform its content creation process while keeping scientific integrity intact. And here's the thing: their solutions could revolutionize how you develop CME. Stephanie Preuss, Director of Content Innovation at Springer Nature, shares insights into how AI can reduce the tedium of publishing and support global initiatives like the United Nations Sustainable Development Goals. We also tackle the delicate balance between automation and human oversight.
[00:01:10] The challenge of maintaining quality and integrity at scale, and the ongoing battle against fraudulent content. Listen to discover a practical framework for incorporating AI tools while maintaining content integrity and human expertise, including specific approaches to quality control and oversight. I'm Alex Howson, and on this episode of Write Medicine, we're exploring the human-AI handshake that could be the key to scaling your CME content without sacrificing quality.
[00:01:47] Alex Howson: We're here to talk about incorporating AI tools into, you know, different parts of work in publishing. What prompted you to start bringing AI into your workflow? What were you thinking?
[00:02:03] Alex Howson: What were your initial goals?
[00:02:06] Stephanie Preuss: So one of our missions is to accelerate discovery, and we think that AI can really help and support that journey. Our goal is basically to free up researchers' time: to make publishing less cumbersome, less tedious, make it even enjoyable, and save some time so that researchers can actually focus on their work, on doing research.
[00:02:27] Stephanie Preuss: And then the other thing is the global challenges, like, for example, the Sustainable Development Goals from the United Nations. They ask for interdisciplinary collaborations, they ask for science communication across boundaries of discipline and subject area, and this is again something where AI can be really useful.
[00:02:48] Alex Howson: Can you give me some examples then of how you see AI kind of freeing up space and time for researchers?
[00:02:57] Stephanie Preuss: So, I mean, there are a lot of different [00:03:00] tasks if you go from the research results to an actual article. So AI can definitely help in a stepwise process, which we're approaching carefully.
[00:03:08] Stephanie Preuss: In a broader, long-term vision, we would envision that researchers focus on the research, on the results, on the data, and we help them with AI tools to cater that towards different audiences and make sure it reaches the right audience. A very first step is different things around content conversion.
[00:03:26] Stephanie Preuss: What AI can do really well is, for example, translate content, and not only in terms of language translation but also in terms of converting, for example, a longer piece into a shorter piece, or coming up with something in plainer language that targets a more general audience.
[00:03:43] Stephanie Preuss: Then there's a lot of accessory content around an article. Sometimes you have bullet points highlighting the major findings, or the abstract itself, which is kind of a derivative piece from the complete article. You might have a plain language summary for a broader audience. And then the other thing is science communication and science outreach.
[00:04:04] Stephanie Preuss: A lot of researchers struggle with communicating their science effectively, especially outside their subject area. So this is something where AI can help, just freeing some time by producing first drafts that can then be reviewed by the author. Because I think for authors, it's an easy task to check whether everything is correct.
[00:04:21] Stephanie Preuss: Sometimes coming up with the piece is kind of difficult, but if they have a plain language summary in front of them, going through it and making sure that everything is correct and all the scientific nuances are reflected correctly is something most of them find fairly easy.
[00:04:38] Alex Howson: So this is really interesting.
[00:04:39] Alex Howson: First of all, because I think Springer Nature, correct me if I'm wrong, was one of the first journal networks to issue a kind of AI compliance or ethics statement, way back in early 2023 or maybe 2022. So that's kind of interesting, because AI has been around for a long time.
[00:05:00] Alex Howson: But the type of generative AI that we're all getting antsy about, when that really first burst onto the scene, a lot of people were really twitchy and concerned about what it was going to mean for publishing. Yet you seem really optimistic about using AI tools to support authors.
[00:05:20] Alex Howson: So a couple of things that I want to kind of dig into there. One is when you talk about using AI to create plain language summaries to support the researcher, to support the author, what kind of tools are you talking about? Do you have specific things that you recommend?
[00:05:34] Stephanie Preuss: So what we usually do is build in-house tools ourselves.
[00:05:38] Stephanie Preuss: So it's very important for us that we have a policy that allows the use of AI. We think banning AI per se is not a good solution, because it will either leave people at a disadvantage, since they can't make use of AI tools, or they will use it anyway and just not make it transparent.
[00:05:55] Stephanie Preuss: And to foster exchange and give people the opportunity to experiment with [00:06:00] AI tools, we want to allow the use of AI for written texts. And, similar to other tools and methods, we want people to make it very transparent how they have used it, where they have used it, and what exactly it was that they used.
[00:06:13] Stephanie Preuss: And we think that's really the way forward. The other thing is that we don't want to leave people alone, because there are a lot of different tools, and many of them can be used and sometimes give good results, but it's also a challenge, or an art, to prompt them to get good results.
[00:06:28] Stephanie Preuss: And of course, we also don't want to encourage people to upload unpublished material to open tools that might use the content further for training or other things. So instead, we think it's good, as a publisher, to combine the editorial and human expertise that we have with technology: to provide solutions, provide strong guardrails, and provide things people can use safely, even with unpublished material, rather than leaving them in the open and saying, here are a trillion tools, figure out for yourself how to use them and what to do with them.
[00:07:01] Alex Howson: Kind of eliminating that trial and error process. So can we talk about process then? How do you anticipate authors or researchers working with you to streamline the research development and publication process?
[00:07:17] Stephanie Preuss: So for the paper writing itself, I think it's very early days, and we are exploring very carefully together with the research community.
[00:07:24] Stephanie Preuss: For example, a few weeks ago we had a workshop with researchers in southern Germany where we sat down and discussed what AI can do to support review writing. So we're just going out there, seeing how they usually work, how the process goes, how we can support them, and building something together.
[00:07:40] Stephanie Preuss: Similarly, to get from results and data to an actual paper, we've started experimenting with that too. One thing that is a little more progressed is, for example, social media content. We started, as a service, giving authors small pieces and summaries of their text or of their complete article that they can put on their own social media channels to promote their work.
[00:08:09] Stephanie Preuss: And I also think it's very important to be present on social media, because there's a lot of misinformation out there. It's important that we give science a broader platform and help people be active and put something out there to counter the misinformation.
[00:08:27] Alex Howson: So you're being pretty proactive. How are you identifying research groups or researchers to work with?
[00:08:34] Stephanie Preuss: An interesting question. Some of the stuff we do is just local. We have an office, for example, in Heidelberg, and we have a local researcher group from Heidelberg University that is experimenting together with us, and they are very close by.
[00:08:49] Stephanie Preuss: So it's very easy for us to just connect, be in touch, and do something with them. We have an office in Pune, so we have some researchers in Pune that we can reach out to. So it's a little [00:09:00] bit of a personal network, I would say, that we have built over time, of people that we trust. We are quite open and transparent: for our first AI-written book, for example, we even had the press come along and report on how we did that together with the authors, how they went through the process of writing the first GPT-generated book.
[00:09:21] Stephanie Preuss: But of course, you also need to have a trusted relationship, because a lot of the things that we share with the researchers are really early days.
[00:09:30] Alex Howson: GPT written book?
[00:09:33] Stephanie Preuss: So it is basically a stepwise process. There are a lot of authors who have a lot of wisdom to share, but they would never publish a book because they don't have the time.
[00:09:43] Stephanie Preuss: It's really time consuming; people take two years or even more to write a book. So we thought, okay, can we build something that, in a very stepwise, very guided and safe fashion, allows authors to come in and have a book ready in seven days? It starts with ideation, and the human author still has to have all the knowledge.
[00:10:06] Stephanie Preuss: So the AI is not there to fill any knowledge gaps, but it can, for example, help you draft a piece of text from a good list of bullet points, or help you come up with a better title for your book. So the idea is that you give a structure.
[00:10:24] Stephanie Preuss: And then you give feedback to that structure, and then you go step by step and basically fill in the gaps. Of course, there are things like suggesting literature to cite or to read. We also have machine-generated books, which we call research roundups. They are kind of a clustered report of research papers, where you say, okay, there's a certain topic that we want to address.
[00:10:46] Stephanie Preuss: Then we search for all the papers that have been published in that area, the AI clusters them into different knowledge spaces, basically, and each of the papers is summarized. Then the author does a fact check, gives an introduction explaining why this is interesting, and adds a perspective, a human touch, and then you can read through the summaries of the different papers that have been published in that space.
[00:11:08] Stephanie Preuss: And a lot of times there's a human conclusion added to it as well. So there are different kinds: the research roundups, the AI-generated books, and the GPT-generated books are a little bit different in nature. One is a generic book, one is more of a literature overview piece, but there are multiple ways that AI is already helping our book authors today
[00:11:30] Stephanie Preuss: to come to a solid publication.
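As a rough, hypothetical sketch of the "cluster, then summarize" idea behind research roundups, here is a minimal Python example assuming TF-IDF vectors and k-means for the clustering step. This is not Springer Nature's actual system; the paper data, cluster count, and choice of algorithm are illustrative assumptions, and a production pipeline would likely use richer embeddings plus an LLM to draft the per-cluster summaries that the human author then fact-checks.

```python
# Hypothetical sketch: group paper abstracts into "knowledge spaces" and
# prepare them for human-reviewed summarization (not an actual Springer
# Nature tool; data and parameters are made up for illustration).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

papers = [
    {"title": "Sea level rise projections", "abstract": "We model coastal flooding under warming scenarios."},
    {"title": "Carbon pricing instruments", "abstract": "We compare carbon taxes and emissions trading schemes."},
    {"title": "Public perception of climate risk", "abstract": "Survey data show how risk perception varies by region."},
    # ...in practice, hundreds of papers retrieved for the chosen topic
]

# 1. Represent each abstract as a vector (a production system would likely
#    use semantic embeddings instead of TF-IDF).
texts = [p["abstract"] for p in papers]
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)

# 2. Cluster the papers into a small number of knowledge spaces.
n_clusters = 2
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)

# 3. Group titles by cluster; each group would then be summarized (e.g. by an
#    LLM) and fact-checked, introduced, and concluded by the human author.
roundup = {i: [] for i in range(n_clusters)}
for paper, label in zip(papers, labels):
    roundup[int(label)].append(paper["title"])

for cluster_id, titles in roundup.items():
    print(f"Knowledge space {cluster_id}:")
    for title in titles:
        print("  -", title)
```

The point of the sketch is the division of labor: the machine handles retrieval, grouping, and first-pass summaries, while the human contributes the framing, fact-checking, and conclusions.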
[00:11:32] Do you like Black Friday deals?
[00:11:35] We've built a brand new program for medical writers who want to specialize in CME and are looking to grow their business in 2025 in a way that prioritizes connection, curriculum, and support. There's a limited quantity of the deal available, so this week we're sharing it only with those who are on our secret list.
[00:11:56] So if you're interested, click the link in the show notes and we'll [00:12:00] send you an email with more information. After that, we'll be sharing the official announcement with everyone else. And now, back to the episode.
[00:12:10] Alex Howson: What's the response from your authors? What are they thinking? How are they feeling about this process? What do they tell you about what they like, what they don't like?
[00:12:20] Stephanie Preuss: So for the research roundups, a lot of times they're impressed and amazed that we do that.
[00:12:25] Stephanie Preuss: We do it for book authors after they sign the contract; it's actually a free service. You can use it to publish a machine-generated literature overview, but you can also just use it to inform yourself and do the literature reading and reviewing before you start the writing process.
[00:12:42] Stephanie Preuss: So it's basically a free service to make sure people start off well in their journey, and a lot of people very much appreciate it. What I think is nice is that it has a broader, more interdisciplinary perspective. Every human has a certain education and a certain background.
[00:13:03] Stephanie Preuss: So you look at things a certain way. For example, there might be a biologist looking at climate change, but there are geoscience aspects, political aspects, social science aspects, and communication aspects to climate change, all different kinds of perspectives, and a research roundup can really help people discover something that is not in their core field and has not been on their radar.
[00:13:26] Stephanie Preuss: And I think that is very, very valuable, because it gives insights, and people might cover and consider things that they otherwise wouldn't have considered.
[00:13:36] Alex Howson: So you can use AI as a kind of thought partner. I don't really like that term, but as a way to kind of stimulate a different way of thinking about material that you're really familiar with as an author or a researcher.
[00:13:47] Stephanie Preuss: Yeah, or just highlight stuff that you might have missed otherwise, saying: there is a geoscience perspective that is somehow related to what you're writing here about climate change, and you probably want to read that. And then as a human, of course, the responsibility is with you.
[00:14:01] Stephanie Preuss: You make the decision and say, yes, that's interesting, or no, thank you. But at least you have seen it; it's an opportunity, and then the question is what you make of it. And for the GPT-written books, I think we really have a completely new set of authors, because, as mentioned, a lot of people don't have the time or don't want to take the time.
[00:14:23] Stephanie Preuss: For a lot of people, I think, it's their free time. They aren't paid to write a book; they do it on the side, especially researchers, on the weekends or in their semester breaks. So if they can get support, their work can be very much focused on the knowledge work and not on making beautiful texts.
[00:14:44] Stephanie Preuss: Because that's also not everybody's expertise. Some people are really good with it, but some are good scientists but actually not such good writers. So if you can take away a little bit of that work of putting the knowledge into text, I think that is very much appreciated and very helpful for people.[00:15:00]
[00:15:01] Alex Howson: As you're saying that, I'm thinking of a couple of things. One is that a lot of people would see themselves as writers. Maybe they're researchers, but they also really like the writing process and have a reasonable facility with it.
[00:15:18] Alex Howson: They would say things like: you have to develop a voice as an author; all authors have a certain style. They have a way of writing that is kind of uniquely them. And I guess if you're new to writing a book, you're less concerned about that; you're more concerned about getting your information out there.
[00:15:38] Alex Howson: But do you see a balance there between developing an AI model that also allows authors to develop their own voice and writing style? Or is that something that's not really under consideration?
[00:15:55] Stephanie Preuss: Yeah, I think the books that we're talking about are of a certain nature.
[00:16:02] Stephanie Preuss: If you write novels, for example, it's very important that they are a pleasure to read, that they are well written, that the language is really good, that they're enjoyable. You might like a certain author and their style, and that's very important. But if you look at a professional book about, say, statistics,
[00:16:22] Stephanie Preuss: then the unique voice of the author is not so important. What's important is that the information gets from person A to person B, that people can actually find it, learn from it, and use it, and that we bring science into practice, that people actually take the time to make sure their findings reach the audience.
[00:16:43] Stephanie Preuss: And if that means there is not this personal touch to it, I think for a professional book or for a scientific article that can be acceptable. I also think it can really help break down barriers. There are people who are really brilliant at writing, a lot of them native speakers, and for them it's really easy to put their findings into a scientific article. But there are also people who have English as a second language, and for them it might be more tedious, it might take longer, and even if they put in a lot of effort, they might not be able to communicate their findings clearly enough to position them well.
[00:17:18] Stephanie Preuss: And I think that disadvantage can also be kind of averaged out a little bit if AI helps them to draft something.
[00:17:25] Alex Howson: Yeah, it's interesting. There's a kind of democratizing effect here, or at least there's an inclusive imperative behind how you're describing the use of AI at Springer Nature.
[00:17:36] Alex Howson: So I have a couple more questions.
[00:17:38] Alex Howson: One is that we hear a lot about prompts and prompt engineering. Can you talk a little bit about the status of prompts in your AI work? How important are they? How do you use them? How do you encourage your authors to think about prompts?
[00:17:55] Stephanie Preuss: So for a lot of the things we're doing, the prompts are [00:18:00] really, really important. A lot of times we use large generic models from the big companies, OpenAI, Google, you name it; there are multiple out there. And the secret sauce that we add is basically in the prompt. We have that human expertise; we have strong editorial teams.
[00:18:18] Stephanie Preuss: They know how to write things. And when LLMs became more available and more prominent, we started to develop that prompting expertise. We actually started developing it with former researchers-turned-writers and editorial people, and I think they have a very special way of approaching prompting.
[00:18:38] Stephanie Preuss: And by now we have built quite some expertise in how to do that. A lot of times, if we say we do plain language summaries, people say, yeah, I can do that with ChatGPT. Yes, you can, but give it a try. Upload something, please don't upload any copyrighted material, but upload something and just say: give me a plain language summary.
[00:18:59] Stephanie Preuss: It will give you something, but it's not of the same quality as what you get with a very sophisticated prompt. There is a lot of magic in a good prompt. Probably over time that will get less important, as the models get better and better. At the beginning, with early models, it was really, really important.
[00:19:19] Stephanie Preuss: Newer models are less sensitive, so at some point it might not be an issue anymore, but at the moment I still think it's really important. And again, there's the option to just say, okay, every scientist needs to become a prompt engineer;
[00:19:35] Stephanie Preuss: every medical writer needs to learn how to write good prompts. Or we put the expertise together in tools and services that already incorporate the prompts, so that people don't have to think about it. Then they think about the content generation process and about quality and integrity, not about technical things like the prompting and the writing.
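To make the naive-versus-sophisticated prompt contrast concrete, here is a minimal sketch using the OpenAI Python client. The model name, word limit, reading level, and prompt wording are assumptions for illustration only; they are not Springer Nature's in-house prompts.

```python
# Illustrative comparison of a naive prompt and a structured, "editorially
# informed" prompt for a plain language summary. Prompt text and model name
# are hypothetical; do not upload copyrighted or unpublished material to
# tools you do not control.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

abstract = "..."  # paste the source abstract or key findings here

# Naive prompt: usually returns something usable, but quality varies.
naive = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Give me a plain language summary:\n\n{abstract}",
    }],
)

# Structured prompt: encodes editorial expertise as explicit constraints.
structured = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a science editor writing for readers with no "
                "subject-matter background. Preserve every quantitative "
                "finding and every caveat; do not add claims that are not "
                "in the source text."
            ),
        },
        {
            "role": "user",
            "content": (
                "Summarize the following abstract in under 150 words, at "
                "roughly an 8th-grade reading level, in three short "
                "paragraphs: context, what was found, why it matters.\n\n"
                f"{abstract}"
            ),
        },
    ],
)

print(naive.choices[0].message.content)
print(structured.choices[0].message.content)
```

Tools that bake prompts like these into the workflow let authors focus on reviewing the output for accuracy rather than learning prompt engineering, which is the trade-off Stephanie describes.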
[00:19:56] Alex Howson: So you're really making it easy for authors. Have you encountered any challenges or bumps in the road that you didn't anticipate when you launched this initiative?
[00:20:05] Stephanie Preuss: Something that's always challenging is balancing scale with keeping the human in the loop. We really want to have human-in-the-loop scenarios.
[00:20:14] Stephanie Preuss: We don't want full automation. Some things you could probably automate, but we don't want to; we want a strong handshake between the AI and the human, and we want to build solutions that not only allow human interaction but really encourage it, or ask for it. Doing that at scale and balancing it is tricky: going slowly enough to build
[00:20:37] Stephanie Preuss: safe, high-quality solutions and keep research integrity, while at the same time keeping pace with technological developments, which are super fast. So I think that's also something that is kind of challenging and tricky. And then, of course, with content generation capabilities there's also a lot of misuse, right?
[00:20:59] Stephanie Preuss: There are [00:21:00] paper mills, there's fraudulent content, and that's a little bit of an arms race. It gets easier to generate fake papers, and it also gets easier to detect them. There's AI to generate them and AI to detect them. It's an arms race that we necessarily need to participate in.
[00:21:17] Stephanie Preuss: And of course, that's also challenging.
[00:21:22] Alex Howson: So publication networks are already using these tools. I mean, plagiarism detection software has been around for a long time, but you're already in the game of developing tools to identify what's been AI-generated when you're looking at publication submissions.
[00:21:37] Stephanie Preuss: Yeah, and it becomes tricky, because we want to allow people to use AI, right? If they do it responsibly, it's fine. It's not that we simply detect and flag AI-generated papers; distinguishing between fraud and paper mills on the one hand, and somebody responsibly and transparently using AI to generate an abstract on the other, is very important.
[00:22:02] Stephanie Preuss: But yes, of course, integrity checks and scanning papers are something AI can help with. That's an additional layer of safety on top of human eyes and editorial review: we have technical checks in place to highlight those things to our editors.
[00:22:18] Alex Howson: So lots of positive benefits there.
[00:22:21] Alex Howson: Crystal ball moment. Looking ahead at the next three to five years, or, since things happen really quickly in this field, maybe just the next two years: what AI developments do you think are going to most significantly affect medical publishing?
[00:22:37] Stephanie Preuss: I think all the accessory pieces, everything that's around the article, will at some point be AI-generated by default.
[00:22:43] Stephanie Preuss: So I think the human role will really be to fact-check and make sure everything's correct. But the article itself, and how it is generated or written, will probably change tremendously too. Our vision is that in the future, researchers focus on doing research, on doing it well and doing it fast, and articles are generated by AI for different knowledge levels and different audiences, and can be published a lot faster and hopefully be a lot more enjoyable.
[00:23:19] Alex Howson: Stephanie Preuss, AI visionary. Thank you so much for sharing your wisdom and insights with listeners of Write Medicine.
[00:23:25] What strikes me most about my conversation with Stephanie is how the human-AI handshake concept could transform our approach to CME development. As educators, we're not just creating content; we're building bridges between complex clinical research and practical healthcare delivery. And maybe, like Springer Nature,
[00:23:46] we can use AI not to replace our expertise, but to amplify it. So, if you're thinking about incorporating AI into your content development workflow, start small. Consider where you spend most of your time adapting [00:24:00] content for different audiences; that might just be your sweet spot for experimenting with AI assistance while maintaining that critical human oversight. Thanks for joining us on this episode of Write Medicine.
[00:24:12] I'm Alex Howson, reminding you that every CME activity you create has the power to support health professionals and improve patient care.
[00:24:21] Until next time: keep writing, keep learning, and keep making a difference.