Have you ever wondered what happens behind the scenes of scientific breakthroughs? Is the future of science at risk in the digital age? Like many fields, science is undergoing rapid changes in the digital era that could compromise integrity and innovation. As consumers of scientific knowledge, we all have a vested interest in an effective system. Join us in episode 99 as we explore the evolving world of science in the digital age, with insights from biochemist and former Cell editor, Ben Lewin.
In this episode, you'll:
- Gain insight into flaws in the scientific publishing process involving peer review, preprints, and the publish-or-perish paradigm
- Understand concerns over reproducibility, questionable research practices, and the influence of money in shaping research projects
- Learn how technology and artificial intelligence are transforming science into a data-driven numbers game obsessed with quantity over quality
Don't miss this illuminating discussion on the complex forces that could undermine science in the 21st century.
1. Science is not a linear, seamless process. Despite the perception of scientific progress as a smooth and consistent journey, it is often messy and characterized by zigzagging developments. The public needs to understand the principles and limitations of the scientific process to demystify the black box image of science.
2. There are emerging concerns about the influence of artificial intelligence, the shift towards "big science," and the potential lack of revolutionary ideas in scientific research due to the pressure to publish safer and more reliable work. Scientists are also at risk of becoming data-generating technicians rather than pursuing innovative questions and answers.
3. Criticisms have been raised regarding the hierarchy of scientific journals, the peer review system, and the prevalence of predatory journals. Despite its flaws, the current system of scientific publishing is considered the best available option for disseminating scientific knowledge.
Lewin B. Inside Science: Revolution in Biology and Its Impact. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press; 2023.
Don’t forget to subscribe to the Write Medicine podcast for more valuable insights on continuing medical education content for health professionals. Click the Follow button and subscribe on your favorite platform.
[0:05] Welcome to Write Medicine, where we explore best practices in creating continuing education content for health professionals.
I'm Alex Howson and I'm on a mission to share expert insights and field perspectives on topics like adult learning, content creation techniques, effective formats and trends in healthcare that influence the type of continuing education content that we create.
Write Medicine is the premier podcast for CME/CPD professionals like you, wherever you are in the content creation process. Join us.
[0:39] Music.
[0:47] I would like to see science moving to something like a sort of thread, like an email thread or a Twitter thread, in which you start out in science, you establish a new thread when you have something in a new area, you publish your first results in it, and then you add to that thread as you obtain new results later on.
[1:06] Music.
[1:13] Have you ever wondered what happens behind the scenes of scientific breakthroughs?
Is the future of science at risk in the digital age?
Like many fields, science is undergoing rapid changes in the digital era that could compromise integrity and innovation.
As consumers of scientific knowledge, we all have a vested interest in an effective scientific knowledge production and publishing system. Join us in episode 99 as we explore the evolving world of science in the digital age, with insights from biochemist and former Cell editor Ben Lewin, who's written a new book, Inside Science: Revolution in Biology and Its Impact. In this episode, you'll gain insight into flaws in the scientific publishing process involving peer review, preprints, and the publish-or-perish paradigm.
You'll understand concerns over reproducibility, questionable research practices, and the influence of money in shaping projects.
And you'll learn how technology and artificial intelligence are transforming science into a data-driven numbers game obsessed with quantity over quality.
Don't miss this illuminating discussion with Ben on the complex forces that could undermine science in the 21st century.
This is a must-listen for anyone passionate about the future of science and knowledge.
[2:39] Music.
[2:48] And welcome, Ben. Hello. Nice to see you.
Well, actually, I can't see you because we're not able to record with video today, but that is okay.
It is a podcast, and we are fully invested in audio.
So thanks for joining us on Write Medicine, Ben. Could you just share a little bit about who you are and what you do?
Well, I started out as a scientist with the intention of doing research in biochemistry.
But when I actually started doing research, I found that it was so time-consuming that I lost sense of the thread of science as a whole.
In fact, I was criticized quite a bit when I started out because I spent so much time in the library reading about other things.
[3:33] It was felt I should be doing my own research, not reading.
But for me, that took away the point of the whole thing.
If you couldn't understand what was going on in science generally, then it didn't feel like you were really sort of participating in it.
[3:46] So one thing led to another, and I went to the journal Nature in order to start a sub-part of Nature called Nature New Biology.
I was there for a year or so, this was in England, and then I came out to the States, and I started the journal Cell to do things in a new way.
Up to that point, you could only really publish an important scientific observation quickly if you published it in a very abbreviated form that didn't really give people a proper understanding of how to reproduce it, and then months or even years later you'd publish a full account. I didn't think that was a very satisfactory way to do science.
And so I started Cell as a journal in the life sciences with the objective of publishing quickly and completely.
This was very successful. Cell started out at MIT. It rapidly became the leading journal in biology.
In fact, I would say we published many, maybe most of the major breakthroughs over the next couple of decades.
This was in the 70s, 80s, and 90s. I continued as the editor until 1999, when I just decided I would do something different.
At that point, I started writing about wine, and I've written a series of books and guides on wine.
But I have become increasingly concerned about the way science isn't well understood by the public.
I decided it'd be fun to go back to science, although not in quite the same capacity as before.
So I have started writing about science for, I suppose you would say, for the interested layman.
[5:18] That's interesting. I remember sometime, maybe, let me see, 15 years ago or so, either the UK government or the Scottish government launched an initiative on the public understanding of science.
I'm pretty sure it was anchored at the University of Edinburgh.
I'll need to kind of dig into that and try and pull up some information for the show notes.
But that sense of the deficits of the public understanding of science is something that's been around for a while. It goes way back.
The first occasion I found of a complaint about it comes from Charles Babbage, the polymath of the mid-19th century, who wrote a book with a title something like the calamitous decline of science in England.
And then, you know, 50 to 60 years later, there was a committee, I think, from the Royal Society that deplored the terrible state of science education in England.
And then, of course, more recently, there was C.P. Snow and The Two Cultures, complaining that the humanities and the sciences are just different things and people in the humanities don't understand the sciences.
This is a really long-standing division.
[6:32] Yeah. So what should we be understanding about science?
You say you're writing for the layperson. What are some of the things that you're finding that are deficits in that understanding?
Well, I think the main problem is that people who are not scientists view science as a sort of black box.
It's a sort of caricature of science, really.
They view it as something which delivers results, which lead to technology and so on, but they don't have any real understanding of what is involved.
By that, I don't mean that people should be able to understand the technical details.
I mean, indeed, people in one area of science don't necessarily understand the technical details in another area of science.
But I think it would be much better if the public were able to understand the principles on which science works.
During the COVID epidemic, we had a lot of cries of "follow the science," and it was painfully obvious that many of the people yelling "follow the science" actually had no idea what science was about.
And that, I think, explains quite a lot of the missteps.
People don't understand where science can be relied upon, where it has limitations.
And I think it would be a much healthier situation if instead of either having to accept or reject results without any understanding, there was some ability to understand the means by which those results are obtained and what sort of limitations that places upon them.
[7:59] And so the object of my book is really to say to people, this is how science is really done, to explain how science works.
You might say warts and all, I suppose, because it's not a perfect endeavor by any means.
And I don't think it should be idealized or mythologized, but I do think people should understand how it really works.
Well, take us inside science, which in fact is the name of your book, Inside Science: Revolution in Biology and Its Impact.
Can you kind of take us inside science a little bit and describe how you see it working in 2023 and not working?
Well, people have a view of science, a very sort of, as I said, caricatural view, in which it sort of progresses in a straight line.
People have hypotheses, they test those hypotheses.
[8:50] Depending on the results, they support the hypothesis or reject it and go for another one. And the whole thing moves forward.
That view of science is very much encouraged by the scientific paper, which is a sort of mythological endeavor in itself.
Back at the start of the 17th century, Francis Bacon, famous for saying "knowledge is power," also said that discoveries are never reported in the order in which they are made.
And he was centuries ahead of his time.
So results in research are usually presented as though the work was done in a very logical way.
But science is much messier than that.
It's a zigzag. And a lot of scientists are stumbling on interesting observations and pursuing them.
But then when you write the work up, you write it up as though the whole process had been entirely logical.
It's an irony that science itself relies upon being completely truthful about your results, but writing them up is a misrepresentation of what happens.
And so I try to explain this, how science really works, how it validates itself.
[10:00] With the idea that people will have a better understanding of the process.
And in your work as editor at Cell, what were the manifestations of that
[10:14] problem? You know, the idea that scientific papers are written up in a way to make the process look seamless and without rough edges. It can be forced. So when we would receive a paper at Cell, it would always start out with an introduction which said sometimes, "Previously we have shown" something, or sometimes, you know, "It is generally known that" or "believed that," and so on and so forth, with references given. And if it was not an area I was familiar with, I would go and look up those references.
And I just can't tell you how many times it turned out that what had previously been established or was previously thought to be established was very similar to what was represented in the paper, but not exactly the same.
There is this constant sort of spin, as it were.
And for me, that was the main issue in assessing a new paper in an area I didn't know, was to get an accurate bead on what had been known previously.
[11:17] Authors try to present their work in such a way as to imply that they have had a novel idea and they have tested it.
That's not always the case. In fact, the way science works today, and this became an increasing theme while I was editor of Cell, is that because it is more difficult to get funded, and because you need to have a reliable stream of scientific papers in order to maintain your reputation and to get tenure at a university, there is a tendency to play it safe.
Instead of going after something which is a bit way out but might be revolutionary,
[11:56] to go for something which is much safer, more reliable, more likely to produce results, so that you can keep your stream of papers going.
That's been a tendency probably for 50, maybe even 100 years, but it's become much accentuated, I would say, in the last 20 or 30 years.
Yeah, that pressure to publish is definitely strong. I am not a scientist.
[12:20] I'm a social scientist.
But I've worked in higher education, and I know how strong that pressure is to get work out there.
But it's interesting the way you describe that in terms of playing it safe, because I think my next question is around, you know, there's something going on in higher education, in academic institutions that is reproducing or reinforcing that sense of safety.
You know, young scientists are being taught to write in a particular way and to, you know, format papers in a particular way.
And the other thing that's happening, of course, is that pressure to publish is resulting in just a flood of predatory journals to house that work.
So is this something that you focused on in your book? Yeah.
[13:09] Yes, I talk about the question of the scientific paper, and I address the issue of the irony, really, that a scientific paper today looks very much like a scientific paper looked 100 years ago.
But it was originally formatted, the whole design of the scientific paper was predicated on the fact that it had to be included in a journal, which had to be printed, and which had to be distributed.
Now that we are moving to an electronic era, that is no longer true.
And I think it raises the question, is the scientific paper still the appropriate form for communication?
The main impact of the digital arena has been to make it possible to publish scientific papers much more quickly, and to make it possible to give more details, lots of ancillary information which people can go to if they want to check stuff. All of that was difficult before. But the format of the paper, the concept that you write a discrete paper at a certain point in time, and that you publish it in a journal, one of many, many different journals, that's still there.
And I am not sure that is the way science should proceed in the future.
[14:24] Can you say a little bit more about that then? Because I think certainly in the practice of medicine, in the writing up of scientific medicine, that model is still very, very mainstream.
For people who are working on phase three clinical trial data, their goal is often to get published in the New England Journal of Medicine.
That's the creme de la creme for getting that phase three clinical trial data out there. What's the alternative?
Well, some journals do, of course, have much higher prestige than others.
In medicine, you have the New England Journal, you have the Lancet. In science in general, you have Nature and Science and Cell, known somewhat pejoratively, somewhat affectionately, as the CNS, and it makes a big difference if you can publish in one of those journals. On the other hand, out beyond that, there is a very large number of journals of very variable quality, most of them published by large commercial publishers, most of them so horrendously expensive that only a major library can afford to buy them.
[15:33] And I don't think that furthers the cause of science at all.
So when you get a new scientific paper and you read it, it's going to have references probably to prior work in maybe a dozen, maybe 20, 30 journals.
And it's really very disruptive to go and find each of those journals one by one and look up those papers.
I would like to see science moving to something like a sort of thread, like an email thread or a Twitter thread, in which you start out in science.
You establish a new thread when you have something in a new area, you publish your first results in it, and then you add to that thread as you obtain new results later on.
That would have the advantage that it would all be in one place, and the issue to which I referred before of spin, of going back to discover that what was previously known wasn't quite what the paper represented it as, would of course be much less, because it would all be in one thread.
So it would be immediately obvious what had been discovered before.
Obviously, we need some sort of gatekeeper function, because if people just put stuff up without any sort of peer review process, quality will be very variable.
We would have to work out how you would do peer review on something like a thread.
We would have to work out how to replace this myriad of journals with something more centralized.
[16:52] That's already happening to some degree. Some publishers now have websites on which they basically put all their journals, and you can go to that website and say, give me all the papers on some topic across all the journals.
So the journal is beginning slowly to disappear. The journal was invented basically as a distribution mechanism.
Now that we have the internet, we don't need that distribution mechanism.
So we need, I think, to do two things.
We need to replace the journal with something more centralized, but yet controlled in the sense that we have peer review.
And we need to change the format of the individual contribution from a paper into something which is more flexible.
That flexibility will chime well, I would imagine, with younger generations (that betrays my age, or my vintage), certainly Gen Z and Gen Y, who are much more familiar with shorter, more accessible types of information.
What about peer review? You know, you touched on that a little bit.
There have been some criticisms of the peer review process for quite a few years now.
So let me backtrack and say exactly how it works.
You send a paper into a scientific journal. The editor of the journal looks at the paper, decides who else in that field is sufficiently expert to tell him whether it's correct or not, and sends it out, usually to two peer reviewers.
[18:21] The strength of the system is that the same people are peer reviewers and researchers.
So I may send a paper into a journal. It goes to Dr. X to review.
Two years later, Dr. X's paper might come to me to review.
[18:35] So everybody has an interest in making the system work because they play both sides of the game. Of course, there can be conflicts.
You try, for example, never to send a paper out to someone who is doing research in exactly the same line of country, because they would have a vested interest.
I would say that when I was running Cell, probably 90% of the papers we sent out received basically the same recommendation from both reviewers.
So it was fairly straightforward as to what to do with it. And about 10%, the reviewers would have disagreements, sometimes quite violent as to whether the paper was really good or really bad.
And then, of course, occasionally there would be some conflict of interest or some review that wasn't quite honest.
I'd say on balance, the system works pretty well.
It's far from perfect, but it's very much, as Churchill said about democracy, the worst system we've got, except for all the others that have been invented.
What about the tendency for some journals, and thanks for describing the ideal process, the rhetoric, if you like, because one of the things that's happening in publishing, of course, is that a lot of journals are moving to a model where you submit your paper and you suggest
[19:46] potential peer reviewers.
[19:48] What's your take on that practice?
[19:51] I would be very skeptical of anybody suggested as a peer reviewer by an author.
I think it's quite important you keep the process independent.
I should have said, I omitted to say, because I just took it completely for granted, that the peer reviewers are anonymous, that the authors do not know who they are. Now, that also is changing.
Some journals now are using signed reviews, and that goes back to the issue of the thread we were talking about earlier.
You can publish the author's research results, and you can publish a signed review with them, saying, this is what I see as deficiencies or advantages, and let everybody judge for themselves.
[20:28] I've changed my mind a bit about this. I always used to think reviewers must be anonymous, because if they weren't anonymous, they wouldn't feel free to comment without fear of retribution.
And I also worried that if the authors knew who they were, the authors would put pressure on them.
And I felt that the process should be kept quite independent of that.
I've sort of come around to the view more recently that if you're writing a fair and honest review, why shouldn't you put your name on it?
I think the risk of moving to signed reviews generally is that reviewers may tend to pull their punches because they won't want to run the risk of retribution.
And so the process will probably be less precise than it used to be, but there would be advantages to that as well.
[21:16] There is something to be said about having a transparent conversation about not just the results of the science but the process of deciding how and when those results should be shared with the public. Yes, and things have moved a bit in that direction. So, for example, when I was running Cell, it was taken as absolute ground zero that you did not publicize your results in any way, except by sort of talking about them at scientific meetings; you certainly wouldn't put out any written communication about your results before the paper had been published.
And that's really been stood on its head now in the form of systems for distributing what used to be called preprints, i.e.
scientific papers that have not yet been reviewed, not yet necessarily been submitted to a journal.
Indeed, one journal has now said that it will only consider papers for publication that have been circulated in a preprint system.
I guess the thinking there is that if there's something wrong with it, showing it to lots of people in the preprint system will winkle out the errors.
That would be somewhat of a substitute for individual peer reviewers, I suppose.
[22:30] We saw a lot of preprints in the early days of the pandemic, sort of early to mid-pandemic, but there were a lot of problems with those preprints precisely because they weren't peer-reviewed and some of those preprints were subsequently found to have quite significant errors. What's your take on that?
I think that's partly a function of the pandemic. When you have a new area of science or medicine with immediate implications
[23:01] for human welfare, which is not at all well understood, lots of people jump into it.
We saw very much the same thing with the AIDS epidemic.
When AIDS started, lots of people who were not necessarily very good scientists, but could see that it was, I hate to be sort of pejorative about it, but they could see it was easy pickings really, went into the area, started publishing not very good papers.
We saw the same thing with COVID, lots of not very good papers, because people rush in because it's fashionable, it's high profile, and that's a real weakness to pay attention to those papers before they've been peer-reviewed.
[23:38] But even peer-review may not save you.
I mean, there are plenty of peer-reviewed papers that turn out not to be very good because they're published in not very stringent journals.
Really, in assessing a paper, you need not only to think about whether it has been peer reviewed, but also about the general reputation of the journal in which it has been peer reviewed.
[24:05] And the metrics for determining the reputation of a journal have been shifting over the last, I don't know, two decades or so, at least.
Can you talk a little bit about what you're seeing there in terms of, you know, how should a young scientist, for instance, who is eager to get published, start thinking about, you know, which is going to be an appropriate journal with a solid reputation?
Well, there's a whole hierarchy of journals. If we go back sort of maybe 50 years or so.
[24:41] Basically, as we said earlier on, two journals in medicine stood out, and two journals in science as a whole stood out, Nature and Science.
And about, I forget exactly when, but I think around about the late 1960s, early 1970s, the concept of the Citation Index was developed.
That's a sort of parameter for assessing the impact of a journal or a scientific paper, which basically looks at how many other people cite it after it's been published.
When that was developed, it became apparent that there's a very high concentration of important research into a very small number of journals.
[25:20] You can, as it were, order a hierarchy of journals according to their citation impact. In medicine, Lancet and the New England Journal would come at the top and others would come much lower. So the way people tend to submit papers is they go to what amounts to the highest journal in the hierarchy that they think would accept the paper, and if it gets rejected, then they try a journal that's lower in the hierarchy. The measurement of the impact of papers by citations is a very mixed bag. You can see that citations are an imperfect science by what happens to papers that are retracted.
Sometimes papers are cited more often after they're retracted than before.
And probably that just means people didn't know they were retracted.
Probably they're lifting citation lists wholesale from other places.
I have to say, it gives you a slightly cynical view of the whole process.
[26:20] Absolutely. So we've been talking more about publishing than about the actual science itself.
So let's kind of pull back to one of the things that you said right at the beginning of our conversation about how science works and how it functions.
What are some of the kind of key changes that you've been able to trace in scientific conduct over the period that your book focuses on?
Let me backtrack to the principle of how science should work.
This is the famous self-correcting mechanism.
You publish a paper, you have a theory, the theory makes some predictions.
Somebody else tests those predictions.
[27:04] If they find that those predictions are not borne out, then other people will go back and look at your data and say, are your data correct?
Are your data incorrect?
And sooner or later, those data will be corrected.
It's a self-correcting mechanism, which means that down the road, anything that isn't right or anything whose implications can't be substantiated will, as it were, be thrown out of science.
Now, I say sooner or later because there can be quite long delays.
So there's this sort of strange dilemma, really, that runs through all science.
On the one hand, there's a constant questioning as you look for new data.
Just as a side note, people sometimes sort of view science as being arrogant or scientists as being arrogant because, you know, they believe in the objectivity of their data.
But actually, I view science as a subversive activity because the whole basis of science is to question authority.
Anyway, aside from that, we're questioning for new data all the time.
On the other hand, there's a sort of reluctance to let go of existing theories.
I mean, for example, Watson and Crick knew that the structure of DNA was the key question in biology for the century, because they knew it was the genetic material.
[28:14] But years and years earlier, it had been argued that the genetic material must be protein because DNA couldn't be complex enough to be the genetic material.
And indeed, Oswald Avery showed in 1944 that DNA was the genetic material.
He died in 1955, so that was two years after Watson and Crick published the model for the double helix.
But Avery was still not given a Nobel Prize because the Nobel Committee wasn't convinced of the importance of DNA.
So more than a decade after the discovery, people still hadn't come around.
I could cite lots of other examples, but the drift of it is that people get caught up in dogmas, doctrines, groupthink, and so the data will ultimately rule, but there can be quite a long delay before people accept new data and adopt a new theory.
[29:06] The social, economic and political context in which science occurs is often much more influential than we really want to consider.
And you mentioned AIDS earlier, and that's one of the things that we definitely saw with the AIDS epidemic, the way in which a lot of science was co-opted and the way in which a lot of scientists were really running after the money, as you kind of hinted at, because there was a lot of government funding for AIDS research in the late 80s.
Anyway, but back to this kind of wider question of, you know, science doesn't occur, it doesn't operate in a vacuum.
It operates in a context where there are egos, where there are money grabs, and where there are people who are being very strategic about what they're doing for reasons of power and so on.
And that takes me back to the question I asked a moment ago before you laid out for us how science should work, is when you look at what is happening in science right now, what are the things you're most concerned about in terms of how it functions, how people approach it?
[30:18] I'm concerned about two main issues.
One is the use of artificial intelligence, and the other is the move to what you might call big science.
[30:30] So science started out with individual researchers, really, and then individual researchers became small groups, and small groups became larger groups, but now they quite often have become really large groups.
When Ernest Rutherford discovered the proton in 1919, he published the results in a paper where he was the sole author, just him.
Now, when they reported the Higgs boson in 2012, there were thousands of authors on the paper.
[31:00] It's a completely different way of doing science to have large-scale equipment and huge numbers of authors.
And my concern about that is not that the data aren't right or interesting; it's that if you are a small cog in a large wheel, do you see the whole picture? What I worry about is that scientists, instead of being researchers who think about questions and how to answer those questions, become technicians who think about how to get lots and lots of data. That goes hand in hand with the transition from sort of what you might call small-scale, hypothesis-driven science to data mining, or big data. When you become a scientist, almost the first thing you're told is that a correlation is not proof of cause and effect.
But when you move to big data, you stand that on its head.
Basically, you don't look for cause and effect. You look for correlations.
Then you ask, are those correlations significant?
What can we make of them? It's a completely different mindset, and I just wonder whether down the road that is going to answer the questions that need answering.
[32:13] The other concern I have is with artificial intelligence.
So there is a principle in science that when you publish results, you should publish them with all the details that are needed for another scientist to reproduce them.
And that's an important part of the self-correcting mechanism, that other people can reproduce your results, and if they can't, then we have a problem.
[32:38] But if you use artificial intelligence to analyze your results, then there's the question of, well, how do you validate it? How do you reproduce it?
Because the AI program is, as it were, working on its own terms.
And you, the author, the scientist, may not really know how it works.
I mean, there are some very interesting examples recently where AI algorithms have been used to analyze data. And the results have been quite happily accepted by scientists.
[33:10] Whereas I suspect if a human, if a real person, had analyzed those data and produced the same conclusions and explained how they got there, there would have been a lot more questioning of whether those conclusions were correct.
So I worry that we are moving away from the principle of reproducibility to an era in which we can get mistakes embedded in science, and we won't easily be able to understand how they happened.
Of course, there's a collateral issue there, which is if we're using AI in medical work to analyze medical data and treating people on that basis, once again, how can you be sure you haven't made a mistake?
Yeah, it's interesting that the move there is kind of ceding authority to technology, when really, as you said so eloquently earlier, science is about questioning authority.
[34:09] There's an unsatisfactory kind of shift going on there.
I just have one more question, but it's interesting when you're talking about the thousands of authors involved in the Higgs boson publication.
I can't speak to that, but one of the things, of course, that you do see in medical publishing is multiple authors because that's partly because of the nature of clinical trials.
They are increasingly distributed across global sites.
But also, and there's some pretty decent research on this, I'm sure you encountered this when you were writing your book, there are a lot of invitations extended to people to consider themselves as authors, even though there are reasonably clear criteria for what constitutes authorship.
There are still ways that people try to kind of get around and over those criteria and invite people to consider themselves as authors on particular papers.
And that's another conversation, another podcast episode.
[35:13] What do you most want readers to take away from your argument in Inside Science in terms of how science functions and what we should be thinking about?
I would like them to understand that science is not predictable.
[35:28] And that attempts to direct it for a specific purpose are likely to be counterproductive.
You can simply never tell where an observation is going to lead, and sometimes it's the most arcane, apparently completely irrelevant observations that have led to major discoveries in quite distant areas.
For example, Francisco Mojica worked on a very obscure bacterial system and had great difficulty getting funding for it, because everybody said, oh, it's just to do with bacteria, it's not very relevant, it's not very interesting.
And 20 years later, we have the CRISPR system for gene editing, which is a direct descendant of that work.
You just never know where science is going to go. And so when you hear political arguments that we should fund science, that we should direct science in certain directions because that's where we think the results will be most useful, this is counterproductive and very ill-advised.
I think we have moved quite a way in that direction in the past 20 years.
I think we should go back to the notion that we support science because knowledge is power.
[36:41] That is a very good way to end our conversation.
I appreciate you taking time to share your wisdom and insights with listeners of Write Medicine on the much-debated question, what is science?
And Benjamin Lewin, thank you for sharing your wisdom and insights. Thank you.
[37:09] In today's episode, Ben highlights the crucial need for the public, for all of us, to understand the principles and limitations of science, and to support science for the sake of knowledge, and to understand the unpredictability of scientific discovery.
[37:25] At the same time, while the current system of scientific publishing has flaws, it remains the best available option, underscoring the importance of transparency and self-correction within the scientific community.
However, there's a growing demand for a more centralized format for scientific communication in the digital age.
And in fact, we explored this idea of rapid publishing in episode 44 with Mark Riotto, who shared his insights on how to promote a more visual experience for disseminating clinical data in a timely, transparent fashion.
I'll include a link in the show notes.
And then we're back on Monday with Monday Mentor, which is also episode 100 of the Write Medicine podcast.
Thanks to all of you for listening to the podcast and helping us reach this milestone, which is meaningful for me because 47% of all podcasts produce fewer than three episodes before they turn off the mic.
And of the 3 million podcasts available, only 156,000 make it past 10 episodes. So thank you for helping us get there. Until the next time, connect with me on LinkedIn and sign up for Write Medicine Insider for CME and podcast updates. There's a link in the show notes. Go gently.
Benjamin Lewin obtained his undergraduate and graduate degrees from the University of Cambridge, England. He became the first Editor of Nature New Biology in 1971, and then worked at the National Cancer Institute from 1972 to 1973. He founded the journal Cell in Cambridge, Massachusetts, in 1974 and remained its Editor until 1999. Cell became the top-ranked journal in the life sciences. Dr. Lewin is also the author of the best-selling textbook Genes and a series of books on wine. He divides his time between New York City and London.