Base Pairs podcast
Science is a process, something we learn in elementary school as we plan our papier-mâché volcanoes. First, a hypothesis is put forward. Then it is rigorously tested through observation and experimentation, and finally the scientists publish their results.
But one step that was overlooked at your fifth-grade science fair is absolutely crucial: the experiment should be reproducible by others using your methods and materials.
Reproducibility is the ability to re-conduct an experiment as the original research team laid it out and obtain the same results, thus confirming their conclusions. If a study is not reproducible, it is possible that the original researchers did not carefully outline their process, that there were unaccounted-for factors, or, worse still, that the results were just a fluke.
To protect against irreproducible research, scientific journals use a process called peer review, in which other experts in the field review the paper and judge whether the work meets the field’s standards. But sometimes even peer-reviewed research proves irreproducible, as CSHL Fellow Jason Sheltzer says in the latest episode of Base Pairs: “[S]ometimes in science you can answer a question using two different techniques and get the same answer, but you pull a third in and it gives you a different result.” The problem, Jason says, is that “science has to be internally consistent.”
In recent years, researchers across multiple fields have worried about the publication of irreproducible research. Fears of a reproducibility crisis grew in March 2015, when Science published a report on attempts to replicate 100 psychology studies: more than half of the results could not be reproduced. In May 2016, Nature published the results of a survey in which some 1,500 scientists were asked whether there is a reproducibility crisis. 90% believed there was either a “significant” or “slight” crisis, 70% said they had tried and failed to reproduce another team’s experiments, and over half said they had been unable to reproduce their own work.
One result of this crisis is increased awareness of reproducibility, and scientific institutions are taking steps to improve it. The National Institutes of Health (NIH) has put several initiatives in place, including training modules and grant application checklists, while the journal eLife is publishing the results of the Reproducibility Project: Cancer Biology, an effort to independently replicate the results of several high-profile cancer biology papers. Through initiatives like these, the reproducibility crisis is leading to a self-correction in the way science is conducted.
Learn more about reproducibility, including how science journalists from NPR and Retraction Watch feel about this “crisis”, on this latest episode of Base Pairs.
BS: With me, Brian Stallard
AA: And we’re really thrilled to be starting this new season of Base Pairs! But first, I wanted to make a short-but-exciting announcement: Base Pairs and CSHL’s blog, LabDish, have officially moved!
BS: Cue the Music!
[m: Tada!/parade music]
AA: Oh! I, uh, wasn’t expecting… [clears throat] Well, anyway, CSHL.edu has just undergone a huge upgrade [BS: It’s bigger and better than ever!] and with it, you can find every LabDish post and the whole episode list—all two complete seasons—of our Base Pairs podcast.
BS: Right! And as always, we can still be found on SoundCloud, Stitcher, iTunes, and wherever else you get your podcasts.
[parade music fades]
BS: But let’s get straight into today’s episode! And for it, Andrea and I have decided to dive into a subject that many scientists and science enthusiasts…
AA: …which I’d guess is most of you, dear listeners…
BS: …yup, it’s something that you guys may be familiar with already… and might even be a little worried about. [p] That’s because today we’re going to talk about what many are calling science’s “reproducibility crisis.”
[MT]
IO: It’s a little bit like, if you provide enough information, like grandma and her recipe for meatballs, then the meatballs should more or less come out the same.
AA: That is Dr. Ivan Oransky. He’s a Distinguished Writer in Residence at New York University’s Arthur L. Carter Journalism Institute and the co-founder of the website Retraction Watch.
BS: That’s him! I reached out to Ivan because he has written a lot about the so-called “reproducibility crisis,” and I was hoping he could share that knowledge with us. [p] So, of course, the first thing we talked about was meatballs.
IO: Now, in terms of grandma’s meatballs, I want a little variation, a little variability, otherwise, life becomes very boring. Biology has that natural tendency—biology has natural variation, natural variability, and so that’s to be expected. It’s not that you would expect to get the exact same results every single time.
BS: But you would still expect to get meatballs… Now, this is a metaphor, obviously, but it really gets at the heart of what we mean when we say “reproducibility” in this episode.
AA: Ok, then let’s say that I, a chef, want to make the next great meatball. I’m reading my cookbook literature and I stumble upon a meatball recipe that I just HAVE to try, and then, maybe build upon. So, I set up my kitchen and get to work.
BS: Now in this metaphor—now follow me here—chef is to scientist, recipe is to paper, cookbook is to journal, kitchen is to lab, etcetera, etcetera, and so on.
AA: Right and at the end of it all, after following the recipe as closely as I can, I have made…
BS: An apple pie.
AA: [laughs] A what?!
BS: An apple pie! Or the most delicious chicken cordon bleu ever, orrrr maybe just a charred square of what was once chop meat. Whatever your result, it’s clear to you and me that that’s not meatballs. Even accounting for the natural variability of biology, like Ivan said, clearly, there was something wrong with the recipe you used.
AA: In other words, the paper’s result—if we step away from the metaphor—was not reproducible. [p] But then what? Say I find out that something is wrong with this paper. What happens then?
[MT: explainer]
BS: Well, one of the celebrated parts of science is that it undergoes peer review and, in turn, is self-correcting. If enough folks realize there is something wrong with a recipe, they stop using it. Maybe an edit is made. Or maybe the recipe itself is removed from the cookbook entirely.
AA: That last part is called a retraction—when a paper’s author or the journal where it’s published actually takes it down. And being part of Retraction Watch makes Ivan and his colleagues particularly aware of this kind of thing.
IO: So, the rate of retractions has definitely been on the rise. It’s actually a pretty dramatic increase from the year 2000, when there were about 35 retractions in the literature out of probably about a million papers published. In 2016, the most recent year we have up-to-date information for, there were more than 1,300 retractions out of about two million papers published. So, obviously the denominator increased, but overall that still represents a pretty significant increase in the number of retractions and, more importantly, the rate of retractions.
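(A quick aside for the numerically inclined: below is a back-of-the-envelope sketch, in Python, of what that change in rate looks like. It uses only the rough figures Ivan cites above, so treat the output as approximate rather than an exact accounting.)

```python
# Rough retraction rates from the figures Ivan cites above:
# ~35 retractions out of ~1,000,000 papers in 2000, and
# ~1,300 retractions out of ~2,000,000 papers in 2016.

def rate_per_100k(retractions: int, papers: int) -> float:
    """Retractions per 100,000 published papers."""
    return retractions / papers * 100_000

rate_2000 = rate_per_100k(35, 1_000_000)     # ~3.5 per 100,000 papers
rate_2016 = rate_per_100k(1_300, 2_000_000)  # ~65 per 100,000 papers

print(f"2000: ~{rate_2000:.1f} retractions per 100k papers")
print(f"2016: ~{rate_2016:.1f} retractions per 100k papers")
print(f"The rate rose roughly {rate_2016 / rate_2000:.0f}-fold")  # ~19x
```

Even though roughly twice as many papers were published in 2016, the retraction rate still climbed nearly twentyfold, which is exactly why Ivan stresses the rate over the raw count.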
BS: Now, Ivan was careful to tell me that knowing the rate of retractions lets you know one thing for certain: The rate of retractions. However, he added that if he had to guess, he’d say the rising rate is –
IO: due to at least two factors. One of them is pretty clear, which is that we’re all better at finding problems in the literature. There are more people looking at papers. It’s also, certainly, at least possible that there’s more misconduct happening.
AA: Oh my. Misconduct. Ivan’s talking about the possibility of fraud. That can happen in highly competitive environments, and science, of course, is not immune. However, in our discussion today, Brian, we’re actually going to focus on that other part, right? The fact that we’re getting better at finding problems.
BS: That’s right. This increased scrutiny of the scientific literature has led to the discovery of all these papers that, despite being driven by hard work and genuine science, STILL can’t be reproduced. In fact, a stunning 2015 analysis from the non-profit Global Biological Standards Institute in Washington, DC attracted a lot of attention. It estimated that billions of dollars are spent each year on biomedical research that cannot be successfully reproduced. They went so far as to say we might have a “reproducibility crisis” on our hands. But… that might not be the best name for it.
RH: I don’t think this is a crisis, because I think this has actually been a problem in science for a long time.
BS: And that is Richard Harris.
RH: I’m Richard Harris. I have been a science correspondent at NPR for 32 years. I wrote and published a book last year called “Rigor Mortis,” about rigor and reproducibility in biomedical research.
BS: The Financial Times called “Rigor Mortis” “a rewarding read for anyone who wants to know the unvarnished truth about how science really gets done.”
AA: Oh! I’ve heard of this book. It describes a lot of the reasons why research may not be reproducible and the problems that this can cause in academia and industry alike, so I was happy to hear Richard had some good news too.
RH: People are now aware about the scope and the seriousness of this issue, and I think that’s good news because I think that means people are thinking about how to make it better.
BS: However, Richard was quick to add that in the case of irreproducibility, it may be that we first want to see even more corrections and retractions.
RH: I think there’s … a little bit of trepidation about admitting errors. If it’s a serious mistake, it’s to say I would like to retract my paper and take it out of the literature because there’s something fundamentally wrong with it. The problem is that that’s very often perceived as a black mark for a scientist. Even if a scientist is really doing the right thing, saying, “Oops, I screwed up a little bit here. I want to tell the community and I want to take this out of the literature,” that’s often seen as a potential sign of fraud or misbehavior or something like that. So, scientists are very reluctant to do that unfortunately and that means a lot of papers in the literature that are problematic aren’t removed.
AA: This is a powerful reminder that scientists—when all is said and done—are people, like you or me! So, it really shouldn’t come as a surprise that mistakes happen and sometimes go undetected, ignored, or unreported.
BS: And to solve this problem, Richard explains that we need to first get rid of the stigma surrounding experimental mistakes. After all, without mistakes to learn from, how else can scientists improve?
RH: I think we have to recognize that error is part and parcel of the scientific process. We can’t pretend or we shouldn’t imagine that everything will be 100% perfect. In fact, I think if scientists strive for that, then they won’t be trying hard enough to push the frontiers … The question is can we shorten the cycle between understanding there’s an error and recognizing that—and getting the word out that actually we have a deeper understanding and that turned out not to be correct and so on.
AA: That’s a wonderful point he’s making, and it reminds me of a recent conversation I had with a biologist right here at CSHL. He told me a story that shows how learning from those kinds of “errors”—the ones that arise from the unknown unknowns at the frontier of discovery—can help drive science forward. Reproducibility, after all, isn’t as black and white as your conversations about retractions may make it seem.
[MT]
AA: That scientist’s name is Jason Sheltzer. He’s a CSHL Fellow. And he ended up in the middle of this whole reproducibility issue when he accidentally discovered that the target for a cancer drug that’s in clinical trials… well, that drug target is actually not involved in tumor growth at all.
BS: Uh-oh. And it’s in clinical trials, so that means actual cancer patients are receiving this drug.
AA: Yes.
BS: What went wrong?
AA: Well, ideally scientists would have figured this out earlier of course, so you could say something went wrong in that sense—and we’ll get to that. But when I talked to Jason, this is what he said about the role of contradictory results like these in science.
JS: I think that finding contradictory results, and then understanding why you found a contradictory result — is a very important scientific endeavor.
BS: Oh, OK. So, we’re talking about contradictory results here. Like when you made apple pie instead of meatballs at the top of the episode. That was quite contradictory.
AA: Right, but I picked this story in particular because it shows how complicated this reproducibility thing actually gets. In fact, up until Jason made his accidental discovery, it was as if everyone thought apple pie WAS meatballs… But I’m getting ahead of myself.
[MT]
AA: Jason and his team just published their second paper about this, in February, but they first reported results that invalidate the cancer drug target, called MELK (that’s M-E-L-K), about a year ago.
BS: And MELK is a gene?
AA: Yes, MELK is a gene that has the instructions for building the MELK protein. The protein is actually the part that the drug was supposed to be targeting. And when Jason’s team started those experiments, they weren’t even trying to learn about MELK, because they thought what the other scientists thought: that cancer cells are addicted to MELK and therefore getting rid of MELK makes it impossible for them to thrive.
BS: Or in other words, that our apple pie recipe makes meatballs.
AA: That’s a bit of a simplification, but yes. It’s a lot like that.
JS: There are a number of different genes that cancer cells express, which they depend on, which they are addicted to in order to grow and divide and metastasize, and do all the terrible things that they do. Sometimes when you can mutate or block the function of these cancer addictions, you can kill the cancer cells.
BS: And I’m guessing that’s what researchers thought this cancer drug did. They thought it killed cancer cells by blocking MELK.
AA: They thought so. Actually, Jason and his team were so confident that MELK was an addiction for cancer and therefore a good cancer drug target that they used it to kind of standardize their experiment, as a point of comparison.
BS: A control.
AA: Exactly. They were setting up this big screen where they would delete various genes in cancer cells—get rid of those genes entirely—and then see which genes cancer cells could live without and which ones they were totally addicted to. And when you’re designing an experiment like that…
JS: …you want, as controls, to be able to target something that is a known addiction and one of the controls that we chose for our work was this gene called MELK, which had been published to be an addiction of breast cancer. However — it didn’t behave like a cancer cell addiction, and we could mutate this gene in breast cancer cells and they didn’t seem to care at all.
BS: That must have been confusing. Hadn’t the earlier MELK results been reproduced before? They must have if the supposed MELK-targeting drug was already in clinical trials.
AA: Many different groups had independently reproduced the MELK results. Since 2005, more than 30 papers have reported results that implicate MELK as a cancer drug target. Like I mentioned before, there’s more to this reproducibility issue than simply repeating experiments and getting the same results.
JS: In biology people often talk about technical reproducibility and conceptual reproducibility. And technical reproducibility, I think means — doing everything step by step in an exact same manner and then coming out with the same results — and that’s of course very, very important for the biological literature. But one step beyond that is conceptual reproducibility, which is taking a concept or a conclusion demonstrated by an experiment and then showing that you can come to the same conclusion using a different approach.
AA: And getting to conceptual reproducibility by using different approaches to answer the same question is important, because repeating the same experiment over and over again can only get you so far.
JS: With technical reproducibility, if there is some flaw in the technique, if you use a chemical that’s not specific or if there’s some error in the protocol, well if you do the same protocol ten times the exact same way with the exact same error, you’re gonna get the same result each time but that doesn’t mean your conclusion is correct.
AA: In fact, the scientists who did this MELK research did test its effectiveness as a drug target with two different methods, so they did achieve a level of conceptual reproducibility.
JS: But sometimes in science you can answer a question using two different techniques and get the same answer but you pull a third in and the third gives you a different result, and science has to be internally consistent, and in this case it wasn’t.
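(For anyone who thinks in code, here’s a toy sketch of the distinction Jason is drawing. The numbers and the “assays” below are invented purely for illustration; they have nothing to do with the actual MELK experiments.)

```python
import random

TRUE_EFFECT = 0.0  # ground truth in this toy example: no real effect

def flawed_assay() -> float:
    """A protocol with a hidden systematic error (think of an
    off-target effect) that adds a constant bias, plus ordinary noise."""
    bias = 2.0
    return TRUE_EFFECT + bias + random.gauss(0, 0.1)

def independent_assay() -> float:
    """A different technique that doesn't share that particular flaw."""
    return TRUE_EFFECT + random.gauss(0, 0.1)

# Technical reproducibility: rerunning the same flawed protocol
# gives nearly the same (wrong) answer every single time.
print([round(flawed_assay(), 2) for _ in range(10)])   # ~2.0, ten times

# Only an independent method disagrees and exposes the hidden flaw —
# the internal inconsistency Jason is describing.
print(round(independent_assay(), 2))                   # ~0.0
```

The flawed assay is perfectly reproducible in the technical sense, yet its conclusion is wrong; it takes a third, independent approach to reveal that the literature was internally inconsistent.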
BS: Ok, then what did Jason’s team do differently from the scientists who had done all of that earlier research showing that MELK is a promising drug target?
AA: CRISPR.
BS: Ah, that is new—relatively at least. We’ve talked about this tool called CRISPR in a couple of our previous episodes because it has had an enormous impact on biological research in the few years since it’s become widely available. CRISPR—that’s C-R-I-S-P-R—is a gene editing tool that enables scientists to make changes to the genome more precisely than ever before.
AA: Which is great! But that also means the best technology that scientists had at their disposal before CRISPR was not as precise. That doesn’t mean that the older technology was useless—far from it. Jason told me about the pre-CRISPR technology that scientists used in the earlier MELK research.
JS: As a cancer researcher, we try to investigate cancer genomes in different ways and some of the previous ways that have been very popular and in many cases very, very effective have involved a technique called RNA interference.
[MT: explainer]
AA: The whole idea of RNA interference was a really big deal when scientists discovered it in the late 1990s. Earlier on, the thinking was that RNA was little more than a messenger for DNA, the molecule that carries the entire genome. But RNA interference showed that RNA can also help control how genes are used. Small RNAs can “interfere” with the process of making proteins from particular genes, and they do that by binding to the messenger RNA copies of those genes and silencing them.
BS: It basically turns the volume on a gene down, and that’s a really useful way to learn about what a gene does. It’s a way of learning by subtraction, you might say: when one element and only one is altered, is there any difference in the organism or cell? Scientists—including Professor Greg Hannon, who was then at CSHL and is now at the Cancer Research UK Cambridge Institute—figured out a way to tap into the cell’s RNA interference system and target specific genes they were interested in. That way, they could see what cells do without that gene.
AA: Super useful. Learned a lot with it. But—
JS: Unfortunately, it also has off-target effects in some cases. And you can try to block the expression of one gene, and you end up blocking the expression of another.
BS: Off-target effects are exactly what they sound like, and they can really throw off an experiment. It can be very hard to draw the right conclusion when you change more than one thing at the same time, especially when you don’t even realize it’s happening.
AA: CRISPR produced such a different result because you can target a gene much more precisely.
JS: With CRISPR, one thing that we were able to do is we were able to generate cancer cells that totally lacked MELK expression. They had a deletion in part of the genome where MELK is encoded, so they have no MELK left whatsoever. So if you have a drug that targets MELK, and then you take a cell line that has no MELK, you would expect that cell to be resistant to that drug. We found exactly the opposite. The cells, which were MELK knockout, which totally lacked MELK expression, still remained totally sensitive to the MELK inhibitor that’s being given to cancer patients.
BS: Oh, that’s a relief! The drug still killed the cancer cells, just not the way that scientists thought it does.
AA: Right. Cancer patients may still benefit from the drug, even if no one knows exactly why. All we know now is that whatever it DOES, it doesn’t do it by targeting MELK! In any case, Jason has reached out to the physicians involved in that clinical trial about his team’s MELK findings, and has been in touch with some of them via email.
BS: Well, cells that don’t have MELK at all definitely shouldn’t respond to a drug that targets MELK. That sounds like compelling evidence. But, if Jason and his team already invalidated MELK as a cancer drug target, what is this new paper about?
AA: Even though the first paper was pretty strong evidence that MELK was not a cancer drug target, they were still skeptical.
JS: There were a number of caveats and limitations to the work that we did.
AA: They had still only looked at cancer cells in a dish, not an actual organism.
BS: Experiments done on cells in a dish are really useful, but sometimes cells behave very differently when they’re part of a full, living body.
AA: That was the logical next step to see if their conclusion held up.
JS: We did a number of additional — screens — including what’s called in vivo work, doing experiments in mice instead of just in a Petri dish, where we continued to look at MELK. And our additional experiments largely recapitulated our initial observations, which are that we can delete MELK — and the cancer cells, unfortunately, continued to divide.
BS: It seems like a moment that might have at least been bittersweet, not just unfortunate. After all, their results suggested that they were right about MELK! But they didn’t really want to be right about this.
AA: Yeah, Jason was not excited to be right about the conclusions from earlier experiments being wrong because…
JS: …well, because the more drug targets you have in breast cancer, I think the better it is for breast cancer patients.
AA: But being a scientist means you have to go with what the evidence tells you. That’s what the scientific process is all about, and the scientific process is really what science is. Scientists like Jason want to find ways to stop cancer, but they have to make decisions based on evidence, not what they want to happen.
JN: Showing people how evidence-based thinking works with real experiences and real stories I think is important.
BS: That sounds like Jackie Novatt! And… coins clinking? Where was she?
AA: I caught up with her over tea recently here at Blackford Bar on campus, and I had the recorder on while we talked—that’s why you heard coins in the register in the background. She was a researcher here at CSHL until a little over a year ago, and now she’s pursuing teaching at Long Island University’s Pharmacy School. As I’ve been learning about this MELK research story, I keep thinking back to this one part of my conversation with Jackie. She was telling me about her experiences leading tours of the CSHL campus and telling people about the work that scientists do.
JN: I found it important to tell people about the failed experiments too, because that’s not something that you hear a lot about. And I’m sure—I don’t know if you’ve had this taxi driver, but there was one at Rockefeller, there was one at my grad school, and there was one here, where you get the taxi driver that hears you’re going to the Lab and then berates you for sitting on the cure for cancer and then hiding it because we all want money and we want to control the world.
AA [in recording]: I knew this story before you even told it, because I’ve had the same experience.
JN: We’ve all had that experience. And the thing is, people truly believe that because we’ve been fighting the war on cancer for a long time and a lot of money has gone into it, and why the heck don’t we have a cure yet? And the reason is, it’s really hard and it’s really complicated and a lot of experiments fail. And if we only communicate that A leads to B leads to C leads to this beautiful conclusion, then why the heck haven’t we cured cancer yet? So, I think it’s really important to communicate the failures as well so that people see science as a process, not as an endpoint.
BS: It’s heartbreaking to hear this kind of misconception about the power of science.
AA: It really is, because the root of it is the belief that science is powerful, which is true. But if you are a busy person who is just catching the headlines, you could get misled about what the power of science is—where it really comes from.
JS: Lots of scientific discoveries get boiled down to: oh, this is a cure for Alzheimer’s, oh, this is a cure for cancer, oh, this is a cure for heart disease. But in many instances what’s actually been discovered in the lab is insight into a biological process, is the discovery of a gene that might be important in a particular disease, the finding that a drug in a cell line model or in a mouse model has a moderately beneficial effect. But oftentimes, in the translation from what was actually discovered in the laboratory to how it’s reported in, say, the newspaper or on a website, you can lose a lot of the detail and you can lose a lot of the subtlety.
BS: Those headlines can make it sound like the science is done, or we’ve reached the “endpoint,” as Jackie put it. But in reality, science reveals answers bit by bit. We always need more because that’s the only way that science can self-correct, like it did with the research on MELK.
AA: Exactly. Now, scientists know that the secret behind that drug’s ability to kill cancer cells is not MELK, but something else. Understanding what actually allows the drug to kill cancer cells is really valuable knowledge, because it helps researchers design related drugs or fine-tune existing ones. [p] This story shows why scientists have to remain skeptical. Even when science brings us exciting things, like new potential treatments for cancer, there is always more to learn.
IO: When science works, it is absolutely, there’s no question, it’s the best way to understand the world…
AA: That’s Ivan Oransky of Retraction Watch, from the top of the show.
IO: But I will also challenge those aspects of the scientific endeavor—the human endeavor which is science—I will challenge that to be as good as I know that everyone wants it to be.
BS: And we wouldn’t have it any other way! … But what does Richard Harris think about all this? After all, his book “Rigor Mortis” dives into many other causes of error and irreproducibility that we didn’t get to explore in this episode.
RH: Science is a matter of trial and error. We learn a little bit and we make an observation. We do our best to interpret those observations but then when we get more information or deeper insights or better tools, we realize, you know, we didn’t quite understand everything as thoroughly as we thought and so we improve our knowledge and our understanding of science.
—
BS: That’s all, folks! Thanks again to Richard and Ivan.
AA: And thanks to Jason and Jackie! Musicians in this episode include Broke For Free, Podington Bear, Lee Rosevere, Ketsa, the United States Army Old Guard Fife and Drum Corps, and—as always—the Blue Dot Sessions.
BS: We’ll be back next month with another new episode, but in the meantime, we’d love it if you’d review us on iTunes and tell us what you think of the show!
AA: We’re coming to you from Cold Spring Harbor Laboratory: a private, not-for-profit institution at the forefront of molecular biology and genetics. If you’d like to support the research that goes on here, you can find out how to do that at CSHL.edu. And while you’re there, you can check out our newsstand, which showcases our videos, photos, interactive stories, and more.
BS: And if that’s not enough, you can always pay us a visit! Between our Undergraduate Research Program, high school partnership, graduate school, meetings & courses, and public events, there really is something for everyone.
AA: I’m Andrea.
BS: And I’m Brian.
AA: And this is Base Pairs. More science stories soon!
Written by: Sara Roncero-Menendez, Media Strategist | publicaffairs@cshl.edu | 516-367-8455