
Fraud and Imagination in Science

In early 1989, Cold Spring Harbor Laboratory’s Banbury Center held a workshop about the process of scientific research. Banbury meetings involve a small number of attendees and are intended to be intense, informal and off the record in order to facilitate discussion.

What about the process of scientific research had become so urgent that it required such a discussion? To understand the background for this meeting, we need to step back and consider a series of factors specific to the 1980s that some scientists and policy-makers thought pointed to a profound crisis in American science.

The participants at this Banbury meeting addressed several interconnected problems. Looming in the background was a series of scientific misconduct cases that were spectacular enough to have made national news. One of these involved a researcher affiliated with Harvard Medical School, John Darsee, who was accused of having falsified the data behind a long list of published papers in the early 1980s. An investigation by the NIH had found him guilty. More than this, the institution for which he was working at the time had had to repay the federal research money that had supported the work — a shot across the bow for other institutions if ever there was one. Alongside this case was another more complicated one, which we’ll get to in a minute.

The second big problem was related to the economic structure of American science. Federal funding was crucial for scientific research. As a result, it was clear that whatever measures the scientific community or individual institutions took to deal with scientific misconduct would have to interface somehow with official structures of accountability designed to prevent the misuse of public funds. How should that interface function? For one thing, in order to judge misconduct, you have to understand what the day-to-day process of doing science looks like. At the time this meeting took place, there was evidence to suggest that plenty of people in Congress were making faulty assumptions about the process of doing science, assumptions which threatened to turn into bad federal science policy.

As a result, the discussion at Banbury was heated. To understand better what was going on here, and why many scientists went into the meeting with their hackles up, it’s worth looking at the scientific misconduct case that we know was on a lot of people’s minds at the time, especially since several of the key players were there at that Banbury meeting.

The Cell paper

The story began with a paper published in Cell in 1986. One of the six coauthors was Nobel laureate David Baltimore, and it was his name that became attached publicly to the case. But the “Baltimore affair” or “David Baltimore fraud case” centered on work performed not by Baltimore himself but by his co-author, immunologist Thereza Imanishi-Kari.

A review of this case reveals two things. One, the way that science works, both in terms of how research is carried out and evaluated and in terms of the culture of science — all of this makes cases of misconduct or fraud extremely difficult to grapple with. This statement is not intended as a criticism of science or scientists. It’s simply an observation that a system that has to presume trust and collegiality in order to function is not set up to pursue fraud quickly and easily. It can fulfill this function, but the process is sometimes roundabout and difficult. Two, at the time that this series of events unfolded, it was not clear whose job it was to police instances of scientific misconduct. Who should be tasked with the responsibility of saying the word “fraud” out loud? And once it had been said, who was better placed to direct an investigation, someone who was closely involved in the work, or someone at a distance? Could institutions police such things internally? What was the role of expertise, in the specific field in question or in general, in grappling with a case of suspected scientific misconduct?

But before we get into all of that: what happened?

In the spring of 1986, a paper by a biomedical scientist at M.I.T., Thereza Imanishi-Kari, and five other co-authors appeared in Cell. Imanishi-Kari was already attempting to confirm and extend the work detailed in the paper. To this end she had hired a post-doc, Margot O’Toole, to do further experiments.

O’Toole was unable to replicate some of the paper’s findings. She came to the conclusion that Imanishi-Kari had made some errors. Historian of science Daniel Kevles, who interviewed O’Toole, says that at this point O’Toole thought that the problems she was encountering with the experiments were the result of mistakes on Imanishi-Kari’s part, or perhaps self-deception, seeing something in the data that she wanted to see but which was not there. (For Kevles’s take on the story, see his New Yorker article, “The Assault on David Baltimore,” May 27, 1996, pp. 94-109, or his book, The Baltimore Case: A Trial of Politics, Science and Character (New York: Norton, 1998). The basic sequence of events recounted here is drawn from Kevles’s account.)

It’s worth emphasizing that at least initially, O’Toole thought that it was not her call to make whether the irregularities she had found added up to fraud or not. Moving from a suspicion that all is not quite right with a piece of scientific work to a concrete accusation of fraud involves navigating an enormous area of uncertainty. You don’t want to accuse anyone unfairly, you don’t want to damage your own career, and you don’t want to look like a crank. Multiple aspects of the collegial culture of science hold you back. More than this, at the time there was no well-defined path through this uncertainty, mostly because scientists assumed, then as now, that such things are rare, and no one wants a system of scientific work in which fraud and dishonesty are expected to happen at some rate and due diligence requires you to suspect your colleagues of falsifying data.

The question of whose call this was, whose job it was to move the conversation across that line, would come up again and again over the next few years. Many other scientists said the same thing as O’Toole, that they didn’t think it was their job to make the call about whether this was fraud or not, or that they weren’t sure whose job it was. That uncertainty was one of the reasons that whatever it was that had or hadn’t happened in Imanishi-Kari’s lab became something so difficult to deal with.

But back to the story. Did O’Toole attempt to address the issue directly with Imanishi-Kari? Yes and no. O’Toole told her story many times to many different people in many different fora. Imanishi-Kari did as well. In addition to their accounts, there are lab notebooks and other experimental data that outline what Imanishi-Kari and her postdocs and assistants did in the lab. What was done sets some boundaries for how the two scientists could have originally addressed the issue — what they would have discussed, and what they might have disagreed about. But there is a lot of wiggle-room within these boundaries.

As O’Toole told the story later, she spoke to Imanishi-Kari and attempted to address specific problematic aspects of the data and the reagents that were being used, but the latter gave her the brush-off. More than that, O’Toole claimed at one point that her PI distinctly implied that she knew the reagent O’Toole was having difficulty with did not do what the published research claimed it did. O’Toole also said that Imanishi-Kari changed data in her postdoc’s presence, eliminating findings that didn’t support her claims.

Imanishi-Kari (according to historian Dan Kevles, who spoke to her at length), did not remember a meeting with O’Toole in which she corrected the postdoc’s data. Neither did she have any recollection of making any comments about the doubtful reagent that would have implied anything other than that she assumed that with more work or the right methods O’Toole would be able to get it to work as expected. The two communicated about the research, but it may well have been that when O’Toole brought up her concerns, the PI’s judgement was that there was no issue.

What, if anything, should scientists do under these circumstances? This is a tough question to answer. At a roundtable held about this case at Harvard in 1991, the participants, all distinguished biologists, went back and forth about questions of judgement, expertise, and responsibility. Wally Gilbert pointed out that in a case like this, it was entirely possible that the person bringing up the potential issue might not have the “sophistication” to see what “might be wrong about the work” (CSHL Archives, Tom Maniatis Collection, Box 4, Folder 8, “Roundtable of 4 June 1991,” page 8). Herman Eisen, an MIT immunology professor who was asked to review O’Toole’s summary of her concerns based on both his scientific expertise and his reputation for being a good mediator (Kevles, The Baltimore Case, 85), said that he thought that this was the kind of problem you might encounter when reviewing a paper for publication, and had he received O’Toole’s statement as a commentary on a paper he was reviewing, he would have sent the paper back to the authors to address the issues O’Toole raised (“Roundtable,” 19, 40). In other words, there was an argument to be made that the scientific process as it already existed contained appropriate mechanisms for addressing this kind of problem.

At the same time, there was also an argument to be made for stepping back from that familiar and comfortable process. Wally Gilbert insisted multiple times that it was necessary to distinguish between the data and the interpretation of it, and he thought that Eisen, and MIT, had made a mistake in addressing the Cell paper on that second level. He thought that the key issue was “is what is published in the paper a true reflection of what somebody actually saw in the lab independent of the interpretation of it” (9). In other words, there was the data, and there was the science, i.e. the interpretation of the data, and the issue in this particular case was not whether the science was wrong, or partially wrong, or whatever it might have been, but rather whether the data existed in the form that Imanishi-Kari claimed it did.

Finally, there was the question of what line had to be crossed in order to say that someone had committed fraud. Both Mark Ptashne and Herman Eisen mentioned the impossibility of seeing into another person’s mind (44, 47). The term ‘fraud’ suggests an argument about mindset, that it’s necessary to know the intentions of the suspected fraudster. The term ‘scientific misconduct,’ on the other hand, might be defined in a way that one could determine, based on actions and records, whether it had occurred or not, without seeing into anyone’s mind.

As the participants of this 1991 roundtable tried to come up with a summary of their discussion, Herman Eisen returned to Wally Gilbert’s comment that it was a mistake for him (Eisen) and MIT to have initially addressed the problem on the level of scientific judgement and interpretation, that “I should have been more sensitive,” and realized somehow that this was the wrong level at which to attack the problem (61). Maybe he was right, maybe not. But his comments about how MIT and the immunologists familiar with the work approached the problem point us back to an important moment in the story, a meeting in May of 1986 at which some of Imanishi-Kari’s colleagues attempted to address the problem in precisely the way that Eisen had also assumed it should be addressed.

Three specialists in immunology, Brigitte Huber, Henry Wortis (both from Tufts) and Robert Woodland (University of Massachusetts Medical Center) got together in Imanishi-Kari’s lab at MIT and discussed the data. They concluded that O’Toole was correct in that the effects of the reagent she was having trouble with were in fact overstated, but that given the reagent’s role in the experiment, Imanishi-Kari had no reason to artificially embellish its effects in this way, even had she wanted to spruce up her results. There were one or two other minor problems with the paper, but the three immunologists concluded that these were not sufficient to undercut its central claim. O’Toole disagreed. By this time, she had decided that the problems she saw in the Cell paper warranted a published correction.

The turning point in the story came not long after this with the involvement of two other people, whose personal crusade against fraud in science transformed what most of the participants still understood to be a disagreement about the interpretation of data into an object lesson in the danger of assuming that there is little to no gray area for interpretation, judgement and imagination in the evaluation of scientific data.

O’Toole spoke about the conflict to a man named Charles Maplethorpe, a disgruntled former PhD student of Imanishi-Kari’s. O’Toole intended for this conversation to be confidential, but Maplethorpe spoke about it to an NIH scientist named Ned Feder, who along with a colleague, Walter Stewart, formed a duo with a self-assigned mission to bust scientific fraudsters. In the past, the two had uncovered some genuine cases of problematic data. Their actions as they pursued the Imanishi-Kari case, however, would not always place them in a favorable light. When they heard via Maplethorpe about O’Toole’s concerns, Feder and Stewart smelled blood and more or less badgered O’Toole into providing the data and other materials in her possession. Convinced that the work was as fishy as O’Toole claimed, they contacted the authors of the Cell paper, and the situation quickly mushroomed from there.

It was Feder and Stewart’s involvement that pushed the case into the public eye. Both men were scientists by training, but their approach to the case involved casting it as a conflict between a secretive, hidebound scientific community that was not capable of policing itself and a scrappy team of underdogs, including themselves and Margot O’Toole, who were out to set the record straight. At times, their actions made sense to the scientists involved in the dispute. David Baltimore, one of the co-authors of the Cell paper, considered Feder and Stewart obnoxious and wrong-headed, but he supported one idea of theirs, which was to have the NIH set up an informal investigation by immunologists who were in a position to understand the research. But at other times, the two refused to play ball, or at least refused to play the same game that the NIH and many scientists wanted to play. The NIH approved of the suggestion to investigate, but the agency also asked Feder and Stewart to hold off on publishing an analysis they had written of the Cell paper until the case could be resolved. Rather than falling into line with the NIH — an agency they both worked for — Feder and Stewart decided at this point to enlist the support of the ACLU and began giving talks about the controversy on college campuses. (Cell and Science both rejected their analysis of the paper.)

In early 1988, the duo’s path crossed with that of another crusader, Representative John D. Dingell, a Democrat from Michigan. Dingell was the head of the Energy and Commerce Committee, the congressional committee that, among other responsibilities, oversaw the budget of the NIH. Like Feder and Stewart, Dingell was on a mission, and this was not necessarily a bad thing as far as the public interest was concerned. His accomplishments included pursuing “extravagant defense contractors, corrupt bureaucrats and illegal influence peddlers” who “he thought were wrongly benefitting from their access to taxpayers’ money” (Kevles, “Assault,” 101). At the same time, this sense of mission meant that when Dingell heard about Margot O’Toole, he and his aides were very much inclined to believe that they had uncovered a case of scientific fraud being covered up at a research institution using public funds — finding such a case had been on Dingell’s priorities list even before Stewart and Feder got in touch with members of his staff.

The resulting congressional hearings about the case unintentionally highlighted an important question. If a scientist is suspected of fraud, what is the best way to determine whether fraud has or has not occurred? More than that, what is fraud in science? There are cases that are very cut and dried, such as the John Darsee case in the early 1980s. Darsee’s preternatural productivity had moved some of his junior colleagues to secretly observe him at work, and they saw him create data from scratch (Kevles, The Baltimore Case, 110). Even easier to spot was the case of William Summerlin in the 1970s, who had colored dark patches on his experimental mice with a marker to mimic the skin grafts he was attempting to carry out (Wikipedia, William Summerlin). But such spectacular cases are the exception rather than the rule. Scientific misconduct, when it occurs, is typically more subtle than that, and is often difficult to distinguish from someone simply being wrong. Stewart himself had first-hand experience with this gray area. In the early 1970s, he had peer reviewed a paper by two scientists whose work was at the time receiving significant scientific and press attention. He determined that the experiments that the authors described did not support the claims they were making, and Nature published his conclusions alongside the paper (Kevles, The Baltimore Case, 99-100). But no one, in this case, was accused of misconduct. Scientists make mistakes, including errors of judgement. How mistaken do you have to be, or how much motivated reasoning do you have to engage in, to cross the line into misconduct?

Related to the question of how we define fraud is the question of how science produces knowledge. David Baltimore thought that Dingell and his staff had a faulty understanding of how science worked. They assumed that science produces knowledge almost mechanically — that scientific data points to obvious and unambiguous interpretations, and thus fraud and falsification are easy to recognize (Kevles, “Assault,” 102).

From David Baltimore’s point of view, the questions O’Toole raised about Imanishi-Kari’s experimental data were questions of interpretation, a normal scientific disagreement that would have been best resolved through further research. Congressional hearings were not a method of truth-finding suitable for this particular problem. More than this, as the dispute and the related investigations stretched out over nearly a decade, Baltimore repeatedly stated publicly that he considered this kind of inquisitorial behavior a threat to American science. Feder and Stewart’s style rubbed him the wrong way. Initially, Imanishi-Kari was required to defend herself in hearings in which she was not given access to the evidence and was not allowed to cross-examine those testifying against her; she had to prove her innocence rather than the other side having to prove her guilt. In addition, she was an immigrant to the US whose English was not always sufficient for her to be able to explain herself well. For Baltimore, the whole situation evoked shades of McCarthy.

Feder, Stewart and Dingell’s proceedings did at times approach the absurd. They involved the Secret Service in the analysis of Imanishi-Kari’s lab data, even though the expertise of Secret Service data analysts was not suited to the evaluation of the type of material that life-sciences research produced, and the resulting analysis bordered on the ludicrous. More unsettlingly, during the Banbury meeting here at Cold Spring Harbor Laboratory held to discuss the issue of scientific misconduct, Stewart compared those who let fraudulent science slide or did nothing to defend whistleblowers to those Germans who had stood by and allowed the Holocaust to happen.

In the end, after years of hearings and appeals, it was found in 1996 that the available evidence did not support a charge of fraud on Imanishi-Kari’s part. Her data was typical scientific data, complex and not always perfectly consistent, and as the president of one of the investigating panels pointed out, the data alleged to have been fabricated included plenty of data that didn’t support Imanishi-Kari’s argument. If she was going to fabricate data, the investigators asked, wouldn’t she have fabricated the data that she wanted to see? (Kevles, “Assault,” 108). She might have been sloppy in her data collection, and she might have made analytical choices that other reasonable people found questionable, but this type of thing alone was not enough to sustain a charge of deliberate falsification. At the same time, there are a number of well-respected scientists who continue to believe she was guilty of misconduct and that David Baltimore’s conduct during the investigation, including his defense of his co-author, was profoundly misguided.

The dispute over the data behind Imanishi-Kari et al.’s Cell paper offers a window into two issues crucial to how we understand the life sciences, and science in general, in an age of big science, federal funding, and intense competition.

First, it is hard to create a structure for the speedy, impartial and rigorous pursuit of possible scientific misconduct that meshes easily with the structures of collegiality and professional friendship that make science work. Scientists understand science as (most of the time) a community, in which openness and collegiality play an important role in the success of any research project. Hand in hand with this comes a reluctance to suspect fraud on the part of one’s colleagues. This is necessary, in a way — you can’t conduct research constantly suspecting everyone around you of being a liar. We know that this culture of openness and collegiality is something that many scientists consider central to the success of science; anyone who was involved in biotechnology and commercialization in the 1980s and 1990s, for example, will be able to tell you about how much of the opposition to these things was rooted in a deep fear that commercial ties would destroy specifically this key aspect of scientific culture.

Second, there is the elephant that has always been in the room, in one way or another: the tension between the expert judgement that has always been a part of science and an idea that has, for some people, become synonymous with the word ‘science’: objectivity.

Objectivity itself has a history. Only in the 1800s did two aspects of this concept emerge that today seem so natural and self-evident that we rarely question them. The first is that scientific objectivity stands in opposition to concepts like creativity and imagination. The second is that objectivity requires the removal, as much as possible, of the person and perspective of the investigator, and the more these two things are flattened away, the more objective, and thus reliable, the results become (for more on this see the work of Lorraine Daston and Peter Galison, referenced below). This is the root of the tendency in scientific writing to avoid constructions such as “I/we did this experiment” in favor of “this experiment was done” even though the audience for the paper knows full well that someone designed and performed the experiment. This is also perhaps behind the appeal of using AI for various intellectual tasks that used to be performed by humans. It gives the (false) sense that no one, really, is doing whatever work the AI is tasked with, and thus the results are in some way more objective and thus better.

Seen from this perspective, scientific misconduct is the opposite of this understanding of objectivity. Rather than removing yourself as much as possible from the data and the analysis, you are inserting yourself. In the extreme case of a scientist inventing data out of whole cloth, like the abovementioned John Darsee in the early 1980s, the data is coming from you and not from the world — in a sense, you have become an artist, not a scientist.

But this opposition between fraud and objectivity is in tension with the very real need for creativity and imagination in science. Despite the power of the familiar idea of science as depersonalized, objective and almost mechanical in its production of knowledge, science has always been a creative enterprise, involving guesswork, insight, intuition and judgement, bringing it closer to art than the non-scientific public understands. In this sense, the most cutting-edge and adventurous science, the kind that requires both creativity and judgement, is closer to fraud, not further away from it, because it requires more of the individual mind of the investigator. This is likely one of the reasons that scientific misconduct is at once highly threatening and difficult to police.

The Banbury Meeting

Let’s return briefly to the meeting at CSHL’s Banbury Center in January 1989.

This meeting took place while the controversy surrounding the now-infamous Cell paper was still ongoing. That controversy, and previous scandals involving scientific fraud, had raised a long list of difficult questions for scientists and anyone interested in science policy. Was there a problem with fraud in science? Molecular biologist and founder of Cell Benjamin Lewin noted in an editorial about the Banbury meeting that “the perception that fraud is a major problem in biomedical science has taken hold in political circles,” (Lewin, “Travels on the Fraud Circuit,” 513) and indeed that talking about scientific fraud was turning into a kind of industry. This did not mean, however, that all questions about whether and how often scientific fraud occurred had been answered. Certainly scientists did not necessarily think that their ranks were full of fraudsters. And even if fraud was occurring on a meaningful level, was it possible to address the problem in such a way that the cure was not worse than the disease? Many of the proposed solutions, such as random audits, assigning journals or reviewers the responsibility of ferreting out falsified data, or, more fundamentally, assuming an attitude of guilty-until-proven-innocent with regard to colleagues’ work would make it difficult, if not nearly impossible, to conduct scientific research (Lewin, “Fraud and the Fabric of Science”).

Many scientists were skeptical that fraud was occurring at the levels that some in politics claimed. Lewin observed that scientists tended to react “apoplectically” to the idea that fraud was widespread (Lewin, “Fraud and the Fabric of Science”). Aside from what might have been a knee-jerk reaction, if an understandable one, to the suggestion that things were going on in their own labs and among their colleagues that they were perhaps willfully blind to — no one wants to be told that — it was certainly the case that scientists were reluctant to say the word “fraud.” O’Toole had spoken of errors and self-deception when criticizing Imanishi-Kari’s work, but had been very reluctant to go beyond that. Journalist Barbara Culliton, writing about the case in Science in 1988, emphasized this fact, and was one of many observers to note that it was the politicians and their allies who were eager to cross the line and start using words like “fraud” and “misconduct” (Culliton, “A bitter battle over error”).

This reluctance to cross that line may be rooted in the expertise that derives from the experience of doing science. Simply because something is not immediately reproducible does not mean that it is fraudulent (Lewin, “Fraud in Science,” 1). As Jan Witkowski, who organized the Banbury meeting, points out, “just because you can’t reproduce somebody else’s work doesn’t mean either that you are incompetent or that they fabricated” their data. It might well mean that “there may be some interesting biology” at work that you simply don’t understand yet (Jan Witkowski, interview with CSHL historian, 7 June 2024).

Mark Ptashne (center) and Richard Axel (right), courtesy of the Banbury Center
Related to this point is another one that many scientists made over and over: that being mistaken, or being wrong — not to mention heated disagreements over who is right and who is wrong — is a normal part of doing science. Both informal institutional reviews of the science behind the disputed Cell paper ended with the conclusion that this was a matter of data interpretation and not of misrepresentation (Culliton, “Bitter battle”). Peter Farnham of the American Society for Biochemistry and Molecular Biology noted in an editorial in FJ Public Affairs in 1988 that the congressional hearings had as yet produced “no specific evidence…about the paper that showed fraud or other misconduct,” and that after reading David Baltimore’s open letter to his colleagues on the matter, many of the latter had concluded that Congress was simply failing to distinguish between misconduct and typical scientific disagreement and debate (Farnham, “Recent Actions Related to Scientific Misconduct”). Non-scientists, in other words, were assuming that science produces truth in an entirely unambiguous way, and that someone having doubts about a result was a sign that something had gone wrong. Richard Axel of Columbia University commented after the Banbury meeting that “the government people didn’t have a good sense of how science works…They couldn’t come to grips with the fact there are no absolute truths…that our data reflect closer and closer approximations of what might turn out to be true” (quoted in William Booth, “A clash of cultures”).

The phrase “clash of cultures” was used over and over with regard to the conflict in general and the get-together at Banbury in January 1989 in particular. The formulation is dramatic, but it does point to one of the core goals of the meeting. As the meeting’s organizer, Jan Witkowski, put it in his correspondence with invitees, the “meeting is prompted by the current controversies in biological and clinical research over error/misconduct/fraud,” but in contrast to “previous meetings in this area,” which “seem to be dominated by lawyers, ethicists and sociologists of science,” this one would involve a higher proportion of scientists (Jan Witkowski to Allan Franklin, courtesy of the Banbury Center). Moreover, “the meeting is not entitled ‘Error and Fraud in Science,’ because that is incidental to the main purpose of the meeting. ‘The Process of Scientific Research’ is a better description because I want to try to show the congressional staffers how science is done…the way in which data are checked not by slavish replication, but by building new experiments on earlier data, how error is an inevitable feature of science, and so on…My reading of the reports of other meetings makes me think that at present the staffers have heard how sociologists, lawyers and bureaucrats think science is done!” (Witkowski to Efraim Racker, 3 November 1988, courtesy of the Banbury Center).

In other words, the goal of the meeting was neither to rehash the Imanishi-Kari case nor to focus solely on fraud. Rather, the goal was to take a step back and talk about how science is done. The meeting was to be made up mostly of scientists and congressional staffers, with a few journalists and others. If the congressional staffers had a more accurate understanding of how science is done, how scientists move from experimental data to claims about how the natural world works, these staffers would be better equipped to attack the problem that they wanted to attack, which was fraudulent science that misused government money.

Did it work?

Banbury meetings are distinct from most scientific or other academic conferences. They are small and informal, with the goal of encouraging a lively exchange of ideas among the participants. This meeting in particular was part of a series of Banbury meetings funded by the Sloan Foundation with the intention of bringing together scientists and science journalists and congressional staffers, i.e. scientists and people with a significant interest in science but perhaps no formal scientific background (Witkowski interview).

Jim Watson (left) and Walter Stewart (right), courtesy of the Banbury Center
Norton Zinder, who was among the participants, described the meeting as “a hats-off, hair down kind of meeting” (Booth, “Clash of Cultures”). Jan Witkowski comments that “it got quite heated.” William Booth, writing for Science, described participants’ impressions of two very different communication styles, with one participant noting that “scientists are more comfortable yelling at each other.” Fraud-buster Walter Stewart was there, and as perhaps was to be expected, won no allies among the scientists. Richard Axel of Columbia commented to Booth that Stewart felt so strongly about the problem of fraud that he had lost perspective and as a result “has no real sense of the problem or how to deal with it.” Stewart demonstrated this very lack of perspective by comparing what he saw as scientists’ refusal to acknowledge the extent of the fraud problem to the willful ignorance of Germans who had looked the other way during the Holocaust (Booth, “Clash of Cultures”). Looking back, Jan Witkowski suspects that the scientists hadn’t “quite grasped what was going on,” that there was a long list of other interests that had to be considered in the process of doing science. They also “thought the congressional staff were ignorant.” The core of the scientists’ frustration was that the staffers appeared to believe that “science is very much a sort of mechanical process, you’ve got a problem, you devise experiments to tackle it,” and the data will automatically tell you what the truth is, whereas in reality it’s very different. At the same time, there may have been some unacknowledged reluctance on the part of scientists to ask hard questions about scientific misconduct, since this would involve “admitting that the structure of science isn’t quite as lily white as one would like it to be” (Witkowski interview). Benjamin Lewin, writing for Cell a few months after the meeting, emphasized that “none of the scientists was dismissive of the problem” (Lewin, “Travels on the Fraud Circuit”). William Booth’s conversations with participants after the meeting included a talk with Walter Gilbert, who described fraud as “a small but very real problem.”

Wally Gilbert (image courtesy of the Banbury Center)
The Banbury meeting did not cause congressional staffers to develop a radically more insightful perspective on science, and neither did it produce any sudden conviction among scientists that fraud and misconduct were rampant. Instead, the biggest achievement of the meeting was probably to convince some scientists that it would be wise to distinguish between, on the one hand, the precise nature and extent of scientific misconduct, which one could debate endlessly, and on the other, the seriousness of the political perception of such things. Universities and other research institutions needed to set up effective procedures to deal with fraud before such things were imposed on them from the outside.

The argument over policing fraud is reminiscent of the debate over regulation of recombinant DNA experiments in the 1970s. Scientists were up in arms about potential federal regulation and what they saw as a threat to intellectual freedom, while some politicians and activists were concerned that there was a threat that scientists were refusing to acknowledge. Interestingly enough, genetic engineering provided part of the context, although in a different way, for changing public and scientific perceptions about the falsification of research results. During congressional hearings about scientific misconduct, one participant raised the concern that increased commercial presence on university campuses would create incentives to falsify research for financial gain (Farnham, “Recent Actions”).

Today, more than thirty years later, all research institutions have formal structures in place to address scientific misconduct. The federal government has an agency, the Office of Research Integrity (See the agency’s website: ORI and Wikipedia: Office of Research Integrity), dedicated to the problem. The 1980s were a key period of change as far as this development is concerned. The combined effects of spectacular and difficult fraud cases, questions about the role of government, and meetings like the one at Banbury all worked together to radically change the landscape. Today, a researcher in the same position as Margot O’Toole was in the mid-1980s when she began to question her supervisor’s data would have a clear procedure to follow — in 2024, a similar case would likely unfold very differently.

Finally:

Curious about how scientists move from data to insight to published papers? Wondering about the role of creativity and judgement in science? Our archival collections at the CSHL Library and Archives contain a wealth of material on these topics, including original laboratory notebooks and other research materials from a long list of life science researchers. Find out more via our Guide to the Collections or contact us.

References

Archival Materials:

CSHL Archives Tom Maniatis Collection
CSHL Banbury Center records (digital and physical) provided by the Banbury Center
Interview of Jan Witkowski by Antoinette Sutto, 7 June 2024

Published Materials:

Booth, William. “A Clash of Cultures at Meeting on Misconduct.” Science 243, no. 4891 (February 3, 1989): 598.
Culliton, Barbara J. “A Bitter Battle Over Error.” Science 240, no. 4860 (June 24, 1988): 1720–23.
Daston, Lorraine. “Fear and Loathing of the Imagination in Science.” Daedalus 127, no. 1 (Winter 1998): 73–95.
Daston, Lorraine, and Peter Galison. Objectivity. New York: Zone, 2007.
Farnham, Peter. “Recent Actions Related to Scientific Misconduct.” FJ Public Affairs, November 1988.
Kevles, Daniel J. “The Assault on David Baltimore.” The New Yorker, May 27, 1996.
———. The Baltimore Case: A Trial of Politics, Science and Character. New York: Norton, 1998.
Lewin, Benjamin. “Fraud and the Fabric of Science.” Cell 57 (June 2, 1989): 699–700.
———. “Fraud in Science: The Burden of Proof.” Cell 48 (January 16, 1987): 1–2.
———. “Travels on the Fraud Circuit.” Cell 57 (May 19, 1989): 513–14.