Duke researchers discuss questionable research practices, and how to fix them, at DOSI town hall

Author: ASIST
The research town hall “Preserving the Trust in Science: From Regrettable Research Practices to Rigorous and Reproducible Research,” hosted by the Duke Office of Scientific Integrity on February 19, 2021, brought together over 240 Duke researchers. Below is a selection of the main discussion points. (Note: these are transcribed and slightly edited fragments of the event recording.)

 

Patrick Charbonneau, PhD, Professor of Chemistry and Physics, Trinity College of Arts and Sciences


On Questionable Research Practices When Working with Data

“The research that my group does involves pen-and-paper theory as well as data generated by computer codes that we write. In that context, the most prevalent questionable research practices that we typically encounter (assuming that we have properly designed the research, that we are actually capable of writing our computer codes properly, and that we can statistically sample our data sufficiently) are related to the following:

  • How did the data flow from the computer code we wrote to a figure in a paper? Was any copy and paste involved? Were there any misaligned or otherwise corrupted entries?

  • Given a model we're trying to assess with data, have we biased its analysis by choosing a particular subset of our data?

  • Over the longer term, how are we maintaining and documenting our (published) data?

Years ago, I approached the Duke Libraries to see if there was a way to deposit some of our data, so that it would be better organized, better documented, and better archived. Having the Data Repository teach us (and impose on us) good practices has given me immense peace of mind with respect to these three main concerns. In particular, the deposited data is now more trackable. If we made mistakes along the way, they're easier to track down, and they're honest mistakes: I put out there what I did, clearly. If others find fault in it, then we can discuss the matter openly.”
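Charbonneau's first concern, the copy-and-paste path from code output to published figure, is commonly eliminated by scripting the entire route from archived raw data to figure. Here is a minimal sketch in Python; the file name, column labels, and checksum step are illustrative assumptions, not necessarily his group's actual workflow:

```python
# plot_figure.py -- regenerate the paper figure directly from the archived
# raw data file, so that no hand-copied values can drift out of sync.
import hashlib

import matplotlib.pyplot as plt
import pandas as pd

RAW_DATA = "data/structure_factor.csv"  # hypothetical deposited data file

# Record a checksum of the exact input used, so the figure is traceable
# back to one specific archived file.
with open(RAW_DATA, "rb") as f:
    checksum = hashlib.sha256(f.read()).hexdigest()
print(f"Figure built from {RAW_DATA} (sha256 {checksum[:12]}...)")

# Hypothetical columns: wavevector, intensity
df = pd.read_csv(RAW_DATA)

fig, ax = plt.subplots()
ax.plot(df["wavevector"], df["intensity"], marker="o")
ax.set_xlabel("wavevector")
ax.set_ylabel("intensity")
fig.savefig("structure_factor.pdf")  # regenerated on every run, never pasted
```

Because the figure is rebuilt from the deposited file on every run, any mismatch between data and figure becomes a reproducible bug rather than a silent transcription error.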

Stacy Horner, PhD, Assistant Professor of Molecular Genetics and Microbiology, School of Medicine


Most Prevalent QRPs in RNA Biology Research

“We study RNA virus-host interactions, RNA biology, cell biology, and viral host factors in innate immunity. We do that in cell culture, so we generally do wet-lab experiments. Some of the questionable research practices that we tend to see are:

  • Sloppy overall design of experiments
  • Improper validation of reagents
  • Poor data organization, which is the largest QRP that we see.

Data organization is very important. There are 52 weeks in a year, so if you are a graduate student who does two experiments per week for five years, that is more than 500 experiments. How are you going to keep track of how you did those experiments and where that data is? How are you going to access it later? There are a lot of inadvertent mistakes that everyone makes, and we try to prevent or catch them. Also, when you're sharing data between groups, it's really important to know what the provenance of the data is and how it was analyzed.

Why these practices occur has to do with poor training and little to no oversight. If you're in a large lab and there is no oversight, people aren't going to have their data organized, because nobody knows how to do it.

In my lab we like to have written standard practices; we all have data management plans. Having these written standard practices, about how the data should be organized, how we do experiments, and how we report the data, is really important. And it is really important to have a culture of communication in the lab and to reinforce good lab practices.

We try to promote a culture where hypotheses are tested and it's okay to be wrong; where we have not only one hypothesis but are open to alternative explanations. Even if we want a certain result, it's really important when we're designing experiments to think about all the possible results, what each could mean, and why. We have regular meetings to validate our experimental design, and we list our experiment numbers so that everybody knows where they can actually go to find the raw data later. We discuss controls: what is a good positive control and what is a good negative control?

I think it's really important that in the lab meeting everyone understands the experimental design. I've heard people say, “I'm just not smart enough to understand how this experiment is designed,” but it's really important that in our lab everybody knows, because that will help us avoid questionable research practices. In lab meetings I go over the raw data, especially for a new assay, and this will mean pulling up the Excel spreadsheet, looking at the numbers, and seeing what the trainee has done to the numbers to get to the next step, to really understand how that data is being analyzed. It is really important not to trust that famous Professor X who sent you this key reagent and assume that it is right, but to actually validate all those things in house before you do an experiment, so that you know what you're working with.

I think it is up to the PI to provide the framework for data and lab organization and to check in regularly that this is being maintained. We use electronic lab notebooks and have a file-naming structure and a table of contents where our experiments are listed; anytime we have a paper, I can actually go back and find the experiment number, then go into that folder and find all the raw data. Also, it is really important to have a clear checklist for when a trainee leaves the lab, so that we know where their data is and they don't just take it with them or delete it by accident.

Before publication, it is important that the PI examines all of the raw data and makes sure that it matches the final graph or figure. We can't prevent somebody who is determined to make up data from doing so, but what we can do is make sure that we don't publish something that's totally wrong.”
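The experiment numbering and table of contents Horner describes can be maintained with a script that walks the lab's data folders and rebuilds the index. A minimal Python sketch follows, with a hypothetical folder-naming convention standing in for her lab's actual system:

```python
# build_toc.py -- rebuild a table of contents for experiment folders so that
# any experiment number cited in a paper can be traced back to its raw data.
import csv
from pathlib import Path

DATA_ROOT = Path("lab_data")   # hypothetical root of the lab's data archive
TOC_FILE = DATA_ROOT / "toc.csv"

# Hypothetical folder-naming convention: <experiment-id>_<short-description>,
# e.g. "EXP-0412_reporter-assay-validation"
rows = []
for folder in sorted(p for p in DATA_ROOT.iterdir() if p.is_dir()):
    exp_id, _, description = folder.name.partition("_")
    n_files = sum(1 for f in folder.rglob("*") if f.is_file())
    rows.append({
        "experiment": exp_id,
        "description": description.replace("-", " "),
        "path": str(folder),
        "raw_files": n_files,
    })

# Write the index fresh each time, rather than editing it by hand.
with TOC_FILE.open("w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["experiment", "description", "path", "raw_files"])
    writer.writeheader()
    writer.writerows(rows)
print(f"Indexed {len(rows)} experiments in {TOC_FILE}")
```

Regenerating the index from the folders themselves keeps the table of contents from silently drifting out of step with what is actually on disk.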

David Stephen Pisetsky, MD, PhD, Professor of Medicine and Immunology, Duke Cancer Institute and Duke Human Vaccine Institute

“We Rely on the Overall Culture of Trust and Openness in Science Collaborations”


“Science is a system that relies on trust at all levels; it relies on openness, and it increasingly involves cooperation. We have very informal rules and practices that govern our conduct; we do not really have a legal structure that covers collaboration. I have many collaborations, and they are largely based on a handshake: we do not sign contracts the way people in industry do.

We rely on the overall culture of trust and openness to have these collaborations go forward.

We operate on the assumption that everybody is motivated to find the truth or make new findings. And we also rely on people to respect the rules, despite many potential hazards in the system.

Team science is increasingly important, and it relies on networks of people who often do not know each other and do not know what the other parties are actually doing.

The data is actually generated by machines, and the machines' intricacies are not well understood by the people who report the data.

So, most of us have a relatively superficial idea of how the machines work. It is similar to a car: we sort of know how a car runs, but we don't know in detail how the brake system or the fuel injection system works; nevertheless, we can use it.

Also, big data is derived through algorithms and statistical analyses, which are likewise not well understood by the people who report the data, though they may be understood by the people who analyze it.

Ultimately teams are needed to assemble, collect and analyze the data, but the functions are often compartmentalized.

Communication may be limited and it's also constrained by a lack of common ground for discussion: I don't understand the statistics, the statistician doesn't understand my question, but we do the best we can.

I would like to focus now on the impact factor and its influence on the culture of science. A journal is more impactful if its papers are cited; the impact factor thus becomes somewhat of a measure of the quality of that journal.

Unfortunately, however, it has become a driving force in the way people publish, because they are motivated to publish in high-impact journals. While the idea of the impact factor was really to help people evaluate the quality of science, it is now a way in which journals evaluate themselves. Journals embrace practices intended to increase their impact factor, but unfortunately what that often does is limit the number of publications and make it much more difficult to publish.”
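For background on Dr. Pisetsky's point: the standard two-year journal impact factor for year $y$ is computed as

\[
\mathrm{JIF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},
\]

where $C_y(t)$ is the number of citations received in year $y$ by items the journal published in year $t$, and $N_t$ is the number of citable items the journal published in year $t$. Because the citable-item counts sit in the denominator, publishing fewer (but more highly cited) papers directly raises the ratio, which is the incentive he describes.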

Kevin Weinfurt, PhD, Professor and Vice Chair for Research, Department of Population Health Sciences, School of Medicine

On the Questionable Use of the Term “Validated Measure”


“I've been in this field for about 20 years, and I've increasingly been hearing the term “validated measure”: “We used a validated measure of patient-provider communication,” for example. I'm speaking specifically about the use of this term in the context of health research. There is good reason why there came to be a focus on whether something appears to be validated or not. If we look at the history of these types of measures, in the old days you would get a couple of folks coming back from a clinical conference with an idea to get a measure of something, and they would sit down and write something up. Then they would use that measure in their research. As a result, around the ’90s we had a tremendous number of measures of poor or uncertain quality being used in research.

One pedantic reason for not using the term “validated measure” (or for using it cautiously) is that it's technically incorrect. Within the context of measurement, validity is defined as the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests.

Take the six-minute walk test, for example: if you say it's a validated test, someone might be using it to measure how far you can walk in six minutes, or mobility, aerobic capacity, endurance, or physical functioning.

I've seen studies where the same test was used to try to address all of these different concepts, so when I hear that it's validated, I'm not sure exactly what it's validated for.

The second point, which may be a provocative statement, is that anytime someone says “validated measure,” all we really know is that at least one paper or conference abstract was published about the measure. These days, there are over 1,000 health measures that have a paper published about them, and the amount and quality of validation evidence in those papers varies wildly.

The third point is that dividing measures into validated and not validated sometimes leads us to assume that a measure is working without carefully assessing it: What are the limitations of this measure? How might those limitations impact the conclusions of the study? Is there a better alternative?”

Yousuf Zafar, MD, Associate Professor of Medicine, Public Policy, and Population Health Sciences, School of Medicine

How Clinical Benefit Is Often Overstated in Clinical Trials


Watch Dr. Zafar's talk from the February 19 town hall, “Overstating research results in clinical research.”