In the (just barely) current issue of Spektrum der Wissenschaft (the German edition of Scientific American) there is an article by Michael Springer on self-control (quality assurance) in science, which describes very clearly the problems involved in ensuring that what appears in scientific publications is actually correct. Were the underlying data collected correctly, or was the work done imprecisely, or did important data perhaps have to be omitted because they did not fit the preconceived opinion? Were the correct conclusions drawn from the data, or would other interpretations have been possible? Can the experiment be repeated and checked by an independent group (reproducibility)? These are typical questions that have to be dealt with when evaluating a scientific study. Because a scientist's own colleagues – the very people with whom they compete for research funds – are best placed to judge this, the peer review process was introduced, which Michael Springer describes well in the text below (Michael Springer, Self-control with small errors).

But Mai Thi Nguyen-Kim has also dealt with the topic in a video. A few core statements (translated from the video, which is in German):

  • There is science, and there are scientists. And the problem with scientists: they are people. I have nothing against people. Some of my best friends are people. But people can never be 100% objective.
  • People have this funny stuff called emotions. That is why there are precautions in science, … so that science is really objective.
  • Methods are what make a study better or worse.
  • Even as a layperson, you should always try to understand scientific methods as well as you can in order to put scientific findings into context. Scientific results alone do not say much as long as you do not know the scientific method behind them.
  • Science is only legitimate when it is properly published. A publication must first pass peer review. Peer review is not a perfect process.
  • Tough but objective discourse is conducted much more openly in science than elsewhere.
  • It is a good thing that individual authorities cannot simply assert themselves; instead, the overall body of data must be right.
  • Why should we simply trust experts in the media? … None of the points I have mentioned (the scientific mechanisms of quality control) has anything to do with expertise. … Expertise is not a criterion for trustworthiness.
  • Scientists can also have political ideologies that cloud their objectivity. (She cites various examples of scientists who, because of ideologies, formed opinions that would not stand up to scientific scrutiny).
  • A few experts with confused opinions (a Nobel Prize winner in chemistry(!) who denies AIDS) can do a lot of damage, because the media love anything that is out of the ordinary. … Sucharit Bhakdi, for example, is currently selling a great many books on the strength of his expert status, full of scientifically untenable false claims. (His university and his wife's have more or less clearly distanced themselves from the two.)
  • I cannot understand why, within science, every inaccuracy, no matter how small, is questioned five times and everything has to be documented watertight, but as soon as professors leave this bubble and go public, there is no longer any control or verification, because there freedom of opinion applies. (We have written an article about this: I do not need facts, because I have an opinion.)
  • We journalists must get better at giving sensible voices more attention.
  • Trust is good, control is better.
  • Who can guarantee us that an expert will not sell his or her individual opinion as recognised expertise?
  • That is why it is so important that the critical quality management that is practised within the scientific community also applies to science communication aimed at the general public. Science is not based on trust, but on questioning, control and review. That is what makes science so reliable.
  • As long as the same standards of objectivity and reliability do not apply in science communication as in science itself, more scientists in the media will not bring more enlightenment but more confusion.

Michael Springer, Self-control with small errors

The peer review process is considered the gold standard of scientific publishing. Who within the scientific community would be better placed than a researcher's closest colleagues (peers) to rule out methodological errors, sloppy handling of data or even deliberate deception? Who could referee an article more reliably, that is, evaluate it expertly?
Surprisingly, it is not so long ago that scientific papers saw the light of day without any prior internal review. It was not until the 1970s that the participation of experts in the publication process became commonplace. But when we hear today that “a new study has shown …”, we can confidently assume that the work was subjected to peer review.
How does that work? Typically, the editors of the journal in question send the draft article to two or three reviewers, who are expected to assess it within a given time, often in several rounds of feedback with the authors. The reviewers do this anonymously and on a voluntary basis, i.e. unpaid. However, some journals are gradually beginning to thank reviewers at least by name, provided they agree – because their workload increases with the growing flood of publications, and editors are finding it increasingly difficult to find suitable and willing experts (Nature Astronomy 4, p. 633, 2020).
Suppose the reviewers have benevolently waved the article through with a few changes; other researchers may nevertheless soon find more than one hair in the published soup. For example, they discover suspiciously “beautiful” data – with statistically too little scatter – identical images used in different contexts, or passages plagiarised from other articles. This is not only highly embarrassing for the original reviewers; the article must then also be formally withdrawn. Such a “retraction” seems to say: please forget completely that we ever published this.
The reviewers are less to blame if other teams later prove unable to replicate the published results despite repeated attempts. This often leads to months or even years of arguing, with one team accusing the other of having applied the methods improperly. If attempts at replication stubbornly fail, the doubters prevail and a retraction ultimately becomes inevitable.
With the rapidly growing flood of scientific articles, the frequency of retractions also increases – so it is worthwhile to evaluate retractions systematically, rather than bashfully consigning them to oblivion, in order to learn from the mistakes. Above all, each case should be accompanied by information about the reasons, says Quan-Hoang Vuong of Phenikaa University in Hanoi.
The Vietnamese sociologist of science has examined more than 2000 retractions to see whether they provided information about their history.
In half of the cases, no mention was made of who had withdrawn the article – all the authors? A co-author? The editorial staff? Ten percent gave no reason at all (Nature 582, p. 149, 2020).
Vuong suggests that four pieces of information should accompany every retraction: Who initiated it? What was the reason (methodological error, plagiarism, deliberate fraud)? Did editors and authors agree on it? Did the lack of reproducibility only become apparent afterwards?
Because, as Vuong rightly says, retractions are not bad in themselves. They correct human error and strengthen the scientific process. After all, we grow wise through our mistakes.

