Arguing about sloppy science

Can’t confirm this. Not in our scientific area. Typically, if you publish something where the statistical process isn’t public and can’t be reproduced, I doubt it would pass peer review. Actually, I have never seen a publication without that. Even the data itself must be published (e.g. in NCBI GEO or ArrayExpress). It’s not always the code that needs to be public; publishing the code of every ANOVA would be a bit tedious, and it isn’t necessary. But the process itself must be reproducible, and it does get reproduced a lot by people.
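To make “the process must be reproducible” concrete: a methods sentence like “one-way ANOVA across three treatment groups” is enough for anyone with the published data to redo the computation in a dozen lines. A rough sketch in Julia (the numbers are made up by me, and I assume Distributions.jl for the F distribution):

```julia
# One-way ANOVA "by hand": reproducible from the methods description alone.
using Statistics, Distributions

# Made-up measurements for three treatment groups.
groups = [[5.1, 4.9, 5.3, 5.0],
          [5.8, 6.1, 5.9, 6.2],
          [4.7, 4.8, 5.0, 4.6]]

k = length(groups)                 # number of groups
N = sum(length, groups)            # total sample size
grand = mean(vcat(groups...))      # grand mean

# Between-group and within-group sums of squares.
ssb = sum(length(g) * (mean(g) - grand)^2 for g in groups)
ssw = sum(sum((x - mean(g))^2 for x in g) for g in groups)

F = (ssb / (k - 1)) / (ssw / (N - k))    # F statistic
p = ccdf(FDist(k - 1, N - k), F)         # p-value from the F distribution
println("F = $F, p = $p")
```

Whoever downloads the deposited data can run exactly this and compare.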

Your general statement needs to specify which science you are talking about, or it’s just a wrong statement intended to spread FUD, which IS a thing nowadays.

1 Like

Any field that involves code will be prone to errors, since the code is usually not published in a public repo where it could be checked.

People reproduce results only after the fact, when the paper is already published. And if some calculations are wrong, nowadays they will just fade away in the sea of “publish, publish, publish”, so no harm at all.

I know of instances where people did not check things, wrote in their papers that they did, and nobody can verify it. Reproducing results takes a huge effort that nobody is putting in. Of course, if it’s a very important topic, people will do it, but one can easily publish rubbish these days because there is no way to peer review the process of getting to the result.

Scientists do the peer review themselves, for free, which means less time spent checking everything thoroughly. It’s even worse when the tools are not public.

I can recall an instance where a simple calculation was coded up wrongly in a popular community code. Numerous papers were produced with this code.

My point is that today’s peer review is unable to catch these errors (don’t confuse this with people reproducing things later).
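To illustrate how small such an error can be, here is a purely hypothetical sketch in Julia (the formula and the names are mine, not from the code I have in mind): a one-character sign slip that no referee reading the resulting paper could ever see, but that a single hand-checked regression test exposes immediately.

```julia
using Test

# Gravitational redshift factor sqrt(1 - rs/r) for Schwarzschild radius rs;
# the hypothetical "community code" version flips the sign inside the root.
redshift_correct(rs, r) = sqrt(1 - rs / r)
redshift_buggy(rs, r)   = sqrt(1 + rs / r)   # one character wrong

# A regression test against a hand-checked value (rs = 1, r = 4) catches it.
@test redshift_correct(1.0, 4.0) ≈ sqrt(0.75)
@test_broken redshift_buggy(1.0, 4.0) ≈ sqrt(0.75)   # documents the bug
```

A referee only ever sees the plots made with the buggy version.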

3 Likes

How does the academic community address these situations? Will these erroneous papers be retracted? Or will this be seen as an inadvertent detour in the scientific research process of human history and be forgiven? If these erroneous papers involve additional benefits, such as academic degrees, professional titles, profits, etc., how will they be handled?

What you describe, in these general terms and without being specific at all, doesn’t sound like talking about science and real research. It sounds as if you are talking about some blogs or YouTube.
Please tell us which science you are talking about, and show us some published papers you know of.
It’s just not true: not for life science, not for physics, not for math, not for health, not for chemistry, not for the geosciences.
I am not saying that there aren’t problems or black sheep in any of these and other fields. But saying it is commonplace, as you are implying, is just wrong. It’s the opposite: it is very rare!

No, it is not. Only in bad, low-reputation papers, and even those are far from rubbish and generally have quite low impact.

Perhaps you are talking about just putting something out in public, not about publishing a research paper. The topic is about open source in research!

No, they don’t. They typically do it in their own field of research, with a purpose: keeping quality high in their field and, in the end, raising more money for it, because high-quality work and good research results attract more research funds. Just because they aren’t paid directly doesn’t mean it’s free or without benefit for themselves.

That sounds as if this were the reason for the problems. If so, tell us a better way of publishing research results. Of course it is like that, but you say

but it was never anything else. Yes, people make errors, and errors get published. It was always like that. Researchers learn from errors too. Sometimes the errors make a paper worthless, and then it is retracted; but sometimes the errors are of some worth too, so the paper stays public but is corrected. All this is normal and fine as it is.

I have published a paper with, not an error, but some information missing; the missing detail wasn’t caught by peer review, and the paper is still public. People who know the paper and are building on it know about this, because they contacted the authors (I am not the first author), we could provide the missing key, and they were able to reproduce the outcome. Without it you can’t reproduce it:
Grönniger E., Weber B., Heil O., Peters N., Stäb F., Wenck H., Korn B., Winnefeld M., Lyko F. (2010) Aging and chronic sun exposure cause distinct epigenetic changes in human skin. PLoS Genet, 6:e1000971
Of course, this is bad and unlucky, but errors happen, and research is no different.

@WuSiren, perhaps this answers your questions; feel free to ask more.

Another example for you, regarding open source. For my first research topic I used SNNS (SNNS - Wikipedia). I had some problems and found out they were caused by a bug in the simulator. I fixed the bug and emailed the fix to the developers; it was 1995, so there was no GitHub or similar back then. An example where open source was improved by the research using it. Why are Python’s PyTorch and other packages today’s backbone of all the ChatGPT-style AIs? Because they are improved and enhanced through their use in neural network research by researchers.

As a researcher you need tools, but you don’t blindly rely on the tools you use. The opposite is true: you doubt every tool until you are sure it’s good for your purpose!

Last word: I tried to give specific evidence for my claims. It may not be perfect, but I also need to keep it somewhat short here (already a wall of text). Others stick to general accusations and don’t provide specific evidence. We are talking about research, and this is the way research is done: verifiable, reproducible, publicly available, specific (not vague), and more.

There is nothing wrong with closed software (people need an income to live) and nothing wrong with open source. Good research doesn’t rely on any tool without cross-checking it.

This “science nowadays…” bullshit is just wrong.

Edit: I had cited the wrong paper; it is now the paper with the missing information.

3 Likes

The devs said thank you when we pointed out the error. Nothing was done about the papers that used it, not even an announcement in their newer release; it was just listed as a bug fix. Nobody will ever know anyway… so…

2 Likes

1st. When people talk about a system, we generalize to an abstract idea, such as peer review or science. It’s not “unspecific”. Would you tell an epistemologist that he/she is being “unspecific”?

2nd.
I reported some of my experience in physics, more specifically astrophysics/general relativity. Is this specific enough for your taste?

No, I don’t think so. But I withdraw, because I couldn’t check in any case whether anything is wrong with particular papers :wink: How many people out there could peer review this in depth? Probably not a hundred. Reproducing those calculations/simulations is probably nearly impossible by nature. My guess is that this research topic is more prone to calculation errors than many others. Still, I don’t believe this field has more problems in general than others; in any case, it’s not very dangerous to calculate the amount of dark matter wrongly.

1 Like

Admirable. I benefited a lot.

Deviating further from what is already off topic, sorry.

Yes, they mostly do it for free. The influence of doing peer review on financial reward is too remote and indirect to count. Specifically, for the few peer reviews I’ve done, I can say for sure they couldn’t have had any influence on my salary, even indirectly. And yes, doing peer review properly is quite time-consuming.

In the experimental sciences, peer review can at best discover inconsistencies in the interpretation of results. For a paper reporting some run-of-the-mill result, it is pretty safe to falsify measurements to save time and effort: in the best case you guessed correctly, and in the worst case somebody will report a different result and cite you, so +1 to your citation statistics. Not every scientist I have known was a person of high integrity, so I am pretty sure the dark figure of doctored results is not insubstantial. But then, it’s just your feeling against my feeling.

2 Likes

Nobody said this.

We are part of a scientific system, and one part of it is peer review. Our publications are peer reviewed by somebody, which improves quality, and we do the same for others. It’s part of the job; therefore it’s not for free. It directly influences the quality of research and, with that, the amount of grant money flowing into the field, not someone’s salary. Of course, other parameters matter more for grants than peer review. Anyway, it’s part of a professional researcher’s life, part of what we get paid for, so not free.

It’s not free in the way some of my contributions to Julia packages are: those are not part of my professional work, so they are for free, except for the fun I got out of them.

Peer review is not about filtering the false from the true. That’s not possible. It’s about quality: is everything worked out to the standards of the field? It can still be wrong, but that is often for the future to decide.

These are typically found not because some peer reviewer has deep enough insight to decide whether it’s fraud. No, anybody can find them; e.g., a typical mistake those forgers make is reusing pictures from older publications in a totally different context. Actually, I think this is the dumbest thing you can do as a researcher. It will surely come to the surface if the research has any importance. And if the work is of no importance, then why fake it? The sheer number of publications is one of the less important things a researcher can produce. Those people ruin their lives for nothing.

1 Like

I can only say that the dark figure by definition covers only those cases which are “typically” never found.

I could tell one concrete story I got from an insider (and I knew all the participants personally), but that’s off topic here, and I expect you would just dismiss it as you dismissed svretina.

1 Like

I dismissed nobody. I argue against some implications of these general statements, implications which are, in my opinion, just wrong. And I try to do this in a concrete, explicit way.

And it’s not off topic: “Why is it reliable to use open source packages for research?”

I say it’s not. And it’s not reliable to use closed-source software either. It’s the same for all tools: don’t trust them, validate them. This improves both the software and your research. People in the lab don’t just rely on their pipettes; no, they calibrate and validate them regularly, because proper concentrations are important. Is this something new and surprising? No, it’s just standard.
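In software, the analogue of calibrating a pipette is cheap. A minimal sketch in Julia (QuadGK.jl is just an example package I am picking here, not something mentioned above): before trusting a tool on your real data, check it against cases where the answer is known exactly.

```julia
using Test
using QuadGK        # numerical integration package, used as the example tool
using Statistics

# "Calibration" 1: the integral of sin from 0 to pi is exactly 2.
I, err = quadgk(sin, 0, pi)
@test I ≈ 2 atol = 1e-10

# "Calibration" 2: the mean of a constant sample must be that constant.
@test mean(fill(3.5, 100)) == 3.5
```

If a tool fails such a check, you have learned something important before it could damage your results.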

It’s not the tool’s fault; it’s the researcher’s fault for blindly relying on it.

Does anybody believe that Python and Python packages don’t have errors? They are much more widely used, so are there many more wrong publications out there? I don’t know, but it is not a general problem of science nowadays; it was always the researcher’s problem if he/she blindly relied on the tools.

By the way, this is the outstanding strength of Julia, and it is part of what is meant when people say Julia solves the two-language problem. Julia is easy to verify, even by reading the code itself. Try that with an R package and you will praise Julia again.
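For example, straight from the REPL (a quick sketch; InteractiveUtils ships with Julia and is loaded there automatically):

```julia
using InteractiveUtils   # only needed outside the REPL

@which sum([1, 2, 3])    # which method is dispatched, with file and line
@less sum([1, 2, 3])     # read the Julia source of exactly that method
methods(sum)             # list all methods of `sum` for further inspection
```

The code you inspect is the code that runs; there is no second, hidden C or Fortran layer to audit.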

3 Likes

Apparently we have different ideas of the meaning of the word “dismissing”. OK, English is not my first language.

1 Like

Yes, using strong words was wrong. Sorry for that, and apologies if it dismissed @svretina and you.

You really believe that? My experience (as a person in the lab at respectable institutions): some devices get regular calibration, most do not. You just don’t have the budget and time to buy only calibrated devices and keep them re-calibrated regularly. It can also be quite laborious if a device is built into an experimental setup.

BTW, the pipettes the people around me used (and I did sometimes, too) were never re-calibrated, to my knowledge.

That can be different in strongly regulated fields in some countries.

1 Like

Ah, I just remembered: in a climate institute in Germany (which I will not name, obviously), they changed their measurement data so that it fit their model. That could not possibly be checked by peer review. The same technique can be applied to simulations.

I know that. I am talking about our lab.

Really, this is getting somewhat weird now:

What are you two trying to say? I would suggest:

  1. State your claim (and perhaps try to stay on topic).
  2. Give some proof, or something similar that at least substantiates it.

The topic is NOT the bad state of science in Germany. For now, this is my last answer to you, until you come back to arguing reasonably. I am not sorry for dismissing the two of you in this answer.

1 Like

Obviously scientific misconduct exists, at various levels, but stating that it is the norm is false. Peer review is one of the validation steps that a scientific discovery has to pass before being accepted, and certainly not the most important one. Important fraudulent claims are frequently debunked quite quickly, often leading to the end of the career of the scientist involved. People who falsify irrelevant results are harder to catch.

In any case, the scientific process, starting with publication and peer review, is still one of the most transparent ways to expose and validate results. Closed sources (software, data, or otherwise) are much harder to validate in any sense.

2 Likes

I’d guess somewhat sloppy science (at various levels of sloppiness) is so common it can be called “typical”; I wouldn’t call it “the norm”, as I don’t think it is the desirable state.

My uneducated guess is that blatant misconduct like data fabrication affects something like 0.1…1% of publications in journals listed in the Science Citation Index. Is that a lot? Is that low?

Some of the problems are age-old, some are newer: increasing competition for money, “publish or perish”, and, nowadays, political pressures.

Sorry, the problem was again my deficient English. Obviously you used the definite article, meaning just this one specific lab, not a typical scientific lab, as I wrongly assumed.