What you describe, in deliberately general terms and without being specific at all, doesn't sound like science and real research. It sounds as if you are talking about blogs or YouTube.
Please tell us which science you are talking about, and point to some published papers you have in mind.
It's just not true: not for the life sciences, not for physics, not for math, not for the health sciences, not for chemistry, not for the geosciences.
I am not saying there aren't problems or black sheep in these and other fields. But claiming it is the norm, as you are implying, is just wrong. It's the opposite: it is very rare!
No, it is not. Only in bad, low-reputation papers, and even those are far from rubbish and generally have quite low impact.
Perhaps you are talking about just putting something out in public, not about publishing a research paper. The topic here is open source in research!
No, they don't. They typically do it in their own field of research, and with a purpose: keeping quality high in their field and, in the end, attracting more money for it, because high-quality work and good results bring in more research funds. Just because they aren't paid for it doesn't mean it is done for free and without a benefit to themselves.
That sounds as if you think this is the source of the problems. If so, tell us a better way of publishing research results. Of course it is like that, but you say
but it was never anything else. Yes, people make errors, and errors get published. It has always been like that. Researchers learn from errors too. Sometimes the errors make a paper worthless, and then it is retracted; sometimes the errors are worth something themselves, so the paper stays public but gets corrected. All of this is normal and fine as it is.
I have published a paper with, not an error, but some information missing. Peer review did not catch the missing detail, and the paper is still public. People who know the paper and are building on it are aware of it, because they contacted the authors (I am not the first author); we could provide the missing key, and they were able to reproduce the outcome. Without it you can't reproduce it:
Grönniger E., Weber B., Heil O., Peters N., Stäb F., Wenck H., Korn B., Winnefeld M., Lyko F. (2010) Aging and chronic sun exposure cause distinct epigenetic changes in human skin. PLoS Genet, **6**:e1000971
link to online source
Of course this is bad and unlucky, but errors happen, and research is no different.
@WuSiren, perhaps this answers your questions; feel free to ask more.
Another example for you, regarding open source. For my first research topic I used SNNS, the Stuttgart Neural Network Simulator (SNNS - Wikipedia). I ran into some problems, traced them to a bug in the simulator, fixed the bug, and emailed the fix to the developers. That was 1995, so there was no GitHub or anything similar back then. An example of open source being improved by the research that uses it. Why are Python's PyTorch and other packages today's backbone of all the ChatGPT-style AIs? Because they are improved and enhanced through their use in neural network research by researchers.
As a researcher you need tools, but you don't blindly rely on the tools you use. The opposite is true: you doubt every tool until you are sure it is right for your purpose!
Last word: I have tried to give specific evidence for my claims. It may not be perfect, but I also need to keep this somewhat short (it is already a wall of text). Others stick to general accusations and don't provide specific evidence. We are talking about research, and this is how research is done: verifiable, reproducible, publicly available, specific (not vague), and more.
There is nothing wrong with closed software (people need an income to live on) and nothing wrong with open source. Good research doesn't rely on a tool without cross-checking it.
This “science nowadays…” bullshit is just wrong.
Edit: I had cited the wrong paper; it is now the paper with the missing information.