First, let me clarify what I mean by "right" and "wrong". The lowest bar here is that the paper presents data whose general features would be reproducible if the experiment were repeated. A lot of papers fail here because they are underpowered, or have sloppy or inadequately described methods. The next bar is that the results, at the level of analysis and conclusion, would likely be observed again and generalize to similar systems, rather than being a statistical fluke or a feature highly specific to one system/method. "Exploratory" research done by people who poorly understand statistics and test many variables with no real hypothesis frequently fails here, and is IMO the most common failure mode. Best of all, a paper can deliver a new theory that predictively compresses many new, previous, and future observations into a more concise conceptual model which has (here it gets philosophical) some sort of robust connection or correspondence with reality. This outcome is extremely rare outside of detailed mechanistic studies. Everything on this spectrum is probabilistic.

My takeaways are as follows. First, the difficulty of being "right" (in my sense) and the likelihood of being "wrong" are proportional to how broad the question under study is. You might notice that for complex genomics studies, there is not even a serious effort to pretend that any real insight was delivered beyond the impressively big dataset itself. PIs like to insulate themselves from even the possibility of being wrong by creating vague and nonfalsifiable hypotheses and falling back on "I'm just presenting the data" as an excuse. The insight behind falsifiability, of course, is that if you cannot be wrong, you cannot be right either, which is a dichotomous way of putting a concept that has a probabilistic analog.

The best steelman for why science would fail to penalize wrong (irreproducible, nongeneralizable) results is that it wants people to present data rain or shine. I think the problem is that people who just have data and no serious novel interpretation or application for it go ahead and spin a yarn in terms of current dogma to make their work seem more important and plausible. This in turn reinforces the current dogma/paradigm, making everyone more confused and rewarding whatever genius senior PI came up with the current fad -- err, vague, nonpredictive, ad-hoc, phlogiston-like pseudotheory. And the result is a lot of people speaking, thinking, and working in circles, producing ever more data with ever less meaning. Clinicians and the public end up even more deluded too, because they have no idea how these shenanigans work. People outside a field who might want to use its insights have no idea what's BS.

I think the field outside of science with the most similar set of incentives and pathologies is journalism.