
These sorts of articles raise so many thoughts and emotions in me. I was trained as a computational biologist with a little lab work and ran gels from time to time. Personally, I hated gels- they're finicky, messy, ugly, and don't really tell you very much. But molecular biology as a field runs on gels- it's the primary source of results for almost everything in molbio. I have seen more talks and papers than I can count that rested entirely on a single image of a gel, which is really just some dark bands.

At the same time, I was a failed scientist: my gels weren't as interesting or convincing as the ones done by the folks who went on to be more successful. At the time (20+ years ago) it didn't occur to me that anybody would intentionally modify images of gels to promote the results they claimed, although I did assume that folks didn't do a good job of organizing their data, and occasionally published papers that were wrong simply because they confused two images.

Would I have been more successful if fewer people (and I now believe this is a common occurrence) published fraudulent images of gels? Maybe, maybe not. But the more important thing is that everybody just went along with this. I participated in many journal clubs where folks would just flip to Figure 3, assume the gel was what the authors claimed, and proceed to agree with (or disagree with) the results and conclusions uncritically. Whereas I would spend a lot of time trying to understand what experiment was actually run, and what the data showed.



Similar - when I was younger, I would never have suspected that a scientist was committing fraud.

As I've gotten older, I understand that Charlie Munger's observation "Show me the incentive and I will show you the outcome" is applicable everywhere - including science.

Academic scientists' careers are driven by publishing, citations and impact. Arguably some have figured out how to game the system to advance their careers. Science be damned.


I think my favorite Simpsons gag is the episode where Lisa enlists a scientist (voiced by Stephen Jay Gould) to run tests to debunk some angel bones that were found at a construction site.

In the middle of the episode, the scientist bicycles up to report, dramatically, that the tests "were inconclusive".

In the end, it's revealed that the bones were a fraud concocted by some mall developers to promote their new mall.

After this is revealed, Lisa asks the scientist about the tests. He shrugs:

"I'm not going to lie to you, Lisa. I never ran the tests."

It's funny on a few levels but what I find most amusing is that his incentive is left a mystery.


Well, the incentive is that he didn't want to run the tests out of laziness (i.e. he lacked an incentive to run them). He ran to Lisa to give his anticlimactic report not to be deceptive, but rather he just happened to be cycling through that part of town and just needed to use the bathroom really badly.


The writers of these episodes were really on another level considering it was a cartoon.

Lisa's first word is still a personal favourite of mine, especially now as a father.


To be honest, it's difficult to tell if the subplot makes sense on purpose, or if the writers just wanted to make a joke and it just happened to end up making sense. I don't think I had ever put the three scenes together before now.


One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity. To get to the final script and storyboard, the writers would have gone through multiple drafts, and a great deal of material gets either cut, or retooled to reinforce thematic elements. To the extent that The Simpsons was a goofy cartoon, its writers’ room carried a great deal of intellectual and academic heft, and I don’t doubt for a moment that there was full intention with both the joke itself, and the choice to leave the character’s motivations ambiguous.


> One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity.

Perhaps they should have taught you to be less sure of that. So many takes in movies that ended up being the best one are where a punch accidentally did land, something is ad-libbed, a dialogue is mixed up, etc.

To take an example of a very critically acclaimed show: in Breaking Bad the only reason we got Jonathan Banks in the role of Mike is because Bob Odenkirk had a scheduling conflict, and Banks improvised a slap during his audition. Paul Aaron even complained about it indicating that he would not have agreed to it.


There clearly is a lot of serendipity in writing and production, but that's not what the point was about. The point is how much agonizing and second guessing it takes, how many alternatives are explored, how many takes are shot, etc. before something, anything makes it into the final product.

The lucky break is first a result of a lot of planning and work - and it gets analyzed to death before being included - and then probably reinforced here or there elsewhere. (So that for me, I do notice when I hear movie or TV dialog that sounds completely natural and said exactly right. It's exceptional.)


This is a cartoon though, significantly less adlibbing, everything has already been storyboarded and scripted out etc.

Pixar's approach to making their movies is a fascinating, highly iterative process, going through many storyboards and internal showings using simplistic graphics before proceeding to the final stage to produce a polished product. I wonder how The Simpsons does it.


> One of the first things I learned in film school is _nothing_ in a production at that level is coincidence or serendipity. To get to the final script and storyboard, the writers would have gone through multiple drafts, and a great deal of material gets either cut, or retooled to reinforce thematic elements. To the extent that The Simpsons was a goofy cartoon, its writers’ room carried a great deal of intellectual and academic heft, and I don’t doubt for a moment that there was full intention with both the joke itself, and the choice to leave the character’s motivations ambiguous.

Not everything. For example, I read somewhere that the chess "fight" in Twin Peaks was random and didn't adhere to chess rules, because no one really paid attention to recording or following the moves.


Yes, TV shows especially: they are under a lot of pressure to ship on time, so stuff isn't always thought out fully.


Goofy cartoon but I always thought it was very cleverly done in parts. The laugh followed by "fuck life is actually like that" aftertaste.


The entire writing room was Harvard grads and people who went on to accomplish impressive things in the industry (eg Conan O’Brien was a writer, David X Cohen was a writer and then went on to cocreate Futurama with Groening). The early writing team was one of the sharpest ever assembled and dismissing it as a “goofy cartoon” is missing the talent behind it just like if you dismissed Futurama in that way.


What's her first word?



Apparently it was "Bart". I had to look it up because I was curious as well.


I guess GP is referring to the episode, rather than the actual word . . .


More incentive to watch the 20min episode if you ever get the opportunity haha


I thought his incentive was to defend the idea of miracles/faith/angels/God.


More often than not in scientific fraud, I've seen the underlying motives be personal beliefs rather than financial ones. This is why science needs to be much stronger in weeding out the charlatans.


[citation needed]

---

I conjecture the most common underlying motive is to embellish a CV and climb the academic ladder.


It's actually quite clever on the part of the scientist.

The incentive would be money, maybe the pay for doing this test was not good enough.

Or maybe the scientist was motivated by a thirst for discovering something good for humanity, like a cure for cancer, and didn't want to get distracted by other things. Funding is also needed, but angel bones are clearly an impossibility. Why even spend time on disproving that? If he had engaged in discussion with people who clearly believe in this nonsense, it would have taken too much time. Saying "the tests are inconclusive" lets him stay distanced from all this and allows people to leave him alone, mostly so that the groups will continue their disputes among themselves.


That's a good one. In my experience, corruption is almost always disguised as neglect and incompetence. Corrupt people meticulously cover their tracks by coming up with excuses to show neglect; some of them only accept bribes that they can explain away as neglect where they have plausible deniability. It doesn't take much brainpower to do well, just malicious intent and knowing the upper limits.

IMO, Hanlon's razor "Never attribute to malice that which can be adequately explained by stupidity" is a narrative which was created to condition the masses into accepting being conned repeatedly.

On the topic, I subscribe to Grey's law "Any sufficiently advanced incompetence is indistinguishable from malice" so I see idiots as malicious. In the very best case, idiots in positions of power are malicious for accepting the position and thus preventing someone more competent from getting it. It really doesn't matter what their intent is. Deep down, stupid people know that they're stupid but they let their emotions get in the way, same emotions which prevent them from getting smarter.


Barry Appelman, for a long time the boss of all the Unix engineers, said malice was preferable to incompetence because malice would take breaks.


However malice is directed. When it doesn’t take breaks it does a lot more damage usually.

One can argue malice can be controlled with incentives at some level, though.


So can "stupidity". If something is possible for a human to do, it's something that's possible for any sufficiently-enabled/supported human to do. I've heard it put that the inability to understand or do something is a matter of not having acquired the necessary prerequisites. So, the incentives to control stupidity are the incentives to acquire and apply the prerequisite skills or knowledge.


Yes, and in addition malice is often predictable, while incompetence is just a quantum void where the probabilities are inverted and your hard-earned intuition doesn't help you...


I don't seem to be able to edit this anymore, but there is a grievous gap in the writing: "Barry Appelman, for a long time the boss of all the Unix engineers at AOL."


Hmm, sure, but if you want to spot malice, look for the one not taking breaks.


I wouldn't attribute malice to Hanlon's razor, but yes, even dogs and small children know how to play dumb and the children just keep getting better at it.


True story: CEOs, cops,and politicians (and their appointees) are good at it as well.


Ehh... I think neglect and incompetence are super common. I have a sink full of dishes downstairs to prove it. I think corruption, while not rare, is still far rarer. Horses over zebras still (at least in the US).


‘Sufficiently advanced’ is the key term, e.g. if your sink were located on the premises of a 5-star hotel, then that would probably be indistinguishable from malice.


> On the topic, I subscribe to Grey's law "Any sufficiently advanced incompetence is indistinguishable from malice" so I see idiots as malicious. In the very best case, idiots in positions of power are malicious for accepting the position and thus preventing someone more competent from getting it. It really doesn't matter what their intent is. Deep down, stupid people know that they're stupid but they let their emotions get in the way, same emotions which prevent them from getting smarter.

I think you have things backwards. Being dumb is the default. It takes ability and effort and help to get smarter. Animals and children are dumber than us. Do you think they realize it?

Perversely many who are dumb are trapped thinking they are not dumb:

https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effec...

A dumb person (like a dumb child or animal) is what they are; one should not attribute malice. Better to try to see things from their point of view and perhaps help them be smarter. This is what I try to do.

Your other remarks are 100%; just the point above was sticking out, hence my comment.


Yes this resonates.

I feel that stupidity is evil in the same way as that a shark might be perceived as evil. You could explain it away as "It's not their fault, it's in their nature, they don't know better" but if it's in their nature to cause people harm, if anything, it makes the label more applicable from my perspective.


While that may be a kind view, practically it is rarely a useful one. At least for the person holding it.

Especially when power, violence, money, or sex are involved.


Dunning Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities.

That is to say some of the incompetent are so incompetent they can’t distinguish between their incompetence and an actual expert. This is exhibited very publicly in some contestants of the American Idol genre of shows.

https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effec...


D&K ironically misengineered their tests and inadvertently misconstrued their data due to floor and ceiling effects. If you ran their tests against random noise, you would get similar results.

https://www.mcgill.ca/oss/article/critical-thinking/dunning-...

I posit that anyone who uses DK unironically is actually committing to the DK-paradox, something I'll leave you to define for yourself.


Reminds me of this quote:

> "The most erroneous stories are those we think we know best -and therefore never scrutinize or question."

-Stephen Jay Gould


“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

– Mark Twain


Which he never said, making the not-quote doubly accurate.


"Don't believe everything you read on the Internet."

- Abraham Lincoln

Maybe relevant: https://quoteinvestigator.com/2018/11/18/know-trouble/


I think he didn't want to run tests or present results that might be contrary to the mob's dogma, for fear of retribution.


Or it was merely a useful excuse for the narrative about flawed humans and the anti-science vs. science arc.


If humanity is to mature, we should be an open book when it comes to incentives and build a world purposefully with all incentives aligned to the outcomes we collectively agree upon.

https://fs.blog/great-talks/psychology-human-misjudgment/

Charlie Munger's Misjudgment #1: Incentive-caused Bias https://www.youtube.com/watch?v=h-2yIO8cnvw

https://fs.blog/bias-incentives-reinforcement/


(I keep mentioning this but no one seems to be picking up on it.) There is an algorithm that was developed in the late 80's in the context of therapy that could be used to align incentives and collectively agree on outcomes.

The algorithm is a simple recursive procedure where the guide or therapist evokes the client's motivation or incentive for an initial behaviour and then for each motivation in turn until a (so-called) "core state" is reached. In crude pseudo-code it would be something like:

    FOO = <some "presenting problem">
    M = evoke_motivation(FOO)
    until is_core_state(M):
        M = evoke_motivation(M)

Generalizing, motivations form a DAG that bottoms out in a handful of deep and profound spiritual "states". These states seem to be universal. Walking the DAGs of two or more people simultaneously until both are in core states effectively aligns incentives automatically, at least that's what I suspect would happen.
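To make the pseudo-code above concrete, here is a minimal Python sketch of the same recursion. Everything here is hypothetical: the motivation table is stubbed out as a dict, whereas in a real session `evoke_motivation` would stand for asking the client "and what do you hope to get from that?".

```python
# Hypothetical stub of a motivation DAG; real answers come from a person.
MOTIVATIONS = {
    "procrastinating": "avoid failure",
    "avoid failure": "feel safe",
    "feel safe": "peace",  # "peace" plays the role of a core state here
}
CORE_STATES = {"peace"}

def evoke_motivation(behavior):
    """Stand-in for eliciting the motivation behind a behavior or motivation."""
    return MOTIVATIONS[behavior]

def walk_to_core(presenting_problem):
    """Follow motivations until a core state is reached; return the chain."""
    chain = [presenting_problem]
    m = evoke_motivation(presenting_problem)
    while m not in CORE_STATES:
        chain.append(m)
        m = evoke_motivation(m)
    chain.append(m)
    return chain
```

With the stub table above, `walk_to_core("procrastinating")` yields the chain `["procrastinating", "avoid failure", "feel safe", "peace"]`, i.e. each motivation is evoked in turn until the walk bottoms out in a core state.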

(I have no affiliation with these folks: Core Transformation Process https://www.coretransformation.org/ )


NLP and related stuff is not taken all that seriously among mainstream scientists, you should probably say.


That's definitely true, and there's lots of craziness around it. However, the best estimates for therapy and its effects suggest that it's mostly a provider effect rather than anything in the theory.

Which is to say, a lot of this stuff works because you expect it to.


In re: this "NLP is pseudoscience" business, I've lost patience with it. First, I'm living proof of NLP's efficacy. Second, I don't go around suggesting homeopathy or astrology or pyramid power, okay? Like Ron Swanson "my recommendation is essentially a guarantee."

In terms of a Venn diagram the region representing people who have experience with NLP and the region representing people who think NLP is pseudoscience are disjoint, they do not overlap. As in I have never found anyone who claims that NLP is pseudoscience who has also admitted to having any experience with it. That is not science, eh? To the extent that mainstream scientists don't take NLP seriously they make themselves ridiculous. So yeah, in this one instance, ignore the scientists and look at the strange thing anyway, please? Humor me?

Now NLP is not scientific (yet) and it doesn't pretend to be (although many promoters do talk that way, and that's wrong and they shouldn't do that) and in fact there's a video online (I'll link to if I find it) where the co-founder addresses this point and says "it's not scientific".

However it does work. So it seems imperative to do science to it!?

At the time it was developed there were dozens of schools of psychology on the one hand[1] and academic psychologists on the other and the two groups did not talk to each other. NLP ran afoul of the academic psychologists in the mid 1980's and they closed ranks against it and haven't bothered themselves with it since. Again, I think it would be fantastic if we would do science to it and figure out what these algorithms are actually doing.

In any event the important thing is that the tools and techniques that have been developed are rigorous and repeatable. E.g. this "Core Transformation Process" works. That's primary data on which the science of psychology should operate not ignore.

[1] E.g. https://en.wikipedia.org/wiki/Esalen_Institute | https://en.wikipedia.org/wiki/Human_Potential_Movement | https://en.wikipedia.org/wiki/Humanistic_psychology


I don't think that's really going to work. People won't list all their incentives, because some of them are implicit and others are embarrassing or "creepy". Others will absolutely judge you for what incentivizes your actions, therefore hiding them is the status quo.

If you say that your incentive for working out is to look good and be popular with the ladies then people will judge you for it, even if it's exactly the truth. If you say that you work out "for health" everyone will applaud you for what you're doing. And yet the outcome is going to be the same.


I could be wrong, but I took the parent's comment to mean that we should design incentive structures transparently, instead of obscuring them or outright ignoring the whole concept when engineering society.


You got it backwards. It's not about being transparent about what you want to achieve, it's about being transparent about what others expect you to achieve on your current position.


To check my understanding: say your current position is "unemployed." You would think that the expectation for you is to "get a job", but to get a job is extremely difficult. You have to navigate an almost adversarial job market and recruiting process, often for months. It's essentially a massive negative incentive, considering all of the effort and grief involved. So, the incentives aren't aligned with the desired outcome; the skittishness of each individual hiring company to make sure that they don't get screwed by a bad hire has warped the entire dynamic. Is this a good example?


At this point I'm so used to this that I mentally translate "for health" into "to look good for the opposite sex".


It’s funny how those incentives are so well aligned.


That's reductive, some of us are doing it to look good for the same sex!


>outcomes we collectively agree upon.

lol, what are the chances?

The average Joe is interested in Dem vs Rep or the latest show on Netflix.

The average researcher is worried about his livelihood, tenure etc.


We do achieve some things, though usually not by spending time pondering questions like,

> lol, what are the chances?


This is a weird take, assuming the average researcher cannot be an average Joe, and also that average people aren't also worried about their livelihood...you might want to revisit your view of the world.


No, it's not. A researcher might be an average Joe, but that doesn't mean that the average researcher is the same as the average Joe.


The initial comment makes the 2 mutually exclusive. You reframing it doesn't change what the original comment said. You also blew past the more important of the 2 points: that regular people care about their livelihoods as well.


“Average joe”. Humanity evolves slowly over long periods. The average joe today is far more educated than one from 200 years ago.


Far more educated. But certainly not smarter.

And it's not at all clear if that education does anything other than magnify their intellectual predispositions. The smart people can make great strides, but the stupid will be stupid louder and harder. And the average may well just be... more average.


IQ levels have risen on average because of nutrition, reduced lead exposure, and education.

An increase in blood lead from 10 to 20 micrograms/dl was associated with a decrease of 2.6 IQ points.


IQ score rises (Flynn effect) are most likely spurious and do not reflect any increase in actual on-the-spot problem-solving ability.

IQ scores were never intended to be used to compare across cohorts. To do so is invalid.


…and to still believe that IQ levels measure something meaningful is pretty average Joe anyway. Could be the rise of videogames for all we know.


Risen by a few points. Statistically significant but not really meaningful.


> IQ levels have risen on average because of...

...nobody really knows why


Average Joe does not care about politics…


If one only cares about Democrats vs Republicans (or left vs "far right" in Europe), one also doesn't REALLY care about politics.

Caring = caring to understand how the system works and how the incentives work for participants in it.

If I'm super generous, I would guess maybe 0.1 percent of the population cares by that definition.


I think you're a little too harsh in that judgement. I'd say it's at least 1%. Possibly even 5%.

It still means that they are massively, massively outweighed by loud tribalism.


There's a problem that if you care, as an average person, it's hard to do much with it. Every few years you can vote left or right, which unless you happen to live in a marginal constituency or swing state, has no effect.


>Every few years you can vote left or right,

If you're talking about the US, you can vote center-right (Democratic) or far right (Republican). There is no viable left wing party in the US.


From whose perspective and what are we considering right and left? The Democratic party is left of center on social issues, even compared to Europe.


>The Democratic party is left of center on social issues, even compared to Europe.

Actually, the Democratic Party is mostly libertarian (or classically liberal, if you like, which is, inherently right wing) on social issues -- preferring to allow people to make their own choices WRT their bodies rather than seeking government control of reproductive health and other forms of bodily autonomy.

Individual rights and personal agency are not "left wing," except in the eyes of the authoritarian far right (or far left) who seek control over all else.

So no. The Democratic Party has a solidly center-right agenda/ideology -- no collectivism, individual rights not curtailed by the state, freedom of thought and religion, etc.

Despite what some folks may say, there are no Marxists in the US Democratic Party.

That's not to say that the Democratic Party is the ideal. Far from it. But to place them on the absolute "left" is ridiculous on its face.

It's only "left wing" as compared with the far right (read: evangelical christians, white nationalists, xenophobes, etc.) Republican Party who want to limit women's reproductive choices, force the religious doctrines of the Christian church down everyone's throats and spout xenophobic and long debunked genetic tropes related to melanin content.


These posts are tiresome. They all boil down to "my view should be the middle".

You could just as well claim the Democrats are far left and the republicans center left.


The political "spectrum" is not a range of subjective opinions, it's a range of objectively documented ideas.

I don't know how well they can fit in a unidimensional scale though.


This is often done by contrasting the US with Europe, as if Europe is a political gold standard.


>These posts are tiresome. They all boil down to "my view should be the middle".

I don't claim that my views are, or should be, "the middle".

In fact, I didn't share my views at all.

Rather, I contrasted the US Republican Party with the US Democratic Party through the lens of the political spectrum.

Perhaps you think your views are "middle-of-the-road" and maybe they are. I have no idea what you think or believe.

But making the claim you did added absolutely nothing to the discussion, nor did it address anything I wrote. And more's the pity.


This can't be said often enough. We have two right wing parties in the US. That's it.


If humanity is to mature, we must be critical and take responsibility for ourselves, particularly where the alignment of others is concerned. Such as starting by disagreeing with everything, and validating it for oneself.


Sounds like regulation to me ;)

I totally agree with you


What about something bottom-up, transparent in rules and reward structure, and able to iterate quickly, instead of something top-down?

Also agree with the GP.


> with all incentives aligned to the outcomes we collectively agree upon

Some things are simply not possible.

> If humanity is to mature

Recognizing and working within the constraints is maturity


> Arguably some have figured how to game the system to advance their careers

lol arguably? i would bet my generous, non-academia, industry salary for the next 10 years, that there's not a single academic with a citation count over say ... 50k (ostensibly the most successful academic) that isn't gaming the system.

- signed, someone who got their PhD at a "prestigious" uni under a guy with >100k citations


Terence Tao has well over 50K citations. Maybe one can argue that he’s gaming the system because he alone can decide what problems are deemed to be interesting by the broader community, but he can’t help that.


In this case, it might matter that mathematics isn't, strictly speaking, a science.


Citation counts are not very useful. They vary too much between fields and topics. I've seen people finish PhDs with ~20k citations, because they got involved in highly visible projects. And even if they stopped publishing new papers, they would have reached 50k citations in another 5-10 years purely on inertia.


There clearly are single examples here and you would lose this bet.


Username checks out.


I would take that bet. I postdoc'd with a very accomplished researcher (he won multiple Nobel Prize "precursor" awards). He certainly understood the social aspect of science - you need to be able to convince people your theories are correct and important - but I wouldn't say he "gamed the system" in any significant way.


> Academic scientists' careers are driven by publishing, citations and impact. Arguably some have figured how to game the system to advance their careers. Science be damned.

I’ve talked to multiple professors about this and I think it’s not because they don’t care about science. They just care more about their career. And I don’t blame them. It’s a slippery slope, and once you notice other people starting to beat you, it’s very hard to stay on the righteous path[note]. Heck, even I in my PhD have written things I don’t agree with. But at some point you have to pick your battles. You cannot fight every point.

In the end I also don’t think they care that much about science. Political parties often push certain ideas more or less depending on their beliefs. And scientist know this since they will often write their own ideas such that it sounds like it solves a problem for the government. If you think about it, it’s kind of a miracle that sometimes something good is produced from all this mess. There is some beauty to that.

[note] I’m not talking about blatant fraud here but about the smaller things like accepting comments from a reviewer which you know are incorrect, or using a methodology that is the status quo but you know is highly problematic.


> They just care more about their career.

It's not even that.

In most fields, you need lab space and equipment and grad students to get stuff done. And to pay for that, you need funding. And to get funding, you need to publish and apply for grants.

You have to pay attention to the "career" side of things -- otherwise, you won't get to do the science at all.


That’s exactly the rationale the corrupt senator uses to justify his actions in Mr. Smith Goes to Washington.


Used in the movie Wall Street to justify insider trading too:

Bud: Lou, I got a sure thing. Anacott Steel.

Lou: No such thing, except death and taxes. Not a good company any more. No fundamentals. What's goin' on, Bud? You know something? Remember, there are no short cuts, son. Quick-buck artists come and go with every bull market. The steady players make it through the bear markets. You're a part of something here, Bud. The money you make for people creates science and research jobs. Don't sell that out.

Bud: You're right, but you gotta get to the big time first, then you can do good things.

Lou: You can't get a little bit pregnant, son.

Bud: Lou, trust me. lt's a winner. Buy it.


And yet it rings true. Sure, if everyone was a Mr. Smith it would be fine. But with poor incentive structures, people are collectively going to do what they have to do.

The same applies to police who aren't actively corrupt, but don't combat the corruption around them. The ones who do fight corruption end up fired 'with cause' at best, or forcibly committed to mental institution or killed at worst.


I have also talked to many researchers and professors about this, but to me it just seems to come down to survivorship bias. We all observe a lot of scientists who care a lot about their careers because the ones who don't focus as much on advancing their careers are no longer around. I've commented to many people at my own place of work about how I dislike the up-or-out nature of current research practices, because it incentives people to play it safe and focus on incremental ideas that get results by the end of the current funding cycle. But as other people are saying here, we need to change the incentives to reduce these kinds of behaviors. Form follows finances.


Citizen science may be a direct response to these circumstances. When you think about it, it's basically just people who love science but _don't_ depend on it for income.


The Manhattan project was a government project that was run like a startup.

If such a project happened today, academic scientists would be trying to figure out ways to bend their existing research to match the grants. Then it would take another 30 years before people started to ask why nothing has been delivered yet.


Run like a startup in what respect?

It was a massive government-directed military project in wartime that was able to recruit all the top theoretical physicists at the time around a common aim in an urgent technological arms race to build the bomb. It included a vast effort of army engineers to build the facilities to process the fuel and so on. I'm not seeing the parallels with startups.


I just want to echo this!

They built a gosh darned city... oh wait, I was wrong, they built three. It was run like an extremely high-value military project, which is exactly what it was. Sure it was more theoretical than other military projects (at the time), but that is the game sometimes.

I get the sense that some folks just think "faster than we would do it now" is the same as startup. Which, to put it politely, I strongly disagree with. Startups are great and I am grateful for the daily value adds to my life, but pretending everything "fast" has startup mentality is just missing the mark.


In this context I'm not sure what "startup" is supposed to mean; but companies are fully capable of building up new cities if it makes economic sense. See https://en.wikipedia.org/wiki/Company_town. Disney was famously looking to run his own nuclear reactor so it is easy to imagine a private company doing fundamental nuclear research even if it isn't part of the nuclear industry.


Even your description sounds like a startup, to me.

There was a hook to get the funding (easy to get weapons funding in wartime). Recruiting the top talent. Urgency (beat everybody else to the punch). Outsourcing the building of infrastructure while you focus on the unique/hard part.

I'm not seeing how you can't see the parallels with startups.


In that case any high-priority military intelligence project is "like a startup". Why say that it's run like a startup as opposed to just saying it was run like a high-priority military-intelligence project?

The GP suggested that a reason for the success of the Manhattan project was that it was run like a startup, whereas it seems more illuminating to point out that it was a massively funded military project in wartime. I was curious if there was some more specific rationale for the startup comparison.


In a startup—especially one that is heavily funded, like a government—roadblocks that can be resolved by eliminating paperwork, cutting through bureaucracy, or simply by paying money, tend to disappear.

No serious person can deny this is plausible today. Sam Altman will drop 10B in the blink of an eye if it means unblocking a major problem for OpenAI.

The government spends 10x that all of the time without anything impressive to speak of since the moon landing.


I'm baffled, I must be misreading your comment. Are you saying the government is like a startup, or just that heavily funded startups are like the government because the government is also heavily funded?

It's also really odd to me that you claim Sam Altman can do more impressive things with 10 billion than some government. I mean, have you seen a public transit system? A sewer system? A dam? Sam Altman could hand code a true AGI tomorrow and people would still need to flush their poop.

It's also worth noting that without the government there is no economic system that allows for startup investments, nor does the USD really have any value.


What about the overwhelming majority of startups, which are poorly funded ?


> Even your description sounds like a startup ... There was a hook to get the funding (easy to get weapons funding in wartime)

Huh? The decision was made by President Roosevelt, advised by government agencies and even Albert Einstein.

This is like a top-down command economy with elements of technocracy. There are no investors, no markets, etc.


Nuclear physics had just "cracked open" and there were lots of highly promising prospects to pursue. You can't recreate that historical situation by switching from agile to scrum, or from scrum to agile.


Sounds like Eric Weinstein's take (which I appreciate) on theoretical physics research.


I also like Eric Weinstein, but the non-physics aspects of his perspective are essentially the philosophy of Curtis Yarvin distilled and presented in a way that's more digestible to classic liberals.


> The Manhattan project was a government project that was run like a startup.

Run directly by government employees, directly hiring the necessary people to work for the government. Today we would call that socialist or some such, the opposite of a startup?

> If such a project happened today, academic scientists would be trying to figure out ways to bend their existing research to match the grants

You missed an important step: such a project today would start with the government handing out grants, stimulus, and tax breaks. If they directly hired the necessary people, there would be no fighting for grants. People do what incentives demand.


Lots of people doing research find this depressing to the point of quitting. Many of my peers left research as they couldn't stomach all this nonsense. In experimental fields, the current academic system rewards dishonesty so much that ugly things have become really common.

In my relatively short career, I have been asked to manipulate results several times. I refused, but this took an immense toll, especially on two occasions. Some people working with me wanted to support me fighting dishonesty. But guess what, they all had families and careers and were ultimately not willing to do anything as this could jeopardize their position.

I've also witnessed first-hand how people that manage to publish well adopt monopolistic strategies, sabotaging interesting grant proposals from other groups or stalling their article submissions while they copy them. This is a problem that seldom gets discussed. The current review system favors monocultures and winner-takes-all scenarios.

For these reasons, I think industrial labs will be doing much better. Incentives there are not that perverse.


> I've also witnessed first-hand how people that manage to publish well adopt monopolistic strategies, sabotaging interesting grant proposals from other groups or stalling their article submissions while they copy them. This is a problem that seldomly gets discussed.

Agree. Everyone has heard about the extreme fraud cases, but the casual toxicity that the pressure cooker environment elicits is rarely discussed and probably a far larger problem. I say that as someone who has spent too much time in this environment and never witnessed outright fraud - that I know of.


So do you believe the author of the OP could honestly omit from their article the things you (and many others here) talk about? Could they be naive?


Nah, I think the article has good intentions. And rampant fraud is important to address.

But as others said, addressing smaller shenanigans is also crucial to steer things in the right direction.


The fact that it can still be considered science when intentional fraud is involved is a huge problem itself.


Science, at its core, is supposed to be a method of systematically seeking truth through rigorous experimentation, observation, and reasoning. When intentional dishonesty enters the equation, it undermines the entire purpose of the scientific endeavor.


Science, at its core, is figuring out how to make better TVs and phones with longer battery life. Truth-seeking should be done during your lunch break. If you have one.


The scientific method itself is a response to fraud and deception (intentional, delusional, accidental).


>Charlie Munger's observation

The more I've read about finances the more I've realized it can also be applied to many other things in the world due to its sheer objectivity.

On the other hand, I've also noticed most if not all of it is based on contexts and data from the mid 20th century. Interesting how that turns out.


I'd say in general - concepts from one domain sometimes can be applied/novel in other domain


> Academic scientists' careers are driven by publishing, citations and impact.

Publishing and citations can and are gamed, but is impact also gamed on a wide scale? That one seems harder to fake. Either a result is true and useful, or it's not.


Actual real world impact? Hard to game. But, nobody measures that. Everything that's tracked is a circularly defined success metric (you're successful if other academics consider you successful).


The article mentions that there are two drugs that resulted from this research. One of them failed a trial recently. Nobody knows if this fraud means that the drug never could have worked, or if this was just bad luck. So yes: people do measure real-world impact. It's just that it takes a very long time and there are plenty of confounding factors, since even non-fraudulent drug research can fail.


That's true for the rare case that something turns into a professionally run and controlled trial (incidentally there's evidence for many published clinical trials also being fake). But very little research output is ever tested that way. Most doesn't even claim it could have real world impact in principle.


What you're observing is that most research results don't have much impact. The ones that get replicated are definitionally the ones that matter the most, since they lead to controlled trials. In principle this is a pretty reasonable way to allocate limited resources.


Convince (lobby) a politician that your research will save the trees/whales/global warming/end starvation/any other fear inducing thing and get funding to bribe (lobby) more politicians to further your “research” until anyone would be a fool to question the science.

I feel like this same story happens every year and people are surprised. I often wonder how many “academic” or “scientific” scandals (quotes so as not to conflate them with valid research and study) need to happen before it becomes as distrusted as politicians.


Academic scientists aren't lobbyists, and politicians aren't giving us money. The funding institutions set research goals. Those funding institutions (staffed by people who actually understand science) don't conduct the research--they choose the winning research plans to fund to achieve their stated goals (1 in 12 applicants, last time I tried). These bureaucrats write budget justifications for Congress--just as the military, Social Security Administration, VA, etc, do. And frankly, compared to those entities, our $40-something billion of the $6 trillion budget is pathetic.

https://static.nationalpriorities.org/images/charts/2021-cha...

https://www.nationalpriorities.org/budget-basics/federal-bud...


Oh, yes. Repackaging and reframing of data - as mentioned in the OP article too - is a common practice for farming impact and article counts.

Why do novel research when you can just partner up with your friends, bring your data and combine it with theirs for the umpteenth paper on the same thing, then spam the submissions of every academic journal with impact in your field! If it was published once in this journal, surely the 3rd recombination will too (and more often than not... it does).


Impact here actually means citations. They clumsily say citations twice. Underlying the field of bibliometrics is the idea that citations correlate with the impact a paper has on scientific discourse. It's intentionally vague because it's hard to say what even highly cited papers objectively achieve.

https://en.wikipedia.org/wiki/Citation_impact


Oh you sweet summer child. Let me introduce you to the world of ML reviewer/citation cartels!


How much do these same incentives apply to climate science, where huge amounts of money are now in play?


That attitude coincides with the current delusion in our society that science is perpetrating a fraud at the level of religions whose leaders are trying to control their flock for financial and sexual gain.

A broken system that incentivizes fraud over knowledge is a real problem.

An assertion that scientists chase the money by nature is a dangerous one that will set us back to the stone age when instead we should be traversing the space as a whole.


> Similar - when I was younger, I would never have suspected that a scientist was committing fraud.

Unfortunately many less bright people seem to interpret this as "never trust science", when in reality science is still the best way to push humanity forward and alleviate human suffering, _despite_ all the fraud and misaligned incentives that may influence it.


In defence of the "less bright people" or deplorables as others have called them - they are deeply suspicious of Science(tm) used as a cudgel.

They intuit that some parasitic entity or entities have latched on to Science and are co-opting it for their own gain, to achieve their own purposes, which run counter to the interests of the people.

The heavy handed Covid response and censorship is a prime example of that.

The whole system has been corrupted and therefore it is not possible to have a de-facto assumption of good faith of the actors.


I think this comment comes across as slightly ignorant.

Many examples exist where a misguided belief in scientific 'facts' (usually a ropey hypothesis, with seemingly 'damning' evidence), or a straight up abuse of the scientific method, causes direct harm.

Suspicion is often based on facts or experience.

People have been infected with diseases without their knowledge.

People have been forced to undergo surgical procedures on the basis of spurious claims.

People have been burnt alive in buildings judged to be safe.

And look at Boeing.

No one has a problem with science itself per se. Everyone accepts the scientific method to be one of our greatest cultural achievements.

But whether one is "less bright", or super smart, we all know we as humans, are prone to mistakes, and are just as prone to bend the truth, to cover up those mistakes.

There's nothing plebeian about this form of suspicion. In fact, the scientific method relies on it (peer review).


> No one has a problem with science itself per se. Everyone accepts the scientific method to be one of our greatest cultural achievements

This is just wrong and naive. You can be happy if a majority of people agree to this.


As written, possibly. Taken literally, it's full of holes.

But if you're not a pedant, I essentially mean that most parents will vaccinate their children, many passengers will book flights, and a majority of the citizens in a population do respect their officials (etcetera).

And I think if you were to dig deeper than this, and test that hypothesis with... well... a scientific experiment of some kind, the result would probably support it.

But a good number of people will naturally question the outcome!


You're not wrong, but people who oppose "science as a cudgel" tend to support "religion as a cudgel", and don't see a difference between science and religion, except that one is the Yellow team and one is the Purple team, and they have a preferred color.


When the public is told to “trust the science”, it is no longer science and is now religion.


This happened all day long during Covid. The answer was never “we don’t know yet” which was at least honest but instead it was always “just trust us”. Exactly like i use to hear from preachers growing up.


That's the difference between public health and basic research. To stop an epidemic in flight one has to go with one's best guess, and that saves lives, so they do it.


We weren't told it was the best guess to save lives. We were told it was the science, and this science was very fast to adapt to alarmist narratives while being incredibly slow (sometimes taking years) to adapt to reality.

And that's before you take into account things like BLM rallies being encouraged during COVID while less politically correct gatherings were banned or decried as "super spreader events." [1]

[1] https://www.politico.com/news/magazine/2020/06/04/public-hea...


Public health also studies the most effective communication strategies for sharing public health info even in the face of uncertainty. I bet they will be more explicit about uncertainty next time based on the last time, but that is why they didn't say it. The field started out with a person breaking the handle of an infected well's pump, not even using communications but taking physical action to stop the spread.

As far as the speed of revisions, my memory is the initial advice to wash your hands so much faded within months, as well as the wash your groceries advice. It took the US some time to notice how effective masks were in Japan, and push for masking. And now good air filtration for public spaces is gaining momentum, even as the deaths from COVID declined.

And if you aren't a bit alarmed anytime a novel virus hits an immunologically naïve population, well, you should be.


> I bet they will be more explicit about uncertainty next time based on the last time

This is highly euphemistic. We were explicitly lied to for our own good, over and over again. Including about masks being ineffective early on (to save them for healthcare providers), about ventilators, about the effectiveness of the vaccine, about the vulnerability of normal populations as compared to older ones, the plausible origins of the virus itself, etc. It was over and over again. And again--that's without the injection of left-wing politics as noted in my previous comment.

> As far as the speed of revisions

We had kids out of schools for years. An entire generation is now significantly behind comparable cohorts, and the only reason we even stopped is because the opposition eventually made it untenable. We forced many small businesses to fail and gave out interest-free loans to many others that never need to be paid back. People were forced to get vaccinated or lose their jobs.

I feel like our response to COVID-19 (at least in the US) is like a litmus test for how well one accepts authoritarianism. If one lived through that as an adult, I don't know how one would trust what the medical establishment says next time around, except to trust that it's what they want the unwashed masses to think.


I’d just like to add that the epidemic wasn’t stopped at all. Everyone still caught COVID.

Were outcomes worse than they otherwise would have been? That’s an impossible question to answer. Are there serious studies on the impact of public health interventions?


I saw one study of relative death rates that concluded if all US states had followed California standards, about 800,000 fewer people would have died. And remember "flatten the curve"? The goal isn't to stop everyone getting the virus but to slow it down enough that health care capacity is sufficient. A quick google finds this study and a few more recommended: https://www.nature.com/articles/s41598-023-31709-2 Even the way the virus isn't killing as many vulnerable people as we get it the Nth time after M vaccinations is a success of the lower-the-curve strategy.


Even if you still believe the fiction of the vaccine saving lives, it was the life of an obese 80-year-old with 3 other comorbidities... for a few months.

The lockdowns resulted in inflation, and caused a lot more poverty and deaths worldwide since. And retarded the normal development of many children.

And we still don't know what kind of long term damage the vaccine caused. My friend that trusted the science, and took every booster, only stopped when she got myocarditis.


It's like the paid clergy of old, except we now call them scientists. Sometimes they reveal truths. Most of the time, most of them are kind of useless. It's self-serving bullshit, served by an ever-growing bureaucracy.


‘Heavy handed Covid response’ - lulz. What other emerging pandemic has been so lightly handled?

We used to forcibly quarantine people in their homes at gunpoint for measles. Smallpox? Even crazier.

Hell, we didn’t even shut down international travel until well after it was plainly obvious it had spread well past the point it would matter.


I attended the opening talk of a local science festival.

The speaker was a psychology researcher from the UK who flew here for that, and the talk was about conspiracy theories. When they introduced her they stated that she wouldn't accept any questions from the audience.

This was received with boos and shouts that it was not real science.

She then proceeded to bundle all the conspiracy theories together. Going from "the government is doing something bad" to "earth is flat".

After that talk I can really believe that the bullshit conspiracy theories are made up and spread artificially so that anyone that comes up with any conspiracy theory can be shushed as a crazy person.

But… in reality conspiracies do exist. One can make a theory and then test if it's true (or get killed/imprisoned by the government while trying).


This actually may be true although somewhat indirectly with respect to at least one well-known conspiracy belief.

If I recall correctly, the Flat Earth Society was originally conceived as a prank intended to lampoon actual conspiratorial groups (i.e. NASA faked the moon landing, JFK was a mob hit, etc.)

But some combination of timing, convincing execution, and media interest coincidentally developing in the same direction resulted in the supreme irony of an unserious sham cult spawning an unironic counterpart community which rapidly outgrew and ultimately succeeded it.


That's the same shape as the OBEY clothing line, incidentally.


I always struggled with ‘conspiracy theory’ being used as an attack.

Yes, that’s exactly what they are, upfront: allegations of a conspiracy. And some of them are correct.


How are "heavy handed covid response and censorship" a prime example of that?


It's not that it was heavy handed. But it was completely nonsensical.

For example here restaurants were open, but they had to close at 19:00. So instead of spreading the clientele over more hours, they were always 100% full.

Also, they CUT ⅔ of public transport rides, so they were incredibly overcrowded. People with real jobs that can't be done from home still had to go to work. BUT they put stickers on the floor telling people to keep their distance. They also hired people to stand at crowded stops to spray hand sanitizer on whoever wanted it and tell people to keep their distance (while watching them have to push their way in).

In general all the restrictions were about the "having fun" stuff, but not about the "go to work" stuff. Even companies had no obligation to let people who could work from home stay at home. Some companies kept having their offices full.

Oh and let's not forget the recommendations of staying home if you so much as sneezed. But you wouldn't get paid. How did they expect people to pay their rent?

I could go on for hours with this. The bullshit measures that were marketed as "what the scientists are telling us to do" did a lot of harm to the trust that the general population puts into science.


A decent chunk of the pandemic response was politicians power tripping in the name of The Science and later having to roll things back, either because of public backlash (eg hotlines to encourage snitching on their neighbors), because it was actually illegal (requiring all large businesses to have their employees vaccinated or tested weekly), or because of politics (initially telling the public that masks were ineffective, then tripling down on mask mandates, Harris saying the vaccine could not be trusted based on Trump talking about its efficacy).

There was also dumb stuff like social media suppressing mention of covid, even to this day youtubers use euphemisms to refer to that period.

To me it seems perfectly understandable how people who aren't actually involved in science might mix up The Science and actual science after all the political nastiness of those years, especially when we add on top all of the awful pop science reporting from the past decades.


Preventing people from working if they didn't get a covid vaccine was a bit heavy handed.

And saying it was likely made in a lab in China is kind of censored to this day. I think partly because the science community doesn't want to take flack for doing risky stuff and killing millions.


> Preventing people from working if they didn't get a covid vaccine was a bit heavy handed.

Nobody did that. They prevented you from working with me.

Nobody here or there was forced to get a vaccine. But if you refused it was right to shun you

Freedom is about more than the individual. We as a group should be free from the consequences of individual actions


Wrong they did that to me. Not a small company either. Yeah you can play games with “we won’t fire you, we’ll just stop paying you and won’t allow you to work”.

It’s like they watched Office Space and thought they’d “Milton” everyone.

> Freedom is about more than the individual.

The individual is what it starts with.


The Biden administration had OSHA make rules to force employers to make their employees get the vax. The Supreme Court stopped them.


My employer of 750k people made it very clear, upload proof of vaccination or be fired. That’s a fact.


That was GP's point. Being fired != being prevented from working. You are still free to get a job at a different employer that has different policies.


Exactly. That kind of Machiavellian games is what they played. See, we’re not firing you. We just won’t pay you.


You're splitting hairs. If people didn't get vaccinated during covid they became social pariahs. People were literally calling for their deaths.


My point as a Brit was it seemed a bit heavy handed in the US. In the UK we didn't really have that and still got most people vaccinated.

It would have made more sense if the vaccines actually stopped you catching and transmitting it, but they don't really; they mostly just seem to reduce the harm when you do catch it. In terms of not spreading it to others you are better off isolating than relying on the vaccines - I've had that as a practical issue with my late-80s mum who I visit. Although I've had 4 jabs I've still caught it twice since, and have avoided giving it to her by testing if I feel ill and staying away. Which is kind of to say some of the politicians' views on it were heavy handed and a bit iffy scientifically.


Workers are free to work or free to starve.

-- Karl Marx


This is misinformation. My employer mandated it for all employees, and my entire IT organization 100% WFH, coming in wasn't even an option.


From a philosophical perspective, I don't see how the vaccine mandates for public jobs is appreciably different than vaccine requirements for public school that already exist.

As far as the China lab goes, there were plenty of scientific papers that studied the China leak theory, though I personally don't know what they found.


The difference between the vaccines you're talking about lies in their development time: the ones in school had been around (and tested) for decades before becoming mandatory.


The COVID vaccines have now been around for 4 years, and there is no evidence they are appreciably more dangerous than those other vaccines that took longer to develop.


>They intuit

They don't need to intuit it; they're outright told not to trust the science, by the parasites who are criticised by other scientists. Like the asbestos industry.


I try to distinguish between "the scientific process" and building scientific consensus. As rigorous as the scientific process may be, building consensus is always a messy and human thing.


Agree. The fact that we are seeing this kind of discourse within the scientific community is in my opinion a great argument for the scientific method.


Why? Some guy writing an op-ed saying how frustrating it is that science is full of fraud is a great argument for the scientific method? There have been people writing articles like this for over 20 years if not much longer about all kinds of fields. Nothing ever happens, nothing ever improves, it never goes beyond people saying "tut tut how terrible". This sort of thing is entirely predictable and will keep happening, over and over again. On the current course, there will be articles just like this one being discussed in another twenty years from now.


I don't think Derek Lowe is frustrated that "science is full of fraud", this is likely editorialization on your part. It seems that it stems specifically from Masliah, who is common across all papers in the dossier. Granted, Masliah appears to be prolific, so this is admittedly a large issue in the peer review and verification structure in this field.

To put this into context though:

Let's begin by supposing that fraud exists in all ventures where people stand to gain, which I don't think is controversial at all, especially not in this comment section.

In light of this assumption, the fact that this all came out in the first place is proof that being a luminary does not make you immune from investigation. That this happens 'over and over again' simply means that eventually we catch this fraud. The fact that the scientific community is constantly trying to reproduce and verify is why these become public in the first place.

So on the contrary, it's not that nothing ever happens or nothing ever improves. There will be articles like this one in twenty years because there will still be fraudsters in twenty years, and there will still be scientists working to verify their work.


I don't think it's true that eventually we are catching this fraud :( This keeps happening because so much is out there, it doesn't follow that all or even most of it is being caught. Even a tiny fraction of a fraction of a percent being caught would yield a constant stream of such stories. I have a collection of articles on my blog dating back years that cover various fraudulent papers in different fields, and even whole fields in which the bulk of all papers are based on fraud (e.g. the literature dealing with misinformation bots on Twitter). None of them have ever been retracted or even had any of the problems be acknowledged outside of the blogosphere.

It's really hard to understand the scale of the problem until you wade through it yourself. Fraud is absolutely endemic in science. Dig in and you can easily uncover bogus papers, and none of them will ever be acknowledged or retracted. In particular there's a nasty attitude problem in which reports of fraud from outside the academic institutions will frequently be written off as "right wing" and thus inherently illegitimate. This can happen regardless of the nature of the criticism or whether it's in any way political. Literally, things like bug reports or reports of numbers that don't add up can be discarded this way. Thus they implement an unwritten rule that only academics are allowed to report fraud by academics, and of course, they are strongly incentivized not to do so. So Lowe is correct. It's really a mess.


Took a quick look at your blog. Some of those examples are quite bad, similar to the fake gels in the Science article. Do you know if any of them gained particular attention?


There was a paper written by a couple of Germans on the Botometer stuff. The first version of the paper cited me; they spent a year or two trying to get a version published and it was eventually cut down, got into an obscure social science journal, and was ignored. Nothing ever came of it really.

The stuff on PCR test false positives went somewhat viral and got some attention from outside of academia but of course it was during COVID so it was ignored by the institutions.

The stuff on epidemiology and the history of Neil Ferguson was triggered originally by an article in the Telegraph. It went no further than that.

The fake lesion surgery got noticed on Twitter and I think it was eventually retracted but the perps still work for the NHS.

The paper mills and fake biology papers gets published about occasionally in mainstream press. But nothing happens.

So... no. Not really.


How's this for the scientific method: the response of "don't trust science" is the inductive step of a model that is validated by the fucking evidence.


not sure what you're getting at


I'm not sure the two uses of "science" in your post are using the same meanings of the word.

Science is the name given to a few different processes, a body of knowledge and statements by designated spokespersons. Each of these have different flaws and failure modes in different environments and domains.


Appositive comma, or missing Harvard comma?


The problem is simplifying into the same word "science" both the scientific method (with a pretty good track record, though it does have some blind spots), and groups of people claiming to engage in it (the tendency of which is to become corrupt, the faster the more powerful they are).


Also science: There are more new cases of cancer in the United States now than there have ever been before.


> There are more new cases of cancer in the United States now than there have ever been before.

Sigh

That is because of antibiotics, essentially.

The mechanism by which antibiotics cause cancer is that they stop you dying from bacterial infection, once a huge (biggest?) killer.

You still have to die of something....


There’s also improved testing: previously old people would die of “unknown natural causes”, now they’re diagnosed as having cancer.


Culture of being proud of ignorance is on full display in the replies to my comment, btw.

Exactly the kind of less bright people I was referring to. People so devoid of common sense that they utterly reject anyone who applies systematic thinking. Unreal.


Already done: https://imagetwin.ai/


I believe the phrase is "trust but verify."


> Unfortunately many less bright people seem to interpret this as "never trust science"

Unfortunately many "smart" people insist on telling "dumb" people how to think instead of having the introspection and humility to examine where we've gone wrong and spending a lot of time and effort on fixing it.

No, easier to gaslight the idiots


Exactly. “This is bad because dumb people won’t believe us.”

Not “This is bad because it undermines science, is lying, and unethical, regardless of what people think.”


A lot of people are working to fix the K-12+ educational system which is the root cause of many stupid people, but beyond that, it's objectively hard to fix stupid.

Most people, stupid or otherwise, wouldn't take a critical thinking course, for example. Many would have no time for it, to say little of motivation. And some are proud of being stupid and will shun anything they consider "intellectual".


This is a bad take. Because even the craziest of crazies (the flat earthers) are actually doing the scientific method by running experiments to test their crazy conjectures. What makes them crazy is that they have a poor sense of discernment about when it's worth it to trust authorities and when it's not; they're not dumb, and education is not really failing them (in the sense that they "aren't being taught what science is"). They are arguably better scientists than "trust the science" folks because at least they are getting out there and moving atoms to test shit.

On the other hand intellectuals have poor discernment too, they overly trust the literature and the interpretation of working scientists. These two phenomena are two sides of the same coin, and the flat earthers/antivaxx etc crazies are directly downstream of the "trust the science" bad behaviour, especially since the education system has taught them what good science is and they are rightly perceiving that good science is not being done.


> This is a bad take.

This is a bad start to a post on HN. Less confrontational would be better.

> Because even the craziest of crazies are actually doing the scientific method.

I can name several conspiracy theories off the bat which I've heard repeated by people who have not tested the theories. The vast majority of them, at best, found a video of someone on the internet with "dr" in the username. The percent of "jet fuel can't melt steel beams" folks who have actually personally tested whether steel can lose structural integrity at such temperatures, is absolutely minuscule.

> They are arguably better scientists than "trust the science" folks because at least they are getting out there and moving atoms to test shit.

Reading something on the internet is moving photons and electrons at best, but I can't speak to whether a given Parler thread which convinced someone of a given conspiracy was read inside or outside.


Let go man. Science is rotten. I spent a decade doing science. You wouldn't know unless you were there (and half the people who were are so wrapped up in the holiness of science as part of their identity that they can't see the rot).


Are you sure you replied to the right post? This reply seems to have nothing to do with mine.


All science does is show us how to move a whole bunch of piles of shit over into one big pile of shit, off in the corner. Or perhaps onto an unsuspecting group of poor people, because the burden demands to be held and somebody has to hold the bag of shit. Right?

We may interpret this as convenience... But the tragedy of the commons says that we can't even have science if someone isn't holding what it is we don't want to be holding... I'm not saying I didn't love science or not think it's super interesting or anything... Can we really say it alleviates suffering or does it displace it for one group of people until a new problem comes in and takes that one's place? How many people here will be holding the bag of shit tonight? USA numba 1!!!


At some point, the good scientists leave and the fraudsters start to filter for more fraudsters. If that goes on, it's over: academia is gone. Entirely. It cannot grow back. It's just a building with con men in lab coats.

My suggestion stands: Give true scientists the ability to hunt fraudsters for budgets. If you hunt and nail down a fraudster, you get his funding for your research.


All the fraudsters will nail the honest ones before they know what hit them.


I mean, the replication crisis has come and gone; it's been about 5 years now. The fraudsters are running the place and have been for at least the last half decade, full stop.


That is a ridiculous exaggeration. Yes, like in every walk of life, fraud happens. However, the extreme success of academic science shows that most of it is real, honest, work.


It is field dependent, but I'm not entirely against what the parent said. I work in ML and I am positive that all this is going on[0]. There are lots of true believers, though, and that's what makes things extra hard. Sometimes the fraudsters take over by making the system incompetent, and everyone is in good company. In this way, fraud isn't committed with intent, weirdly enough.

Just look at all the ML reasoning papers. Whether you believe LLMs reason or not, an important factor you have to disentangle when trying to prove this is what data the models were trained on, to distinguish memorization from reasoning. You won't find this analysis, because it's almost impossible given that the data is a trade secret, even at Meta.

This year at ACL, a paper (Mission: Impossible Language Models) won a best paper award despite its results running contrary to its claim, and very obviously so.

Or there is the HumanEval paper, which proposed that they had created a dataset that was not contaminated because they "hand wrote" over a hundred "LeetCode-style problems". 60 authors and they didn't bother to check... But why would you check when the questions are things like "calculate the mean"? What fucking programmer thinks there isn't Python code on GitHub pre-2021 that calculates the mean, takes the floor, checks if a string is a palindrome, calculates the greatest common divisor, or any similar question? How did this become an influential dataset‽

[0] The big reason I'm upset is that I love the field. I'm not in it for money. I'm in it because I grew up on Asimov books and because I want our community to work towards AGI. But now every person who can do print("hello world") feels they can lecture me, a published researcher, about what these machines do, while they talk about the Turing test (lol, what is this, the 60's?) and how they're black boxes (opaque, but certainly not black). I'm fine with armchair experts, but not when they come in swinging a baseball bat.


How so? The fraction of academic science that is applied to anything, anywhere, with clearly identifiable impact is tiny, and the impact is often of detrimental quality.


Pretty much every industry functions on a foundation of basic scientific knowledge discovered in academic labs, run by honest people trying to understand the natural world.

Fraud happens. Bad theories happen. The slow turn of scientific wheels takes centuries to crush them but it will always win. Profit doesn't turn those wheels. Our entire modern lifestyle is the impact.


Is it possible that society is the thing with turning wheels and knowledge is the thing getting crushed underneath it?


That's how it was, and it can easily go back to that.


That's not entirely true. There is research with huge impact, and the money leverage keeps it brutally honest. Nothing makes it out of a lab and into a fab without certainty that the method works reproducibly, at least in lab conditions. There are many billions, but not that many billions.


> nothing makes it out of a lab and into a fab, without certainty of the method working at least in lab conditions reproducible.

The article describes multiple full clinical drug trials both completed (inconclusive==can't prove harm, effect is so small that benefit cannot be excluded) and ongoing that are fundamentally built on the fabricated results which literally do represent many billions of private investment.

Ultimately, research falsification is a con game, and you seem to have faith in something magical about "money leveraging" that smokes out cons. I do think shit eventually hits the fan in the market, since reality affects the market, but it's not because the market is more rigorous than "science". Ultimately, investors are not experts, and they are listening to the same people who do "peer review" and didn't notice (or ignored) the fraud in the first place.


Fair point, thank you for your time and well crafted argument.


It's really easy to verify that this claim is pure nonsense - we still have very real, astonishing scientific progress.


But is it proportional to the investment of time and the number of people participating in it? If you take the amount of money/people invested in the sciences at a point in time, let's take: https://en.wikipedia.org/wiki/Solvay_Conference

And then put the number of contributors and the rate of progress against one another, my guess is that you would see a massive slowdown of progress, so massive, actually, that explanations for the slowdown abound. There is the "all the easy apples have been picked" theory, the "only life-and-death systemic competition forces contributors to produce good science" theory, and a ton of others. All basically trying to explain the same phenomenon, which could also be explained by: "hackers, hacking hackers, hacking processes, leave no financial substance behind to have people who actually do the scientific leg-work."


It becomes a survival bias: if people can cheat at a competitive game (or research field) and get away with it, then at the end you'll wind up with only cheaters left (everyone else stops playing).


You could improve the situation by incentivizing people to identify cheaters and prove their cheating. If being a successful cheater-hunter was a good career, the field would become self-policing.

This approach opens its own can of worms (you don't want to overdo it and create a paranoid police-state-like structure), but so far, we have way too little self-policing in science, and the first attempts (like Data Colada) are very controversial among their peers.


As they say: the scum rises to the top, true for academia, politics etc, any organization really.

Quote: "The Only Thing Necessary for the Triumph of Evil is that Good Men Do Nothing"

My own nuanced take on it:

Incompetent people are quick to grab authority and power. Principled, competent people, on the other hand, are reluctant to take on positions of authority and power even when offered. For these people, positions of power a) have connotations of tyranny and b) are uninteresting (technical problems are more interesting). The reluctance of principled people to form coalitions to keep out the cheaters, because they are a divided bunch themselves, also exacerbates the problem, whereas the cheaters can often collude (temporarily) to achieve their nefarious goals.


Those you'd want to lead recognize leadership is a responsibility often times marked by personal sacrifice.

Those you'd never want to lead mostly regard leadership as a privilege used for personal gain.


> Those you'd want to lead recognize leadership is a responsibility often times marked by personal sacrifice.

True. It's not a coincidence that they say: it's very lonely* up there.

There are some ways around this, but they are not easy, and perhaps even impossible.

(*=ironically and paradoxically,the cruelest dictator is also a very lonely person)


Loneliness causes suffering, and suffering does inhibit empathy, or so I read once in a pop sci recounting of a study. It would also make sense that a lack of empathy feeds back into the loneliness by harming relationships. Cyclic determinism, why are you everywhere in life?


I would add c) often entail incalculable risk for any who aren't already corrupt enough to be climbing into bed with other evil powers or blackmailing, extorting, or exploiting their way into a safe haven and a golden parachute.

Who willingly remains vulnerable before the Sword of Damocles?


That is a very astute observation. Unfortunately a situation like that cannot be fixed.

An organization that is built from the ground up, one that NEVER compromises on the quality of people as it grows, is the only way out. This is easier said than done, because it often means that the people who build organizations have to spend nearly 100% of their time looking for and screening potential candidates for 'leadership' positions. What generally happens is that when organizations grow, they have to hire people to keep the business running, and they often compromise.

Now there is a way out of this problem, if the founders of an organization are rich, in which case they can spend 100% of their time screening candidates without having to worry about growing the business. But even this task is not easy, as one can go on for years before finding a candidate who has sufficient integrity, wisdom, and intelligence, and, perhaps most importantly, is willing to exercise power/authority when needed.


This is why cheaters should never ever ever be allowed to play again with fair players, only with cheaters


Cheaters playing with cheaters is what we already have. The problem is not the cheaters; it's the game that deliberately selects for cheaters.


And thus we have the Earth, where everything looks like a broken MMO in every direction. Everybody refuses to participate because it's 100% griefers, yet nobody can leave.

Business: Can you get a law written to command the economy to give you money and never suffer punishment? Intel fabs (https://reason.com/2024/03/20/federal-handout-to-intel-will-...), Tesla dealers (https://en.wikipedia.org/wiki/Tesla_US_dealership_disputes), Uber taxis (https://www.theguardian.com/news/2022/jul/10/uber-files-leak-...), etc. Are you wealthy enough that there's nothing "normals" can really do? EBay intimidation scandal (https://en.wikipedia.org/wiki/EBay_stalking_scandal).

Economic Academia: Harvard Prof. Gino (https://www.thecrimson.com/article/2024/4/11/harvard-busines...)

Materials Academia: Doping + Graphene = feces papers (https://pubs.acs.org/doi/pdf/10.1021/acsnano.9b00184) "Will Any Crap We Put into Graphene Increase Its Electrocatalytic Effect?" (Bonus joke! Crap is actually a better dopant material.)

Gaming: Roblox double cut on sales (that people mostly just argue about how enormous it is, because the math's purposely confusing) (https://news.ycombinator.com/item?id=28247034)

Politics: Was Santos ever actually punished?

Military: The saga of the Navy, Pacific Fleet, and Fat Leonard (https://en.wikipedia.org/wiki/Fat_Leonard_scandal) "exploited the intelligence for illicit profit, brazenly ordering his moles to redirect aircraft carriers, ships and subs to ports he controlled in Southeast Asia so he could more easily bilk the Navy for fuel, tugboats, barges, food, water and sewage removal."

Work: "Loyal workers are selectively and ironically targeted for exploitation" (https://www.sciencedirect.com/science/article/abs/pii/S00221...)

There's others, that's just already so many...


Anything not Forbidden is Compulsory.


I used to work with someone, up until the point I realized they were so distant from any form of reality that they couldn't distinguish between fact and fiction.

Naturally, they are now the head of AI where they work.


Hacker news is completely flooded with “AI learns just like humans do” and “AI models the human brain” despite neither of these things having any concrete evidence at all.

Unfortunately it isn’t just bosses being fooled by this. Scores of people push this crap.

I am not saying AI has no value. I am saying that these idiots are idiots.


The reality on the ground (for me) has been refreshingly sane.

I work at a company with a substantial BI/ML footprint. Our head of research was tasked with evaluating the applicability of LLMs to either our product or our daily workflows.

To date the consensus is that there isn't much there for our product, that integrating LLMs into our models would introduce more problems than it would solve, and that we should cautiously experiment with allowing engineers to use tools like co-pilot, provided we take adequate steps to protect our IP.

It was a reasonable exercise carried out by a reasonable person for reasonable reasons (from my POV). I imagine this isn't an uncommon story? Color me pessimistically optimistic?

For practical reasons we need to have an answer to the buzzword bingo when communicating with customers/company ownership, and now we do. Now we don't talk much about it because there isn't much to talk about.


Refreshing but rare; usually this kind of eval gets done by someone excited to do it because they've already been "intellectually captured" by the hype.


What evidence for „AI models the human brain“ do you want? Isn’t a neural network pretty clearly a simplified model of the working of the human brain? What is there to prove?


Neural networks are not a model of the working of the human brain. They are based on an extremely simplified approximation of how neurons connect and function (which while conceptually similar is a terrible predictive model for biological neurons) and are connected together in ways that have absolutely zero resemblance to how complex nervous systems look in real animals. The burden of proof here is absolutely on showing how LLMs can model the human brain.
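For what it's worth, the "extremely simplified approximation" can be written down in full in a few lines. This is a hypothetical sketch (the function name and all numbers are mine, not from any real library): an artificial "neuron" is just a weighted sum pushed through a squashing nonlinearity, which is the entire extent of the resemblance to a biological neuron, with no spiking, no neurotransmitter chemistry, and no timing dynamics.

```python
import math

def neuron(inputs, weights, bias):
    # The whole "neuron": a weighted sum of inputs plus a bias...
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through a sigmoid, loosely standing in for a firing rate.
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative values only.
out = neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.2], bias=0.05)
print(out)
```

Everything a deep network adds on top is layers of these units plus gradient descent on the weights, none of which has a biological counterpart in this form.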


> They are based on an extremely simplified approximation of how neurons connect and function (which while conceptually similar is a terrible predictive model for biological neurons) and are connected together in ways that have absolutely zero resemblance to how complex nervous systems look in real animals.

Well then you already think it’s a model. Being a simplified approximation makes it a model.

Just as I said in another comment, a SIR model also models infection behavior of COVID in humans, even though it is extremely simplified and doesn’t even look at individuals. It’s basically just a set of differential equations that give a curve that looks like infection numbers. But that is exactly what makes it a model. It’s a simplified abstraction.
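To make the SIR analogy concrete, here is a minimal sketch of that "set of differential equations" (parameter values are illustrative, not fitted to any real outbreak): three coupled rates for susceptible, infected, and recovered, integrated with naive Euler steps, which is enough to produce the characteristic rise-and-fall infection curve without modeling any individual.

```python
def sir_curve(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=1.0):
    """Euler-integrate the SIR equations; returns the infected fraction over time."""
    s, i, r = s0, i0, 0.0
    infected = [i]
    for _ in range(int(days / dt)):
        ds = -beta * s * i          # susceptibles become infected
        di = beta * s * i - gamma * i  # infections grow, recoveries drain them
        dr = gamma * i              # infected recover
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        infected.append(i)
    return infected

curve = sir_curve()
peak = max(curve)
peak_day = curve.index(peak)
print(f"Infections peak around day {peak_day} at {peak:.1%} of the population")
```

The point stands: nothing here is specific to COVID, humans, or pigs; it is an abstraction of infection dynamics, which is exactly what makes it a model.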


Neural networks are much closer to modeling a brain than other approaches to AI, e.g. symbolic reasoning. There will always be differences (it's a machine, not meat), but it's fair to say the approach is at least "brain-like".

Your position sounds like a No True Scotsman fallacy.


Sorry if it came across as non-falsifiable, that was not the intent.

Neural networks do not directly encode high-level reasoning and logic, yes. But in the spectrum of “does this model the actual functioning of an animal/human brain”, they lack both a 1st order model of how biological neurons and neural chemistry behaves, but also lack anything like the multiple levels of structural specialization present in nervous systems. That’s the basis for my argument.


That's true, but we also don't know that the multiple levels of structural specialization are necessary to produce "approximately human" intelligence.

Let's say two alien beings landed on earth today and want you to settle a bet. They both look weird in different ways but they seem to talk alike. One of them says "I'm intelligent, that other frood is fake. His brain doesn't have hypersynaptic gibblators!" The other says "No, I'm the intelligent one, the other frood's brain doesn't have floozium subnarblots!"

Who cares? Intelligence is that which acts intelligent. That's the point of the Turing test, and why I think it's still relevant.


I think we are arguing on different tracks, probably due to a difference in understanding of ‘model’.

There are arguments to be made, including the Turing test, for some sort of intelligence and potential equivalence for LLMs. I am probably more skeptical than most here that current technology is approaching human intelligence, and I believe the Turing test is in many ways a weak test. But for me that is different, more complex discussion I would not be so dismissive of.

I was originally responding to the claim “isn’t a neural network a simplified model of the working of the human brain”. A claim I interpreted to mean that NNs are system models of the brain. Emphasis on “model of the working of”, as opposed to “model of the output of”.


AI fanatics claiming to know SHIT THAT TOP-TIER NEUROSCIENCE HAS NO FIRM IDEA OF.

We know there are neurons and that electric signals hit between them. That’s it. Literally everything else about your claim is bogus nonsense.


One clear piece of evidence would be ruling out "AI models the corvid brain" or "AI models the cephalopod brain" which might narrow it toward the human brain.

That it's functionally impossible to do either leads me to believe that "it models some form of intelligence" is about the best we can prove.


I don’t understand the standard of modeling you seem to assume.

Modeling a human brain, a cephalopod brain and a corvid brain aren’t even mutually exclusive if your model is abstract enough.

When I say „a neural network models a human brain“, I’m talking about the high level concept, not the specific structure of a human brain compared to other brains. You could also say that it models a dogs brain if you wanted to. It’s just the general working principle that is kind of similar. Does that not count as a model to you?

Edit: Here’s a simple example: I would say that a simple SIR model „models COVID infection in humans“. But that doesn’t mean it can’t also model Pig Flu in Pigs. It’s a very abstract model so it applies to a lot of situations, just like a neural network basically models the brain of every reasonably advanced animal.


I think a lot of people don't abstract their brain model when they say "models a human brain", or they'd say "models biological intelligence", etc. Specifically, I don't think there are any human traits in LLMs other than having mostly been trained on human outputs. They see tokens and predict tokens; very different sensorium from humans. There aren't any specific corvid or cephalopod traits either afaik.

Biological brains don't use gradient descent and don't seem to use 1-hot encoding/decoding for the most part.


Pointing out differences doesn’t mean it’s not a model, that’s what makes it a model and not a replica. Saying „A neural network is a model of the human brain“ doesn’t imply that it’s a direct simulation of the structure and scale of a human brain, it just means that the neural network is based on a simplification of how neurons in a brain work. That’s the entire claim.


How about "hallucinations"? They are exactly what students produce during exams when they don't exactly know the subject: plausible sounding but internally incoherent sentences.


> Modeling a human brain, a cephalopod brain and a corvid brain aren’t even mutually exclusive

Take an existing implementation, ChatGPT4 or whatever - is it closer to a brain of a rat or of Albert Einstein?

If you are not sure, then, well, it just has some sort of intelligence, not a ‘model of a human brain’.

I would wager it’s closer to a rat.

Also the phrase implies that we understand the difference between human brain vs brain of an elephant. For some reason humans are more capable, it’s not just size. At the moment we don’t understand.


It’s probably closest to modeling a housefly, but that doesn’t mean it’s not also a model of a human brain. Being a model doesn’t require that it exactly captures every aspect and scale, it means that it tries to approximate the working principle. Just like a SIR model doesn’t really model how an infection in an organism works, but it still models infection behavior of COVID between humans.


IMHO, it's clear that no human can learn like AI. AI outperformed humans with huge margin in some areas already, while their performance is laughable in other areas.

Also, it's obvious that our brain is not built like AI models.

There are similarities between humans and current AI models, but there are also huge differences, which don't allow one to be easily mapped onto the other.


Of course, that’s what makes it a model, not a digital twin of a human brain. But nobody claimed that, so having a roughly similar working principle is enough for me to call it a model of a brain.


No, it's not a model of a human brain, just like a bicycle is not a model of human legs. It's artificial intelligence. There are similarities, but we cannot use current AI models to study the human brain: they're useless for that job.

We can create a model of human brain (Artificial Brain) using a bunch of AI models, of course, but it's not done yet.


Well it’s a model just like a Mindstorms NXT (This guy https://www.lego.com/cdn/product-assets/product.img.pri/8547... ) is a model of human bipedal walking. It’s very simplified and basic but that basic idea is there.

A model doesn’t have to be useful to study the thing you’re modeling, you can also just be interested in the output because it’s simpler than using the original thing. Modeling a human brain with a neural net and using it is simpler than directly simulating a human brain, so that’s what we do. Not being useful to study humans brain on doesn’t mean it’s not a model.


It's a model of a humanoid robot. It's not a model of a human.


A model of a model of X is also a model of X. It absolutely is a very simple model of human walking. You’re just using „model“ in a very narrow sense that excludes many things that are commonly called models.


Models are used as orders-of-magnitude-cheaper substitutes for the real thing in learning, predicting, and so on, known as «modelling».

AI, in its current state, is not good enough to serve as a substitute for a human, or a human brain, but good enough to serve as a substitute for human-level intelligence. At this point, we are able to model the brain of a fly.

It looks like you are conflating scheme, model, and similarity.

A model needs a map to transfer knowledge in both directions: from the real thing to the model, and from the model to the real thing, while with a scheme, knowledge is transferred in one direction: from the real thing to the scheme. The toy humanoid robot is just a schematic representation of a real human.

Moreover, similar things are not models of each other. Apes are not models of humans and vice versa.


It just depends on your definition of a model, but to me, a neural network is modeled after how a human brain works. If apes were man-made, I would also count them as a model of a human.


What's wrong with my definition of a model?

We know how to produce apes.


Well, apes know how to produce apes, we mostly just give them some privacy.


We can clone animals and grow them in a lab.


Calls to mind Isaac Asimov's "shotgun curve".

https://archive.org/details/Fantasy_Science_Fiction_v056n06_...


That story reminds me of this gem: https://pages.cs.wisc.edu/~kovar/hall.html


Similar story: computational biologist, my presentations involved statistics so people would come to me for help, and it often ended in the disappointing news of a null result. I noticed that it always got published anyway at whichever stage of analysis showed "promise." The day I saw someone P-hack their way to the front page of Nature was the day I decided to quit biology.

I still feel that my bio work was far more important than anything I've done since, but over here the work is easier, the wages are much better, and fraud isn't table stakes. Frankly in exchange for those things I'm OK with the work being less important (EDIT: that's not a swipe at software engineering or my niche in it, it's a swipe at a system that is bad at incentives).

Oh, and it turns out that software orgs have exactly the same problem, but they know that the solution is to pay for verification work. Science has to move through a few more stages of grief before it accepts this.


I'm mostly out now, but I would love to return to a more accountable academia. Often in these discussions it's hard to say "we need radical changes to publicly funded research and many PIs should be held accountable for dishonest work" without people hearing "I want to get rid of publicly funded research altogether and destroy the careers of a generation of trainees who were in the wrong place at the wrong time".

Even in my immediate circles, I know many industry scientists who do scientific work beyond the level required by their company, fight to publish it in journals, mentor junior colleagues in a very similar manner to a PhD advisor, and would in every way make excellent professors. There would be a stampede if these people were offered a return to a more accountable academia. Even with lower pay, longer hours, and department duties, MORE than enough highly qualified people would rush in.

A hypothetical transition to this world should be tapered. But even at the limit where academia switched overnight, trainees caught in such a transition could be guaranteed their spots in their program, given direct fellowships to make them independent of their advisor's grants, given the option to switch advisor, and have their graduation requirements relaxed if appropriate.

It's easy to hem and haw about the institutional knowledge and ongoing projects that would invariably be lost in such a transition, even if very carefully executed. But we have to consider the ongoing damage being done when, for example, biogen spends thousands of scientist-years and billions of dollars failing to make an alzheimers drug because the work was dishonest to begin with, or when generations of trainees learn that bending the truth is a little more OK each year.


What's amazing to me is that journals don't require researchers to submit their raw data. At least, as far as I know.

The only option for someone who wants to double check research is to completely replicate a study, which is quite a bit more expensive than double checking the researcher's work.


Journals are incentivized to publish fantastic results. Organizing raw data in a way that the uninitiated can understand presents serious friction in getting results out the door.

The organizations who fund the research are (finally) beginning to require it [0][1], and some journals encourage it, but a massive cultural shift is required and there will be growing pains.

You could also try emailing the corresponding authors. Any good-faith scientist should be happy to share what they have, assuming it's well organized/legible.

[0] https://new.nsf.gov/public-access [1] https://sharing.nih.gov/


> The organizations who fund the research are (finally) beginning to require it [0][1], and some journals encourage it

Feels like the wrong way round - the organisations paying for journals should be demanding it as proof?


It's becoming more common for journals to have policies which require that raw data be made available. Here's some background: https://en.wikipedia.org/wiki/FAIR_data

One of the purposes of a site on which I work (https://fairsharing.org) is to assist researchers in finding places where they might upload their data (usually to comply with publishers' requirements).


Replicating the results from someone's original data is difficult and time consuming, and other researchers aren't getting paid to do that (they're getting paid to do new research). And of course the (unpaid) reviewers don't have time either.


Re: the role of (gel) images as the key aspect of a publication. To me this is very understandable, as they convey the information in the most succinct way and also constitute the main data & evidence. Faking this is so bold that it seemed unlikely.

The good news IMO: more recent MolBio methods produce data that can be checked more rigorously than a gel image. A recent example where the evidence in form of DNA sequencing data is contested: https://doi.org/10.1128/mbio.01607-23


> don't really tell you very much

???

I think this statement is either meaningless or incorrect. At the very least your conclusion is context dependent.

That being said, I ran gels back in the stone ages when you didn't just buy a stack of pre-made gels that slotted into a tank.

I had to clean my glass plates, make the polyacrylamide solution, clamp the plates together with office binder clips and make sure that the rubber gasket was water tight. So many times, the gasket seal was poor and my polyacrylamide leaked all over the bench top.

I hated running them. But when they worked, they were remarkably informative.


Count me in the club of failed scientists. In my case it was the geosciences. I would spend hours trying to make all my analysis reproducible and statistically sound while many colleagues just published preliminary simulation results, obtaining much more attention and even academic jobs. On the flip side, the time spent improving my data processing workflows led to good engineering jobs, so the time wasn't entirely wasted.


> raise so many... emotions in me... and I now believe [faking gels] is a common occurrence

On the other hand, shysters always project, and this thread is full of cringe vindications about cheating or faking or whatever. As your "emotions" are probably telling you, that kind of generalization does not feel good, when it is pointed at you, so IMO, you can go and bash your colleagues all you want, but odds are the ones who found results did so legitimately.


Regarding "shysters always project": it rings true to me, but given the topic, I'm primed to wonder how you could show that empirically, and if there's any psychology literature to that effect.


As long as it's all peer reviewed!!


I'm assuming /s above.

Because the amount of pencil-whipped "peer review" feedback I've received could fit in a shoe box, because many "reviewers" are looking for the CV credit for their role and not so much the actual effort of reviewing.

And there's no way to call them out on their laziness except maybe to not submit to their publication again and warn others against it too.

And, to defend their lack of review, all they need to say to the editor anyway is: "I didn't see it that way."


I never understood how peer reviewers are supposed to "validate" a paper, or why the general public tacitly assumes they do. Authors make claims based on data reviewers don't have direct access to, from experiments whose execution they obviously can't supervise. Reviewers are forced to accept the claims at face value. In my experience (and I've been on both sides), it's more about overall quality and impact. Journals don't want badly written papers on unremarkable topics. It's closer to being a harsh anonymous editor than a real safeguard of "science™."


I would say peer review is guaranteed to have major problems, with researchers writing what their peers will approve and reviewers afraid to diverge from the party line.


Many solutions involving posting data in repositories or audits are being discussed in the comments.

But given that many people are saying that they noticed and quit academia, how about also creating a more direct 'whistleblower' type of system, where complaints (with detailed descriptions of the fraud or a general view on what one sees in terms of loose practices) goes to some research monitoring team which can then come in and verify the problems.


> how about also creating a more direct 'whistleblower' type of system

There needs to first be a system of checks and balances for this to work. The people at the top already know and condone the behavior; who are the whistleblowers reporting to?

"We represent the top scientists in our field; these are a group of grad students. Who are you going to believe?"

And of course they can easily shut anyone down with two words: "science denier"


Gels tell you quite a lot; it's the question you are asking, more than the technique, that determines whether the results are useful. Of course people lie and cheat in science, wet lab and dry lab alike. So many dry lab papers are out there where the code is supposedly available "by request" and we take the figures on faith.


This is why institutions break down in the long run in any civilization. People like you, people of principle, are drowned out by agents acting exclusively in their own interest without ethics.

It happens everywhere.

The only solution to this is skin in the game. Without skin in the game the fraudsters fraud, the audience just naively goes along with it, and the institution collapses under the weight of lies.


The iron laws of bureaucracy are:

1) Nothing else matters but the budget

2) Over the long run, the people invested in the bureaucracy always win out over the people invested in the mission/point

Science is just as susceptible to 2) as anything else.


> The only solution to this is skin in the game.

Another solution is the opposite, no skin in the game

Remove concrete incentives and pay salaries


Salaries without skin in the game leads to failure. People just trade for their own benefit within and outside the system.


I feel this way about every flashy startup with billion dollar valuations.

It seems amazing that they are pulling off what seems impossible.

Years later, we learn they really aren’t. They unjustifiably made a name for themselves by burning VC money instead of running a successful business.


Then hiring the uninteresting gel seems preferable.


> Would I have been more successful

What are you talking about? You _are_ successful. You're not a fraud like all those other tossers.


To me, at the time, successful would have been getting a tenure-track position at a Tier 1 university, discovering something important, and never publishing anything that was intentional fraud (I'm OK with making some level of legitimate errors that could need to be retracted).

Of those three, I certainly didn't achieve #1 or #2, but did achieve #3, mainly because I didn't write very much and obsessed over what was sent to the editor. Merely being a non-fraud is only part of success.

(Note: I've changed my definition of success; I now realize that I never really wanted to be a tenured professor at a Tier 1 university, because that role is far less fulfilling than I thought it would be.)


Most often #1 is sought after as the prerequisite for achieving #2. And due to the structural factors on the number of positions available, funding available, and supply of new PhDs and postdocs, it's most often a really good idea to avoid #1 these days.


PhDs and postdocs aren’t fungible. The ones worth working with go to the same top 20-40 programs in their field. Even finding PhDs that have the bare minimum qualification of “caring about the field” can be tough as there are all sorts of weird incentives pushing people towards PhDs. Applies for most other things in science as well. Number of papers has greatly increased. Number of papers worth reading has not increased nearly as much.


The older I get the more sympathy I have for people who claim they didn't become traditionally successful due to adhering to ethical principles.

I used to think that was just cope. But now I've been in a few situations where there's fairly clear opportunities to profit from shady behavior, and seen colleagues that I formerly respected jump at the chance.

Thanks for not being a fraud.


As another commenter said, thanks for being a scientist and not a fraud. Whatever you think you achieved (which may be more than you think).


That is not enough to most people. And if it is enough for others, then it is probably because they were fortunate enough to fall back on something better.


> That is not enough to most people.

You absolutely nail the most profound pathology of our time. Being a decent, honest, smart, hardworking person is just "not enough" any more. We've created a normatively criminal society.


Was it ever? Ancient texts are full of complaints that people have become immoral, unlike in the Golden Age, which was conveniently long ago.


Of course. We've always had criminals. And as you say, throughout history people have complained about it. All I'm saying is that today people positively celebrate it, and - according to the parent poster - we now demand criminality as a necessity for "success".


Just looking at the capitals of many countries, lots of monuments (statues, etc.) are dedicated to conquerors who were celebrated for successfully killing, subjugating and stealing from others.

There are still people that have success without anything criminal.


> Being a decent, honest, smart, hardworking person is just "not enough" any more.

Indeed. People want to be able to own a home and/or raise a family, too.


> own a home and/or raise a family

Agreed. And any system that cannot furnish these human basics to decent, honest, smart, hardworking people neither has nor deserves a future. Any criminality people have ought to be directed at that system rather than at one's fellows.


Indeed! You would also have been more "successful" selling drugs to teens, or trafficking in human organs. But you did not, and that's a good thing.


> At the time (20+ years ago) it didn't occur to me that anybody would intentionally modify images of gels to promote the results they claimed

Fraud, I suspect, is only the tip of the iceberg; worse still is the delusion that what is taught is factually correct. A large portion of the mainstream knowledge that we call 'science' is incorrect.

While fraudulent claims are relatively easy to detect, claims that are backed up by ignorance/delusion are harder to detect and challenge because often there is collective ignorance.

Quote 1: "Never ascribe to malice that which is adequately explained by incompetence"

Quote 2:"Science is the belief in the ignorance of experts"

Side note: I will not offer to back up my above statements, since these are things that an individual has to learn on their own, through healthy skepticism, intellectual integrity and inquiry.


> A large portion of mainstream knowledge that we call 'science' is incorrect.

How do you know that? Can you prove it scientifically?

> claims that are backed up by ignorance/delusion

In that case, they are not "backed up"

> I will not offer to back up my above statements

> an individual has to learn it on their own, through ... inquiry

May I "inquire" about your reasoning?


> How do you know that? Can you prove it scientifically?

We know it as a point of meta-analysis. If you go read a 30-year-old biology textbook, a lot of what it says will have been proven false by now. 30 years before that, the textbook says a lot of other wrong things that were disproven 30 years later.

There is no fundamental reason other than hubris to think that the current understanding of the thing is perfect. There is a huge body of evidence that we are inclined to promulgate imperfect understandings of scientific subjects, though. So in the absence of clear, definitive evidence that something is fundamentally different now, the best course is to assume that a lot of current "scientific knowledge" is wrong.


owenpalmer, as I stated in my side note I will not be explaining myself, my sincere apologies for this.

You see, there are some things that cannot be taught, or cannot easily be taught, especially to adults, i.e. healthy skepticism and questioning of statements/facts that come from authority figures (example: science/religion etc). A person will have to make his or her own effort. External influences, especially debates with the heretics, will generally only delay that progress.

Now, if you're already open to the possibility that mainstream science could be very wrong, I can possibly nudge you in the correct direction. I have explored certain areas of biology (especially nutrition) but not all; each sub-area of biology (or any of the sciences really) is vast, so I have to rely on heretics who are 'experts' in their areas of specialization.


I love this, do tell the direction to be nudged in.

I wish to experience this new level of understanding.


Nutrition is one place where you can start, because it is possible to do some practical experiments.

But, be warned, you will easily spend a decade investigating it.

I picked the subject, because I'm sure that in the end you will have achieved something of great value: your own relatively better health. (I said relatively, because diet is only one of the factors that influences health, although I say it's an important one)

Expanding on the sibling comment by 'mistercheph':

1) Start on a blank slate. Have no notion about the subject. This is easier said than done. Un-learning is way more difficult than learning, even for the simplest of topics.

2) Read really old literature on the subject, the likes of Aristotle; the alternative is to listen to people on the internet who have read old literature. Initially do some cross-checking to make sure that these people are indeed telling the truth by actually checking the sources they cite. Progressively read the literature up to relatively modern times, say until 100 years back. Many research papers that are over 100 years old are very readable compared to current ones, even for lay people.

3) Experiment on yourself. Come up with your own observations on what is good and bad.

4) Now explore the conventional knowledge on the subject. For every x amount of time spent gleaning conventional knowledge, listen for x amount of time to the (1)heretic who takes the opposite stand and says otherwise.

5) Do several iterations of steps 1 to 4.

6) Form your own opinion after a decade.

(1) = the older credentialed heretic is a good bet. By speaking against the guild that he/she is affiliated with, he/she has got a lot to lose: his/her livelihood.

Good luck!


> Experiment on yourself

No control group?

> Form your own opinion after a decade

After a decade, differences in your body will mostly be because you are 10 years older.

Such experiments are largely pointless because the only people doing them are people who care about their health. You are more likely to be healthy because of the totality of your life choices, not the specific things you do in the diet experiment.


>No control group?

Nope. Start simple. ( if you can do a full blown experiment with multiple people, by all means go for it)

>After a decade, differences in your body will mostly be because you are 10 years older.

Correct. (Did you think I did not know that?)

>Such experiments are largely pointless

I leave you to decide that; given that there are no well-controlled experiments in mainstream nutrition, we are left with imperfect choices. (There are small experiments conducted by either small groups or individuals that are pretty high quality IMO.)


Let me guess. All of this is to say that you leaned into eating saturated fat, got high cholesterol, and because nothing happened in 10 years, the converging lines of evidence that constantly replicate over a half century are wrong.

Or maybe you started smoking and because there were no RCTs on smoking, then nobody can actually know if it's bad for you, but your N=1 has more epistemic value because you feel okay.

Just getting flashbacks from those hokey "carnivore diet" videos that Youtube keeps wanting me to watch.


lol 'smolder' I saw your comment before you deleted it.


There are two approaches here:

Pick one phenomenon in the world that you observe and don't have an account for, and try to come up with an account, assuming nothing except for your own observation and experimentation, of the causes of the phenomenon. Once you're done, follow the trail, reading only original or translated original documents, of the history of human descriptions of the phenomenon, do the "science" of "science" by observing the phenomena of observing and describing phenomena.

Go in the woods and read Plato and Aristotle and Sophocles for a year.

:D


Sorry, but this seems like a slightly longer version of 'do your own research'.


Not quite. It really depends on what you mean by 'research'. For most people, 'research' is just the consensus of the experts. On the other hand, if by research you mean "test it out yourself," I agree with you. (Not always practical though, so you have to choose a middle ground.)


Science sent us to the moon. “Do your own research” sent millions to their graves.

“Do your own research” is a movement that is fraught with grifting and basically foundationally just fraud to the core.

“Science” definitely has some fraudsters, but remains the best institution we have in the search for truth.


"Science" didn't send anyone to the moon. The science of getting to the moon was centuries old when it was applied. Engineers, working from alloys to computers, sent people to the moon.


> "Science" didn't send anyone to the moon.

> The science of getting to the moon was centuries old

Not sure I understand, does "Science" lose its capital after a century?


"Scientists dream about doing great things. Engineers do them."


Don't hate the player, hate the game. Governments made it so scientists only survive if they show results, and specifically the results the governments want to see. Otherwise, no more grants and you are done. Whether the results are fake or true does not matter.

"Science" nowadays is mostly BS, while the scientific method (hardly ever used in "science" nowadays) is still gold.


Do hate the player. People are taught ethics for a reason: no set of rules and laws are sufficient to ensure integrity of the system. We rely on personal integrity. This is why we teach it to our children.


When everything is a commodity (nothing runs outside of the market economy), the incentives are skewed to this type of behavior.

'Hate' the player' and 'hate' the game.

Some things shouldn't be part of the market economy - education, health and food.


How did those teachings work out for the fraudsters?


This is exactly what personal integrity is about. You make the right choices specifically and only because they are right. And they are hard choices because you are forgoing immediate gain.

Time favors integrity, and a lack of integrity is usually punished. Sometimes at the individual level, as you age and see the error of your ways. Sometimes at the group level, as you watch your community suffer.

"You reap what you sow."


Their parents taught them different ethics.


Most of them become distinguished in academia, and only a few get punished if they are too blatant or piss off too many people (see recent Ivies losing their presidents over academic fraud).


If people can get away with it, they will do it. Rules and punishments for breaking them exist for a reason.


> If people can get away with it, they will do it.

This is not universally true and individuals and societies don't have to be organized this way.

Why are streets in some countries filled with trash when others are clean? My community does not have anyone policing littering - yet our streets, parks and public areas are litter free.


You are kind of begging the question - Do all members of your community take mandatory ethics classes?


> If people can get away with it, they will do it.

This isn't true of everyone, but assuming it is increases the likelihood that it will become so. Because if everyone is trying to get away with it, why shouldn't I? That sort of breakdown in trust is high up on my list of worrying societal failure modes.


If they can get away with it, some people will do it.


My sole rational argument for Christianity is that it has made the societies that it is, or was, infused with, more honest than the ones that were not.


I’m not sure there are any rational arguments for Christianity. I say that as a practicing Christian. Either it meets a spiritual need in you, or it’s not very valuable. I imagine that belief in a God who punishes evildoers has kept some people honest throughout history, but the value of that is surely outweighed by the evil done in the name of that God.

I also don’t believe Christian societies are more honest than others. Every religion I know of teaches honesty, as does every non-religious ethical framework I can think of.


I unfortunately cannot track down the prior research that I did on this in the time I have available as a new father but I am as skeptical as they come and I seemed to have found some solid data suggesting that nations with Christian values are simply broadly more honest and trustworthy. This includes countries such as Iceland because even though it is technically atheist, there's inertia there still from the influence of Christianity on its culture.


Congratulations! I hope you're getting some sleep. If you do ever happen across that data I'd be interested.


If I read pagan Roman observations about life and people, they strike me as way, way more honest (sometimes brutally honest) than anything that we are used to for the last 1000 years, perhaps with exceptions like Machiavelli and some verbal jokes of the "unprintable" character.

In Christian theory, everyone is a sinner, but in a real Christian society, people try to cover up their particular sins all the time, at least against other laypeople (not the priest), which leads to the opposite of honesty - hypocrisy.



That sounds like it is loaded with a lot of "no true Scotsman" caveats including, perhaps, Scotland, which has crime like any other country.



People doing bad things, including Christians, is completely in line with the Christian teachings of original sin, the fall and concupiscence. It is human nature to do bad things and it is very difficult to over come this behavior.


We should really do some more honest crusades to export our honest values to the non believers to make the world a better, more honest place.


https://news.ycombinator.com/item?id=41684157

also the crusades were in significant part a competitive reaction to Islamic "proselytization via violence" so you're placing the blame on the wrong religion, there


A true scientist never says, "trust me" or even worse, "trust the science."

https://www.youtube.com/watch?v=gnPFL0Dr34c


Reminds me of this tweet that calls out the problem of the popular position "science is real"

"Science isn’t real - that’s terrible epistemology. It’s a process or method to generate and verify hypotheses and provisional knowledge, using replicable experiments and measurements. We don’t really know the real - we just have some current non-falsified theories and explanations that fit data decently, till we get better ones. The “science is real” crowd generally haven’t done much science and take it on faith."

[1] https://x.com/rao_hacker_one/status/1811295722760982939


It's probably better to say that engineering based on science is usually real. Engineering cares a lot less about falsifying theories and more on what existing theories seem to have general predictive value, and can be used to do stuff, including things where human lives are at risk (tall buildings, fire reduction materials, airplanes). And if there are failures out in the field, they're inspected, and those results are fed back to update both the practices, and the theories.

Personally I believe we live in an objective universe that can be understood by human brains (possibly using AI augmentation) and that our currently most advanced experimentally verified theories correspond to some actual true aspect of our universe. In that sense, science is real when the current theories match those aspects well enough to make generalizable predictions (general relativity and quantum mechanics).


I think you've hit on an important point. Science isn't about finding absolute truth, but rather about generating testable hypotheses that can be validated through experimentation and observation. This is why it's so crucial for scientists to follow the scientific method - they need to be willing to revise their theories based on new evidence. Your comparison between science and engineering is a good one. Engineering is often more focused on practical application, whereas science is more about understanding the underlying mechanisms that govern our world.


The intent of science is to find absolute truth. It's just that the mechanism by which we do so typically involves demonstrating that a finding isn't absolutely true. And we also lack the epistemological confidence to say that what we're observing represents the absolute truth, or that the idea of absolute truth is meaningful.


You have agency. Yes, the system provides incentives. However, you are not some pass-through nothingness that just accepts any incentives. You can choose not to accept them. You can leave the system. You're lucky - it's not a totalitarian system. There will be another area of life and work where the incentives align with your personal morals.

Once you bend your spine and kneel to bad incentives - you can never walk completely upright again. You may think and convince yourself that you can stay in the system with bad incentives, play the game, but still somehow you the player remain platonically unaffected. This is a delusion, and at some level you know it too.

Who knows? If everyone left the system with bad incentives, it may be that the bad system even collapses. It's a problem of collective action. The chances are against a collapse; it will probably continue for some time. So don't count on collapse. And even if one were to happen in your time, it would be scorched earth post-collapse for a while. Think as an individual - it's best to leave if you possibly can.


> Don't hate the player hate the game.

When the game is designed by the most succesful players you absolutely should hate the players for creating a shitty game.


You are clearly deeply disconnected from the actual practice of research.

The best you can really say is that the statistics chops of most researchers are lacking, and that someone researching, say, caterpillars is likely not to really understand the maths behind the tests they're performing. It's not an ideal solution by any means, but universities are starting to hire stats and CS department grads to handle that part.


"Nobody is ever responsible for their own actions. Economics predicting the existence of bad actors makes them not actually bad."



