
We buried the post because it seemed obviously LLM-generated. But please email us about these (hn@ycombinator.com) rather than posting public accusations.

There are two reasons why emailing us is better:

First, the negative consequences of a false allegation outweigh the benefits of a valid accusation.

Second and more important: we'll likely see an email sooner than we'll see a comment, so we can nip it in the bud quickly, rather than leaving it sitting on the front page for hours.

This thread is a great discussion, and I've kept coming back to it over the last couple of days to read more of it when I have a chance. I'm kind of disappointed that it was ended artificially. I think past some threshold of useful comments you just have to let it go.

Not sure if I should email this question, but is there any chance those 400+ votes are artificially inflated?

There’s no evidence of this, but a title that’s easy to agree with can often attract upvotes from people who don’t read the article.

Hey Tom! Earnest question here: I'm seeing on the order of one AI post a day on HN, sometimes more than that. It's good to know we can email you about these things, but I think most users don't know that - certainly I didn't for the last few months as this has been going on. It would be nice if there were an affordance on the site to flag these, similar to the existing flag function.

Thanks!


Just flagging them is fine. Emailing us with the link is even better. Part of me wonders if we should have a new, specific-purpose flag for generated content, but it's not the HN way to add new features for actions that existing UI already supports.

I would suggest a new feature here. I've been hesitant to flag them because I feel 'flag' is for obvious rule violations. I feel a bit bad flagging a submission off the front page when I only have a vague feeling that it was AI-written. (And sometimes AI-generated content isn't that bad, e.g. if an author just used AI to translate their writing into English.)

You buried a popular post because of the public accusation or just your "hunch"?

Why not let your audience decide what it wants to read?

I say this as a long-time HN reader who feels the community has become grumpier over the years, which I think is a shame. But maybe that's just me.


You're welcome to email us about this.

It's my job to read HN posts and comments all day, every day, and these days that means spending a lot of time evaluating whether posts are LLM-generated. In this case, the post seems LLM-generated or heavily LLM-edited.

We have been asking the community not to publicly call out posts for being LLM-generated, for the reasons I explained in the latest edit of the comment you replied to. But if we're going to ask that of the community, we also need to ask submitters not to post obviously-LLM-influenced articles. We've been asking that ever since LLMs became commonplace.

> I say this as a long-time HN reader who feels the community has become grumpier over the years, which I think is a shame. But maybe that's just me.

We've recently added this line to the guidelines: "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

HN has become grumpier, and we don't like that. But a lot of it is a reaction to the HN audience being disappointed by much of what modern tech companies are serving up, both in products and content, and it doesn't work for us to tell them they're wrong to feel that way. We can try, but we can't force anyone to feel differently. It's just as much up to product creators and content creators to keep working to raise the standards of what they offer the audience.


Thanks Tom, I appreciate the openness. You are seemingly overriding the wishes of the community, but it's your community and you have the right to do so. I still think it's a shame, but that's my problem.

> You are seemingly overriding the wishes of the community

That's false. The overwhelming sentiment of the community is that HN should be free of LLM-generated content or content that has obvious AI fingerprints. Sometimes people don't immediately realize that an article or comment has heavy LLM influence, but once they do, they expect us to act (especially in that case, as they feel deceived). This is clear from the comments and emails we get about this topic.

If you can publish a new version of the post that is human-authored, we'd happily re-up it.


>> You are seemingly overriding the wishes of the community

> That's false. The overwhelming sentiment of the community is that HN should be free of LLM-generated content or content that has obvious AI fingerprints.

Yeah, it is indeed, and for good reason: why would I spend time reading something the author didn't spend time thinking through and writing?

It's not that people don't like Postgres articles (otherwise, the upvotes would be much lower), but once you read a bit of the article, the LLM stench it gives off is characteristic. You know: Standard. LLM. Style. It's tiresome. Irksome. Off-putting.

What I'm wondering is: if LLMs are trained on "our" (in the wider sense of the word) writing style and spew it back at us, what dataset overused this superficial emphatic style to such a degree that it's now overwhelmingly the bog-standard generative output style?


Likely a lot of Medium posts? That's my theory, anyway.

I'm just sharing my thoughts as a long-time reader. Again, it's your show. You don't have to defend your actions. Thanks for all that you do.

I'd be grumpy over wasting my time on an HN post that's LLM-generated and doesn't say so. If I wanted that, I could be prompting any number of chat models available to me instead of meandering over here.

There are also 200+ comments here and, IMO, a good discussion, which is now unfortunately buried.

Feels like a net negative for the HN community.
