> You are seemingly overriding the wishes of the community
That's false. The overwhelming sentiment of the community is that HN should be free of LLM-generated content or content that has obvious AI fingerprints. Sometimes people don't immediately realize that an article or comment has a heavy LLM influence, but once they realize it does, they expect us to act (this is especially true if they didn't realize it initially, as they feel deceived). This is clear from the comments and emails we get about this topic.
If you can publish a new version of the post that is human-authored, we'd happily re-up it.
>> You are seemingly overriding the wishes of the community
> That's false. The overwhelming sentiment of the community is that HN should be free of LLM-generated content or content that has obvious AI fingerprints.
Yeah, it is indeed, and for good reason: why would I spend time reading something the author didn't spend time thinking through and writing?
It's not that people don't like Postgres articles (otherwise, the upvotes would be much lower), but once you read a bit of the article, the LLM stench it gives off is characteristic. You know: Standard. LLM. Style. It's tiresome. Irksome. Off-putting.
What I'm wondering is: if LLMs are trained on "our" (in the wider sense of the word) writing style and spew it back at us, what data set overused this superficial, emphatic style to such a degree that it's now overwhelmingly the bog-standard generative output style?