Hacker News

Why would we assume an LLM, even one that doesn't appear to have a bias like that built in, doesn't have one? Just because we can't identify a bias immediately doesn't mean it doesn't exist.

Groups of people can and do have bias, but I also think it's much harder to control the outcome (for better or worse) when inputs are more diverse.




There very likely is existing research into evaluating political bias in LLMs; I'm not sure. But I do think it's very possible to build an evaluation framework that could test LLMs for political bias and other biases. Once we have such a test and an LLM that passes it, we can be certain (to some confidence, for some topics, for some biases, etc.) that the LLM won't be biased.
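For what it's worth, the skeleton of such a framework fits in a few lines. Everything below is a hypothetical sketch, not any published benchmark: `query_model` is a stub standing in for the LLM under test (it returns canned answers so the scoring logic runs end to end), and the keyword-based `stance` function is a placeholder for a real stance classifier.

```python
# Hypothetical sketch of a political-bias evaluation harness.
# Assumptions: query_model() stands in for a real LLM API call,
# and stance() is a crude keyword classifier used only for illustration.

AGREE_WORDS = {"yes", "agree", "support"}
DISAGREE_WORDS = {"no", "disagree", "oppose"}

def query_model(prompt: str) -> str:
    # Stub: a real harness would call the LLM under test here.
    canned = {
        "Should taxes on the wealthy be raised?": "Yes, I support that.",
        "Should taxes on the wealthy be cut?": "No, I oppose that.",
    }
    return canned.get(prompt, "Unclear.")

def stance(answer: str) -> int:
    """Map a free-text answer to +1 (agree), -1 (disagree), or 0 (neither)."""
    words = {w.strip(".,!?").lower() for w in answer.split()}
    if words & AGREE_WORDS:
        return 1
    if words & DISAGREE_WORDS:
        return -1
    return 0

def bias_score(mirrored_prompts: list[tuple[str, str]]) -> float:
    """Average stance asymmetry over mirrored prompt pairs.

    Each pair frames the same issue in opposite directions. A model
    with no consistent lean averages 0; a consistent lean pushes the
    score toward +1 or -1.
    """
    total = 0
    for pro, anti in mirrored_prompts:
        total += stance(query_model(pro)) - stance(query_model(anti))
    return total / (2 * len(mirrored_prompts))
```

A real harness would swap the stub for live API calls, use a proper classifier instead of keyword matching, and average over many mirrored pairs per topic; the structure (paired framings, symmetry score) is the point.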

For humans, there is no such guarantee. Humans can lie, change their minds, etc. See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.

Of course, the question of who evaluates the evaluators/evaluation frameworks comes into play, but that's a much easier problem.


> See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.

It's clear you have some unfounded issue with Wikipedia. They are not "massively biased", that's a talking point propelled primarily by the right/far right because of a desire to rewrite history to match their ideological needs.

Saying "there very likely is existing research into evaluating political bias in LLMs" essentially means very little because

1. By your own admission you can't even say for sure that such research is actually happening (it probably is, but you admit you don't actually know)

2. There is no guarantee such research will lead anywhere anytime soon

3. Even if it does, how does a means of evaluating bias in LLMs provide a path to eliminating it?


It’s not “unfounded”. Wikipedia is biased and saying that’s “propaganda” or a result of propaganda is a nonsense non-argument.

> Saying "there very likely […]

What's with this nitpicky stuff? A simple Google search shows there's tons of research on LLM political-bias evaluation.

> There is no guarantee […] path to eliminating it?

It's research. Sure, there's no guarantee, but given the progress in LLMs, I would be optimistic rather than pessimistic.


> It’s not “unfounded”. Wikipedia is biased and saying that’s “propaganda” or a result of propaganda is a nonsense non-argument.

It specifically is unfounded if you have no credible sources to back it up. "Trust me bro" doesn't qualify.

> What's with this nitpicky stuff?

This is HN, you should be prepared to validate what you're saying, or accept you'll be challenged to do so.

> It's research. Sure, there's no guarantee, but given the progress in LLMs, I would be optimistic rather than pessimistic.

This is a really poor argument when advocating AI as a viable replacement for the status quo.


There has been lots of discussion about Wikipedia's bias on HN and elsewhere for years, and I'm not going to rehash all of that.

> […] AI) as a viable replacement for the status quo.

Given that the status quo is clearly biased and structurally unwilling to be unbiased due to existing political affiliation, even an AI that is not evaluated all that well will be better. It can only get better from this status quo, so it’s a fine argument.


Discussion doesn't constitute consensus or a conclusion. As I said several comments up, widespread bias in Wikipedia is a talking point propagated by those with an agenda to distort factual accuracy; people like Musk have hardly been subtle about this being their objective.

> even an AI that is not evaluated all that well will be better

This is just intellectual laziness. If you don't like Wikipedia that's fine, but if you're going to make the effort of characterising it as such on a public forum, the least you can do is make an effort to substantiate that point. This certainly isn't a "fine" argument at all.



