
The quality of these results is in my experience quite poor, so this is worrying to me. (The other comment about "distilled blogspam" hits the nail on the head I think.)

It highlights a fundamental tension around Google's core product: Users want accurate information. From their perspective, that is the singular purpose of using Google. But Google's internal purpose is only to sell ads. It makes no difference to their bottom line whether the information they give users is correct unless it gets so bad they start losing ad impressions.

Speaking even more broadly, it's very depressing to me how the goal of building a good, useful, or functional thing is subservient, or at best orthogonal, to the goal of making money.



I disagree. I think (especially as evidenced by Gemini) Google's people have a specific mission to make the world a better place, with "better" being defined by them, such as ensuring that DEI is injected into everything. I am generally very supportive of diversity, but I think it's pretty clear that the goal of helping you find information and the goal of filtering/shaping that information to make the world more reflective of what they think it should be are fundamentally in tension. That is what concerns me the most.


I do not believe anyone with real power at Google values liberal values above revenue. The RLHF and fine-tuning are solely to prevent news articles of the form "Google unveils new shockingly racist AI!!", which is absolutely what would happen without explicit fine-tuning, and which would invite regulation and scrutiny.

I too think artificially "fixing" models is the wrong way to go. If the models are biased towards racism it's because the training data is biased towards racism which is because society is biased towards racism. Which is true, and we'd be better served by acknowledging that and shining a light on it. Just ban AI (as a known tainted product) from being used for making any decisions of importance.


> The RLHF and fine-tuning is solely to prevent news articles of the form "Google unveils new shockingly racist AI!!", which is absolutely what would happen without explicit fine-tuning, and which would invite regulation and scrutiny.

Yes, that is undoubtedly true, and is a great point. I'm not sure whether it just so happened that the revenue incentives lined up with the liberal values well enough that nobody ever questioned or pushed back, or whether the revenue goals outweighed the liberal values, but my guess is it's probably more the former. Once revenue and liberal values are in tension, though, it will be interesting to see which direction they go. My guess is it will be a mixture that leaves no clear trump card, and makes specific situations very difficult to predict.


> Just ban AI (as a known tainted product) from being used for making any decisions of importance.

But Google (and others) want to sell that product.


Separately: we should be careful of carrying the right wing's water and adopting terms like "DEI" as negatives without thinking critically about it.

This is straight out of the Chris Rufo playbook; identify a well-intentioned but possibly flawed concept, create a caricature of it, and make that strawman the punching bag of every anti-inclusive political voice. Then, because it's a term liberals were already using, turn around and use it to attack existing institutions.

This is his explicit strategy for kneecapping "CRT" (formerly an academic subfield) and "woke" (a social concept among American Blacks), and you can see it unfolding in real time against "DEI" (formerly the way HR departments tried to comply with the Civil Rights Act, but now a catch-all negative term).


Yes, thanks, that's a good thing to keep in mind. I actually didn't intend it in a negative way when I used it, but probably more people perceive it that way than not, so it's good to keep in mind. You're right, there are lots of opportunists and people with agendas going the opposite way who will absolutely try to get us to throw out the baby with the bathwater. It also doesn't help that we humans tend to be pendulum swingers (with a tendency to over-correct), which leads to backlash and backtracking, and it's not hard to hijack those natural motions to get people to react emotionally.

Reading commentary from 18th century religious leaders, for example after Franklin invented the lightning rod, is quite illuminating. It's far enough in the past that there aren't really (serious, at least) people making the case that we are tampering with God's methods for punishing the wicked anymore, so there isn't a personal/emotional connection to the arguments for most people. Seeing people of the day seize on parts of the science that were slightly wrong and use them to inflame the passions of people to wholly abandon lightning rods (including some cases where people actually mobbed buildings and tore the rods off) is very much on my mind.


Best monthly purchase I make now is Kagi.

Replaced our Netflix membership with it.

Recommended, so you don't need to worry about what Google does with your searches.


Yup - I do not use Kagi, but Kagi's incentive is to make the best search engine. Google's incentive is to make the best ad impressions engine.


RE: "Speaking even more broadly, it's very depressing to me how the goal of building a good, useful, or functional thing is subservient, or at best orthogonal, to the goal of making money."

This is what happens when things are "free". They have to make money somehow, and "free" products typically have bad incentives (i.e., the company puts making money ahead of making the best product). One of the nice things about paying for news, TV (not cable, streaming), software, etc. is that you are the customer, and the company's continued survival depends on making you happy. In the short term, they can do all sorts of crappy things to their customers (think cable TV's high prices and meh content), but in the long term, bad behavior kills companies.



