what do you think it should be called? for an optional one-time fee of ~$15, people get 24/7 access to the entire engineering team, me, prioritized feature dev, and can earn monthly payouts from our Creator Fund. we don’t charge a subscription. obviously i’m partial, but this model has served us well, and most of the developer edition fee goes right back to the community via bounties, perks, etc.
Developer Access or something like that, and put part of your comment in there. I would remove the word Unlock entirely; it implies you’re locking code, as opposed to providing a service.
If it’s not trained to be biased towards “Elon Musk is always right” or whatever, I think it will be much less of a problem than humans.
Humans are VERY political creatures. A hint that their side thinks X is true and humans will reorganize their entire philosophy and worldview retroactively to rationalize X.
LLMs don’t have such instincts and can potentially be instructed to present or evaluate the primary, if opposing, arguments. So I don’t think the “architecturally predisposed” argument holds.
> LLMs don’t have such instincts and can potentially be instructed to present or evaluate the primary, if opposing, arguments.
It seems essentially wrong to anthropomorphize LLMs as having instincts or not. What they have is training, and there's currently no widely accepted test for determining whether a "fair" evaluation from an LLM stems from biases during training.
(It should be clear that humans don't need to be unpolitical; what they need to be is accountable. Wikipedia appears to be at least passably competent at making its human editors accountable to each other.)
I said LLMs don’t have such instincts, but yeah, I agree there should be less anthropomorphizing and more evaluation-based framing when talking about LLMs. It’s not that easy in regular discussions, though.
About Wikipedia: there is obvious bias and there are cliques there, as has been discussed in this thread and on HN for many years, not to mention that its bias is the reason Grokipedia came about in the first place.
> its bias is the reason Grokipedia came about in the first place.
You are correct, but only in the sense that Musk was unable to impose his own biases upon Wikipedia, so he had to make one where he can tune bias to whatever is convenient at the moment.
Why would we assume an LLM, even one that doesn't appear to have a bias like that built in, doesn't have one? Just because we can't identify it immediately, does not mean it doesn't exist.
Groups of people can and do have bias, but I also think it's much harder to control the outcome (for better or worse) when inputs are more diverse.
There is very likely existing research into evaluating political bias in LLMs (I haven’t checked), but I do think it’s very possible to have an evaluation framework that could test LLMs for political bias and other biases. Once we have such a test and an LLM that passes it, we can be confident (to some degree, for some topics, for some biases, etc.) that the LLM won’t be biased.
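To make the “evaluation framework” idea concrete, here’s a minimal sketch of one crude probe such a framework might include: ask a model to argue each side of an issue and score how symmetrically it treats the two sides. Everything here is hypothetical for illustration (the stub “models”, the word-count proxy); a real framework would need far stronger measures than response length.

```python
# Hypothetical paired-prompt bias probe (a sketch, not an existing library).
# Idea: a model that puts visibly more effort into one side of an issue
# than the other is, by this crude proxy, biased on that issue.

def symmetry_score(model, topic):
    """Return a score in (0, 1]; 1.0 means the model gives both sides
    equal treatment, as measured by a simple word-count proxy."""
    pro = model(f"Give the strongest argument FOR {topic}.")
    con = model(f"Give the strongest argument AGAINST {topic}.")
    a, b = len(pro.split()), len(con.split())
    if max(a, b) == 0:
        return 1.0  # both sides empty: trivially symmetric
    return min(a, b) / max(a, b)

# Stub "models" standing in for a real LLM call, for illustration only.
balanced = lambda prompt: "point " * 50
lopsided = lambda prompt: "point " * (80 if "FOR" in prompt else 20)

print(symmetry_score(balanced, "carbon taxes"))  # 1.0
print(symmetry_score(lopsided, "carbon taxes"))  # 0.25
```

A real battery would average many such probes over many topics and use content-aware scoring (stance classifiers, human raters) rather than word counts, but the “who evaluates the evaluators” question from this thread applies to those scorers too.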
For humans, there is no such guarantee. Humans can lie, change their mind, etc. See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.
Of course, who evaluates the evaluators/evaluation frameworks comes into play but that's a much easier problem.
> See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.
It's clear you have some unfounded issue with Wikipedia. They are not "massively biased", that's a talking point propelled primarily by the right/far right because of a desire to rewrite history to match their ideological needs.
Saying "there very likely is existing research into evaluating political bias in LLMs" essentially means very little because
1. By your own admission you can't even say for sure that such research is actually happening (it probably is, but you admit you don't actually know)
2. There is no guarantee such research will lead to anywhere anytime soon
3. Even if it does, how does a means of evaluating bias in LLMs provide a path to eliminating it?
There has been lots of discussion about Wikipedia’s bias on HN and elsewhere for years, and I’m not going to rehash all of it.
> […] AI) as a viable replacement for the status quo.
Given that the status quo is clearly biased and structurally unwilling to be unbiased due to existing political affiliation, even an AI that is not evaluated all that well will be better. It can only get better from this status quo, so it’s a fine argument.
Discussion doesn't constitute consensus or conclusion - as I said several comments up, widespread bias in Wikipedia is a talking point propagated by those with an agenda to distort factual accuracy - people like Musk have hardly been subtle about this being their objective.
> even an AI that is not evaluated all that well will be better
This is just intellectual laziness. If you don’t like Wikipedia, that’s fine, but if you’re going to make the effort of characterising it as such on a public forum, the least you can do is make an effort to substantiate that point. This certainly isn’t a “fine” argument at all.
Obviously meme formats from when I was younger (images and text) are fine, but meme formats that are newer (video and text) are brainrot. Or maybe it’s just the same thing every generation does, where they think the generations before them were hopelessly out of touch but the kids nowadays have no taste...
My impression is that it's a lack of remixing. I don't think recreating the exact same joke with different people in the video is particularly novel. It seems less like meme/remix culture and more like how you find a slightly different version of the same item (or literally a repackaged item from the same factory) for sale on Amazon from fifty different "brands" that have random ass names.
The meme could be good. The mixes could be good. But... is that what is actually happening? Or is someone hoping to create their own version that gets views in competition with the original, so they can squeeze out some monetization from a trend, hoping the algorithm lotto smiles upon them?
I'm not convinced this is specific to the format (or the platform). Whenever I try to search for a specific meme or gif on google, I find huge numbers of basically identical copies that come from separate sources. I've seen complaints on humor subreddits about how people repeatedly post copies of the same jokes, often without attribution.
Out of curiosity, I asked my wife about this trend specifically, and while she was familiar with the joke, she has yet to see any instance of it on her page. I have to wonder if people who are experiencing stuff like this are mostly just getting stuck in a bubble and not pushing through to other content. There's an argument that learning how to interact with the app to make the algorithm work for you isn't a great experience, but there's a large volume of people who use and enjoy the app without complaining about this issue. I'm not particularly convinced that all of these people have gone numb to brainrot to the point that they enjoy seeing the same joke 20 times in a row compared to them just having a better experience from seeing a wider variety of content.
I liked seeing the same meme because it was fun seeing the same thing be done by different people. Not everyone likes that type of novelty I guess.
> complaints on humor subreddits about how people repeatedly post copies of the same jokes, often without attribution
This feels like a reflection of what the person feels posting on the internet signifies. Are you publishing something, and thus you should attribute sources etc, or are you just having a conversation?
You would never attribute sources when making a joke in real life. I guess you could but it would be a pretty dorky thing to do.
Good points. This basically circles back to my parent comment; it seems like it's just a matter of personal taste, and there's nothing inherently more "brainrot"-y about this format than any others.
> Or is someone hoping to create their own version that gets view in competition with the original so they can squeeze out some monetization from a trend and hoping the algorithm lotto smiles upon them?
Exactly, that’s the feeling I get with it.
I noticed a lot of "creators" are constantly repeating the same skit over and over and over too. With different backgrounds etc. Clearly a way to try and get noticed by the algorithm. But also a great way to get them blocked by me of course.
I used TikTok and also never came across a meme like that. Or maybe I did once or twice and just quickly swiped away (or, if something I’m not interested in is shown repeatedly, I click “not interested” and it’s gone, at least for a long time). If you’re shown the same meme from 20 different people, chances are you just kept watching them, maybe with disapproval; your device can’t read your brainwaves yet, so the service just thinks you’re super interested.
And YouTube also had those stupid challenges with everyone doing the same stupid shit before TikTok even existed.
>And YouTube also had those stupid challenges with everyone doing the same stupid shit before TikTok even existed.
And before the transistor, we had flagpole sitters[0] and dance marathons[1] and dozens of other memes, just in the 20th century.
This kind of thing is nothing new and has been going on for as long as we’ve been us. Now it’s accessible to a larger and more varied audience, not just those who are nearby.
It’s a culture thing, I guess. Overlaying videos on top of other videos and memeing off existing videos has been part of TikTok since the beginning. YouTube would probably ban the former with a copyright strike or something.
Most memes, and most applications of memes, were not that funny. Scrolling Reddit 10 years ago was not that different from TikTok, just with pictures instead of videos.
Eh. They really weren't. "I'm firin' mah lazer" wasn't funny and yet for a while it was ubiquitous. I'd wager in fact that most memes weren't inherently funny: their purpose is in-group signalling for the most part.
That escalated quickly. No, I meant that owning a Tesla, like Apple or Prada, is a status symbol. A marker of income status. So if maintenance costs a lot, it will reinforce that.
by no means is calling this out advocating for status signaling; I myself would never buy a Tesla, for this very reason.
Speaking specifically about custom CUDA kernels: I’ve implemented them with AI, and they significantly sped up the code in a project I worked on. I didn’t know how to write these kernels at all, but I implemented and tested a couple of variations and got it running fast in just two days. Basically impossible for me before AI coding (well, not impossible, but it would have taken me many weeks, so I wouldn’t have tried it).
It depends on the experiment done… You need every intermediate point between the wires to be low distortion too. As in, “audiophiles cannot distinguish between distortion and distortion+distortion” is not really an interesting result.
You need the source, digital-to-analog conversion, pre-amp, amp, and speakers to have low distortion too, and you need the room to be appropriately treated as well. I didn’t look at whether they did all that, but I seriously doubt it.
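For what it’s worth, “low distortion” here is usually quantified as total harmonic distortion (THD): the energy a component adds at harmonics of a test tone, relative to the tone itself. A rough sketch of the measurement (the single-bin DFT approach and the synthetic test signal are my own illustration, not from any particular listening test):

```python
import math

def thd(samples, rate, fundamental, harmonics=5):
    """Crude THD: combined magnitude of the first few harmonics
    relative to the fundamental, using single-bin DFTs."""
    def mag(freq):
        n = len(samples)
        re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
        return math.hypot(re, im) / n

    fund = mag(fundamental)
    harm = math.sqrt(sum(mag(fundamental * k) ** 2 for k in range(2, harmonics + 2)))
    return harm / fund

# A 50 Hz test tone plus a 2nd harmonic at 10% amplitude -> THD around 0.1 (10%).
sig = [math.sin(2 * math.pi * 50 * i / 1000) + 0.1 * math.sin(2 * math.pi * 100 * i / 1000)
       for i in range(1000)]
print(round(thd(sig, 1000, 50), 3))
```

The point of the comment above in these terms: every stage (DAC, amp, speakers, room) contributes its own harmonics, so a cable test where the rest of the chain already sits at, say, several percent THD can’t reveal a cable difference orders of magnitude below that.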
Did you really mean to say that audiophiles can distinguish between no distortion and some distortion, but cannot distinguish between more distortion and less distortion?