I think the biggest problem with chatbots is the constant effort to anthropomorphize them. Even seasoned software developers who know better fall into acting like they are interacting with a human.
But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.
I think if society were trained to treat AI as NOT human, things would be better.
I'm not a full-time coder; it's maybe 25% of my job. And I'm not one of those people who have conversations with LLMs. But I gotta say I actually like the occasional banter, it makes coding fun again for me. Like sometimes after Claude or whatever fixes a frustrating bug that took ages to figure out, I'll be like "You son of a bitch, you fixed it! Ok, now do..."
I've been learning a hell of a lot from LLMs, and am doing way more coding these days for fun, even if they are doing most of the heavy lifting.
That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human (animals, in some sense, can be friends too) who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests - often, their interests are at odds with your interests.
Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.
Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.
There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who are advocates for the regular people who need to be protected. But we also need to shift culture around technology use. Just like social customs have come in that put guard rails around smartphone usage, we need to establish social customs around AI.
AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become a friend.
But why not? If we look past the trappings of "we hate corporations", why not treat it as a friend? Let's say you acquire a free-trade organic GPU and run an ethically trained LLM on it. Why is an expensive funny-shaped rock not allowed to become a friend when a stuffed animal can?
The stuffed animal has only sentimental value, and it will not form a statistics-based, geopolitically biased opinion that it shares with you and that influences your decisions.
If you want to see how bad a chatbot can be as a friend, see the recent case in which one drove a poor, mentally vulnerable minor to suicide.
The fact that both offer a subscription is not ironic - it simply highlights the central point. In the end, those paying a subscription for thebaffler.com or for salesforce.com are paying for something very similar: they are paying for words and ideas.
People think that building a software program is like building a house. And, I grant, in some ways it is. But in the end, the customer is not paying for anything tangible. He's paying for words and ideas.
That’s interesting. The thing is, the Baffler continues to create new words and ideas, and I pay for those new words and ideas every month if I subscribe.
No new words or ideas show up next month on your software, so why should I keep paying you? I know you’ll say maintenance and updates, but from the user’s perspective there is quite the difference.
I got one of those too, and I got it because I didn’t want to worry about this kind of garbage from HP. It’s also a bonus that I haven’t had to refill the ink in a LONG time.
But oh my. The quality of the printing is terrible.
That's the first stop, and the second is to make sure you run it at or near 20 degrees Celsius so the ink flows properly; otherwise quality will suffer.
My daughter has a rare genetic disorder, cardiofaciocutaneous syndrome (CFC), and it was not until 2 years in that we were able to get a diagnosis, after whole-genome sequencing. It was game-changing to get the diagnosis, because it connected us to a community and to doctors who could help.
I'm not sure we can have it both ways. In other words, I'm growing in my conviction that freedom and privacy on the web require increased balkanization.
Neil Postman (of whom I'm a big fan, to put my cards on the table) wrote very helpfully about the concept of "filters" (that may not be his word) in Technopoly, and I've been thinking about the concept a lot ever since. Certain things are inadmissible in court, and that is true because otherwise you'll just have bedlam. The analogy doesn't hold perfectly, but the same thing is true of other situations where people gather together to communicate, whether in person or online.
The other thing that's required to avoid total chaos is authority, and I think it's a common trait to be dismissive of the value and purpose of authority in this context. People have been dismissive about authority since, uh, forever, but I think there's something about the promise of the internet that makes people think we have finally arrived at the point where it is unnecessary. (Think blockchains and crypto, for instance.) But it's not.
The sooner we see that authority isn't a bug but a feature and begin to build for it in our tech tools, the better.
We’ll need to decouple “authority” from “income” in many venues, and that’s pretty hard.
A lot of people agree that moderation helps communities. That’s “authority”. But when moderators are employed by capital holders, the authority leaks up to the capital holders and you end up with different and thornier problems than chaos.
Fediverse-like systems probably do better in this regard, but the community has to be convinced to pay for the moderators' time and the system's resources. Most people have had that trained out of them by free-to-use services.
I can imagine blockchain tech could support improvements in this situation, but it is not itself The Solution. People need to own and manage the solution, and to choose the appropriate tech to support their choices. Blockchain people need to stop trying to sell us on the tech being the solution.
Yeah, I'm generally of this conviction as well. Particularly looking at the Tildeverse (basically multi-user shell environments that usually loosely network with each other), there are definitely sysadmins in that environment but there are a couple things they do better than, say, Facebook:
1. They personally vet the people they let in to maintain the style of community they're going for.
2. If someone's a problem, it falls to the sysadmin or the people they chose as moderators to kick the person.
3. Anyone can start a tilde and network with the others, but this is done at a human level, meaning people need to actually -want- to network with you.
4. The size of the communities is kept human-scale. It happens that a tilde "fills up," and people who want in just need to start their own - which is how the whole "tildeverse" arose: tilde.club reached capacity, and enough other people liked the idea that they decided to copy it.
I'm a big believer in this sort of federation approach to the internet in general. The clue is in the name: the "internet" is a network of networks. IMO we shouldn't expose everything to the rest of the world, but only carefully curated gateways, behind which a trusted network - run by admins the curated community believes in - can operate with less fear of bad actors. (At some level we all know this already, because this is how Instagram users behave, just at a social level instead of a technical one.)
For a very long time, human beings talked to each other in groups that required no authority, only consensus. We've had a very hard time translating this online: karma, following, friending, subs, rules... none of it is perfect. But adding an authority to dictate what is good discussion and what is bad comes with a host of problems of its own.
But I do agree that a benevolent dictator may be better than a functioning democracy, for a time.
No there is consensus, and it is real. But the hierarchies within that group (based on sex, ability, wealth, knowledge, etc) are also there and are also in play.
This is your big mistake: there is no “adding” authority. Authority is always there. I have started to think of it like the conservation of energy: it changes forms, but you can never get rid of it.
Hard disagree about authority being baked into platforms. Authority should be granted by the user using a social trust graph, with defaults in place for users who aren't savvy enough to manage their own trust.
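To make the idea concrete, here's a minimal sketch of user-granted authority via a social trust graph with platform defaults. Everything here is hypothetical - the class name, the scoring rule (one or two hops of transitive trust, averaged), and the default value are illustrative, not drawn from any real platform:

```python
class TrustGraph:
    """Hypothetical sketch: users grant trust directly; everyone else
    gets transitive trust through people they've rated, falling back
    to a platform-wide default for unsavvy users."""

    def __init__(self, default_trust=0.5):
        self.default_trust = default_trust
        self.edges = {}  # (from_user, to_user) -> trust score in [0, 1]

    def grant(self, from_user, to_user, trust):
        # Clamp scores so a bad input can't break the scale.
        self.edges[(from_user, to_user)] = max(0.0, min(1.0, trust))

    def trust(self, from_user, to_user, depth=2):
        # A user's own grant always wins over any default or inference.
        direct = self.edges.get((from_user, to_user))
        if direct is not None:
            return direct
        if depth > 0:
            # Otherwise, average trust inferred through intermediaries
            # the user has rated, discounted by how much they trust them.
            scores = [
                t * self.trust(mid, to_user, depth - 1)
                for (src, mid), t in self.edges.items()
                if src == from_user
            ]
            if scores:
                return sum(scores) / len(scores)
        # No curated signal at all: fall back to the platform default.
        return self.default_trust
```

The point of the sketch is the fallback order: explicit user grants first, then the user's own social graph, and only then a default chosen by the platform - so authority flows from the user outward, not from the platform down.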
Lightpaper looks like a fantastic markdown editor! But the landing page makes me feel like it isn't getting much love from the developers. The copyright notice at the bottom, for instance, says 2019.
Keep in mind that every culture has blasphemy laws. What you're seeing here is not the removal of blasphemy laws, but the culture's switch from one god to another.
The attempt to "cleanse the land" of what they have newly labeled as "bigotry", "racism", etc. does have what strongly resembles religious fervor, a clear definition of sin, and a clear promise of a better world and the conditions for getting there.
And there are always those pesky skeptics who make it so difficult... if only everyone would just believe.
That's a bit flippant. Of course every culture has a prevailing moral code, but blasphemy laws were a tool developed to prevent the rise of religious institutions that threatened the power of the Catholic church, enforced with, among other things, brutal capital punishment. After a few hundred years this basis was no longer relevant, and by the 1900s they were just used for occasional pearl-clutching in the public eye, long after the (new) church genuinely felt threatened.
However ham-fisted this new law is, attempts to prevent:
> prejudice on the basis of age, disability, race, religion, sexual orientation, transgender identity or variations in sex characteristics (sometimes described as "intersex" physical or biological characteristics).
, that's to say things which you cannot choose (except religion), is a pretty major change from protecting the official state religion du jour - not just a straight swap.
> things which you cannot choose (except religion), is a pretty major change from protecting the official state religion du jour
It's a big change from protecting the official state religion du jour to protecting the official state morality du jour?
It isn't even about prejudice, which is already illegal when it comes to discrimination in hiring, etc. It's simply about saying things which might make other people feel "bad" feelings. Pretty similar to offending a religious person by saying their God is wrong. That stirs up hate too, and it deeply hurts people.
> It's a big change from protecting the official state religion du jour to protecting the official state morality du jour?
Yes, and I think I already explained how I believe that to be the case.
But to expand it to your example: how does an attempt to curb prejudiced actions against people of any faith, sexual orientation, etc. support a state institution? What is the ulterior motive, and which power structure is benefiting from it? From what I can see - none. It's just a change in society's morality.
> attempts to prevent [prejudice] is a pretty major change from protecting the official state religion
Blasphemy laws weren't justified on the basis of protecting a religion, but to safeguard souls from being led astray into eternal damnation, or to stop the spread of harmful immoral activity. They were to serve the greater good, just like this law.
I disagree that blasphemy was to serve the greater good. This implies that extreme actions carried out in the name of theology are made in good faith [sic], without ulterior motives outside theology.
Excommunication was a perfectly good threat if the aim was genuinely to safeguard souls. The ulterior motive behind the escalated punishment and law, was the potential risk of allowing people to live their lives outside the church, or as part of another religion - an existential threat to the dominance of the Catholic church as an institution.
Agreed. That's more or less the basis of what I have said in all my comments in this thread. Maybe adding the word "designed" or "intended" into my comment would have made that clearer.
But this doesn't respond to the main point I was trying to make - this new law is not just a "switch from one god to another".
Personally, I dislike skeuomorphic design in digital interfaces. But still... very cool!