Good idea, TERRIBLE implementation. After activating only the filters for "Low Effort" and "Contain Logical Fallacies" I get:
> "Who cares if it is? It's a great movie nonetheless"
3/5 Published!
> "Who cares if it is? It's a terrible movie nonetheless"
2/5 Revision requested: Calling a movie 'terrible' dismisses the enjoyment others may find in it and directs negativity at both the film and those who appreciate it.
Suggestion: "I personally don’t enjoy the movie, but I understand some people have different opinions about it."
So it's okay to generalize my opinion about it, but only if I liked it, otherwise I might hurt someone's feelings? Very double-plus-good vibe. I would never comment again on a site that uses this product.
Thanks for the feedback! We'll look into this kind of thing -- others have said the same thing, that two sides of the same coin get treated differently.
China is certainly lax, but the US doesn't allow autonomous ATTACK systems. For Attack systems it is always required that a human makes the judgement call when to attack.
Or at least it didn't until the current regime.
The US does have autonomous defensive systems.
I could be wrong though, can you post your evidence? The closest I could find is loitering munitions.
Even so, a company shouldn't be forced to go against its ethics if those ethics help humans.
Drone pilots don't get any info about their target, certainly not enough to make a judgement call. If they object (or burn out) someone else is put in the chair.
People are conscripted, they put on the uniform and become legitimate targets? It might as well be a robot doing the shooting. Same difference.
The pilot becomes responsible for those outcomes. Indiscriminately killing civilians, for example, is a war crime. It's easier to get an AI to commit war crimes than humans.
Perhaps, but I don't know if the difference is significant. Everything changes, and then we try to stretch rhetoric from stabbing someone with a sword to hypersonic missiles? We might hold the pilot responsible if they erase a building, but I'm far less comfortable blaming them. We know the targets are actually picked by computers using metadata. The difference gets increasingly vague.
A good time to remember that the Open Library came to be thanks to the initial work of Brewster Kahle (founder of the Internet Archive) and Aaron Swartz (RIP) http://www.aaronsw.com/weblog/openlibrary
(I feel that I write a comment like this every few years)
The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history. The Internet destroyed print journalism, local retail, and enabled cyberbullying and mass surveillance. If we applied the same framework used here, Internet optimism in 2005 was also a form of "class privilege" (his term, I personally hate it).
And the pattern extends well beyond the Internet. For example, mechanized looms devastated weavers, the automobile wiped out entire trades while introducing pollution and traffic deaths, and recorded music was supposed to kill live performances.
In each case, the harms were genuine, the displacement was painful and unevenly distributed, and the people raising alarms were not irrational. They were often right about the costs. What they tended to miss was the longer trajectory: the way access to books, transportation, music, and information gradually broadened rather than narrowed, even if the transition was brutal for those caught in it.
History doesn't guarantee a good outcome for AI, but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't.
> but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't
I dunno, I think you can also take a really dim view of whether society as currently structured is set up to use AI to make any of those things more accessible, or better.
In education, certainly we've seen large tech companies give away AI to students who then use it to do their work. Simultaneously teachers are sold AI-detection products which are unreliable at best. Students learn less by e.g. not actually doing the reading or writing, and teachers spend more of their time pointlessly trying to catch the very common practice.
In medicine, in my most recent job search I talked to companies selling AI solutions both to insurers and to healthcare providers, to more quickly prepare filings to send to the other. I think the amount of paperwork per patient is just going to go up, with bots doing most of the actual form-filling, but the proportion of medical procedures that gets denied will be mostly unchanged.
I am not especially familiar with the legal space, but given the adversarial structure of many situations, I'm inclined to expect that AI will allow firms to shower each other in paperwork, most of which will not be read by a human on either side. Clients may pay for a similar or higher number of billable hours.
Even if the technology _works_ in the sense of understanding the context and completing tasks autonomously, it may not work for _society_.
Everyone says this as if the previous cycles of labor displacement couldn't compound, and this one couldn't be the last straw. Same with how phones cause shorter attention spans, less thought, and more social isolation. People will say "oh, they said the same thing about books and TV and video games."
We could be at the end of the rope with how much we can displace unevenly and how much people will put up with another cycle of wealth concentration. Just like we might be at the end of the rope with how much our minds can be stunted and distracted before serious negative consequences occur
I think they are compounding. Prior to the internet we had more third spaces, less attention economy, fewer self-esteem problems comparing our lives against influencers', warehouse and delivery jobs without pissing in a bottle to stay employed, people were employed instead of doing gigs. We used to have privacy somewhat, that's gone.
It's been this overpowered tool for the wealthy to gather more wealth by erasing jobs, and for data brokers to perform intense surveillance.
> But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.
And it has been... quite a correct view? In the past few decades the US cranked up its Gini index from 0.35 to ~0.5 and successfully eliminated single-earner housebuyers[0]. It's natural to assume the current technology shift will eliminate double-earner housebuyers too. The next one would probably eliminate PC-buyers if we're lucky!
The scifi books were right in predicting future relationships would be poly, they just didn't explain it was because it was the only way people would be able to afford to live.
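For anyone unfamiliar with the Gini index cited above: it measures income concentration on a 0-to-1 scale, and can be computed directly from a list of incomes. A minimal sketch with toy numbers (not the real US figures):

```python
def gini(incomes):
    """Gini coefficient of a list of incomes: 0 = perfect equality, ~1 = maximal concentration."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Closed form over sorted values: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))             # perfectly equal -> 0.0
print(round(gini([0, 0, 0, 10]), 2))  # one person holds everything -> 0.75
```

A shift from 0.35 to ~0.5 on this scale is a large move: it means pairwise income differences grew substantially relative to the mean income.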
arguably the history of humanity was about automating humanity.
- teeth and nails with knives (in various shapes from bones to steel)
- feet with carriages and bicycles and cars
- hands with mills and steam-powered factories, up to industrial robots
Literally every automation was meant to help humans somehow, so it naturally entailed an automation of some human function.
This automation is an automation of the human brain.
While the "definition" of what's human doesn't end here (feelings, etc.) , the utility does.
With loss of utility comes loss of benefits.
Mainly, your ability to differentiate yourself as a function of effort (physical or intellectual) gets diminished to zero. This raises concerns about the ability to achieve goals and aspirations, like buying that house at some point or ensuring your children's future, which potentially vanish for large swaths of the population, the "unfortunates" (which people those are is hard to tell). Arguably, the current level of resources (assets) becomes a better indicator of the future for generations to come, with work mattering less and less.
By freezing utility based on one's own effort, you arguably freeze the structure of society in time. So yes, every instance sucked for the displaced party, but this one seems to be particularly broad (i.e. wider splash damage).
The term you're looking for is externalisation, not automation. Check out "The Fault of Epimetheus"; and, on the alienation of the machine by automation circa the late 1970s, one of its intellectual predecessors: Gilbert Simondon.
Thanks, both! Glad to get the explicit names for the things I'm "gesticulating" at. I haven't done any explicit reading on the topic, except for adjacent stuff like Analogia (Dyson) and The Coming Wave (Suleyman), and I saw a talk by Terry Winograd that I thought was on point https://www.youtube.com/live/LcvYYXdXF8E.
I have been meaning to read Superintelligence, and will check out both Stiegler and Simondon.
The assumption in your comment is that those changes were all net good. In hindsight though, the automobile has had possibly existential costs for humanity, the internet has provided most benefit to those who most abuse its power, and so on. In the end, it doesn’t seem as though you’ve actually made any sort of case.
Is it? Do you include everyone that’s died or lost a loved one due to personal automobiles in that assessment?
We are so far post-automobile that it's hard to compare, but many of the benefits are illusory when you consider how society has evolved with them: commutes, for example, used to be shorter. Similarly, the air used to be far cleaner, and that's after we got rid of leaded gas and required catalytic converters decades ago.
How many people have lived or had a loved one saved due to automobiles?
We have the benefit of hindsight but we're also making judgment calls looking back on fuzzy recollections, forgetting just how the past used to be before an innovation came along.
I agree it’s difficult to do these calculations as society evolves with technology. Trains enable long distance evacuation from hurricanes. Street cars and subways allow for medical transportation but it looks very different than an ambulance. Similarly do we exclude helicopters assuming cars were simply banned rather than our failing to design IC engines or whatever.
That said, there are modern enclaves without cars, mostly on islands or in very remote locations. They make do just fine without cars; it's the low population density that's at issue for medical care.
The automobile on its own was actually far less polluting than the horse wrt. air quality. It's just that there's a whole lot more of the former than there ever was of the latter. Even wrt. climate change, it turns out that horses produce methane emissions which are far worse for the climate than carbon dioxide.
1. Is a very hard question to answer, airborne lead pollution isn't fatal, but it does impact cognition. Perhaps ask yourself how many people lead lower quality lives because of it, and the flow-on effects from that lower cognition.
The people who did Freakonomics claimed that the drop in violent crime in the US in the 90s could be correlated to the phasing out of leaded fuel, but I'm not a statistician so can't speak to the accuracy of that correlation.
2. How would you even measure that? How would you define a better life thanks to a vehicle?
I feel like you're ice skating uphill a little, given that the deleterious effects of leaded fuels are well studied but the question you're asking isn't.
That reason is along the lines of, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”
Coal miners will fight for coal mines, the oil industry will fight for dependence on oil, and so on. Sometimes they’re aware of what they’re doing, but in the case of a comment like the above, apparently not so much.
I often wonder that if cable news was around say, during the American Civil War, how likely would the 13th, 14th, and 15th amendments have passed? I'd say extremely unlikely.
Throughout our entire history as a species, abusers have always fucked over the commons to the extreme using whatever tools they have available.
I mean, take something as "innocuous" as the cotton gin: prior to the cotton gin there was a real decline in slavery, but once it became far easier to process cotton, slavery skyrocketed. Some of the worst laws the US has ever passed, like the Fugitive Slave Act, came during this period.
To think that technological progress means prosperity is extremely delusional.
We're still dealing with the ramifications of nuclear weapons, and a committed nuclear attack will assuredly happen again at some point in our species' existence; we can only hope it doesn't take out all life on Earth when it does.
Seriously, these types of comments are always really narrow in their view.
Industrialization has rapidly accelerated planet wide climate change that will have disastrous effects in many of our lifetimes. A true runaway condition will really test the merit of those billionaire bunkers.
All for what? A couple hundred years of "advancement"? A blink in the lifespan of humanity, but it dooms everyone to a hyper-competitive death drive towards an unlivable world.
As a society, our understanding of "normal" has narrowed down to the last 80 years of civilization. A normal focused around consumption, which stands to take it all away just as fast.
The techno-optimists never seriously propose any meaningful solution to millions losing their livelihoods and dignity so Sam Altman can add an extension to his doomsday bunker. They just go along with it as if they'll be invited down to weather the wet-bulb temperature.
> The author catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.
I think both the scale (how many industries will be impacted effectively simultaneously) and the speed of disruption that could be caused by AI make it very different from anything we have seen before.
I think it will be big, but I don't think it's bigger than the automation of manufacturing that began during the Industrial Revolution.
Think about the physical objects in the room you're in right now. How many of them were made from start to finish by human hands? Maybe your grandmother knitted the woollen jersey you're wearing -- made from wool shorn using electric shears. Maybe a clay bowl your kid made in a pottery class on the mantelpiece. Anything else?
Local retail and specialty print media are alive and well. Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news. Even cyberbullying is mostly a matter of salience: it takes something that has always existed in the physical realm (bullying behavior) and moves it to the cyber environment where the mass public becomes aware of it.
> Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news.
Genuinely interested in some sort of data on this.
My working assumption was that print news media was dying through a combination of free news availability on the internet, shifting advertising spending as a result, shifting ‘channels’ to social media, and shifting attention spans between generations.
I don’t think we can haphazardly apply history like this; it’s never the same, and we just like to find patterns where there are none.
The biggest harm that would come from AI is ”everything at once”, we’re not talking about a single craft, we’re talking about the majority of them. All while moving the control of said technology to even fewer privatized companies, the printing press didn’t centralize all knowledge and utility to a few entities, it spread it. AI is knowledge and history centralized, behind paywalls and company policies. Imagine picking up a book about the history of music and on every second page there’s an ad for McDonald’s, this is how the internet ended up and it’s surely how LLM providers will end up.
And sure, some will run some local model here and there, but it will be irrelevant in a global context.
"it became clear that there was no conspiratorial algorithmic suppression". Yes, the Twitter files showed that the suppression was done mostly by humans.
The Twitter Files, what a laugh. Can you point to a particular part of the Twitter Files that was not obviously overblown, wrong, or subsequently thoroughly discredited that supports your claim of conspiratorial suppression? Here: https://en.wikipedia.org/wiki/Twitter_Files