They don’t allow unlimited “extraction” of wealth. It is inherently limited by the need for people to take the other side of a trade.
Importantly, people who either thought they had better information (and were sadly wrong) or people who were simply gambling. It’s not like prediction markets are taking money from orphanages.
SpaceX has flown 18 crewed launches on a single type of vehicle, all in the 2020s, each either an ISS run or a free-flying orbital mission. NASA has had over 200 crewed launches spanning well over half a century, flown on all sorts of tech, with vastly different designs, engineering cultures, and mission profiles. NASA was the organization doing first-of-its-kind missions. Bringing up just the two numbers makes it seem as if the two organizations existed at the same time and were essentially equals, when in fact one is a historical innovator that spilled some blood while pushing the limits, and the other is a modern private business that has made some innovations but is still treading ground that's so well-known precisely because of all the experience and knowledge gathered from those past risky ventures.
I was quite disappointed with the essay when I originally read it, specifically this paragraph:
> This is extremely realistic. This is already real. In particular, this is the gig economy. For example, if you consider how Uber works: in practical terms, the Uber drivers work for an algorithm, and the algorithm works for the executives who run Uber.
There seems to be a tacit agreement in polite society that when people say things like the above, you don't point out that, in fact, Uber drivers choose to drive for Uber, can choose to do something else instead, and, if Uber were shut down tomorrow, would in fact be forced to choose some other form of employment which they _evidently do not prefer over their current arrangement_!
Do I think that exploitation of workers is a completely nonsensical idea? No. But there is a burden of proof you have to meet when claiming that people are exploited. You can't just take it as given that everyone who is in a situation that you personally would not choose for yourself is being somehow wronged.
To put it more bluntly: Driving for Uber is not in fact the same thing as being uploaded into a computer and tortured for the equivalent of thousands of years!
> in fact, Uber drivers choose to drive for Uber, can choose to do something else instead
Funny that you take that as a "fact" and doubt exploitation. I'd wager most Uber drivers or prostitutes or maids or even staff software engineers would choose something else if they had a better alternative. They're "choosing" the best of what they may feel are terrible options.
The entire point of "market power" is to force consumers into a choice. (More generally, for justice to emerge in a system, markets must be disciplined by exit, and where exit is not feasible (like governments), it must be disciplined by voice.)
The world doesn't owe anyone good choices. However, collective governance - governments and management - should prevent some people from restricting the choices of others in order to harvest the gain. The good faith people have in participating cooperatively is conditioned on agents complying with systemic justice constraints.
In the case of the story, the initial agreement was not enforced and later not even feasible. The horror is the presumed subjective experience.
I worry that the effect of such stories will be to reduce empathy (no need to worry about Uber drivers - they made their choice).
> I'd wager most Uber drivers or prostitutes or maids or even staff software engineers would choose something else if they had a better alternative.
Yes, that's what I said, but you're missing the point: Uber provided them with a better alternative than they would have had otherwise. It made them better off, not worse off!
There's a thought (and real) experiment about this that I find illuminating.
Imagine that you are sitting on the train next to a stranger. A man walks down the aisle and addresses both of you. He says:
"I have $100 and want to give it to you. First, you must decide how to split it. I would like you (he points to you) to propose a split, and I would like you (he points to your companion) to accept or reject the split. You may not discuss further or negotiate. What do you propose?"
In theory, you could offer the split of $99 for yourself and $1 for your neighbor. If they were totally rational, perhaps they would accept that split. After all, in one world, they'd get $1, and in another world, they'd get $0. However, most people would refuse that split, because it feels unfair. Why should you collect 99% of the reward just because you happened to sit closer to the aisle today?
Furthermore, because most people would reject that split, you as the proposer are incentivized to propose something closer to fair so that the decider won't scuttle the deal, thus improving your own expected payout.
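The proposer's incentive can be sketched numerically. The rejection model below is entirely made up for illustration (a linear acceptance curve, not data from any real experiment), but it shows why greed backfires: the expected payout of the 99/1 split is terrible.

```rust
// Expected payout for the proposer in a $100 ultimatum game, under an
// ASSUMED (illustrative) rejection model: the responder's chance of
// accepting falls linearly from 1.0 at a 50/50 split to 0.0 when
// offered nothing.
fn expected_payout(keep: u32) -> f64 {
    let offer = 100 - keep; // what the responder is offered
    let p_accept = (offer as f64 / 50.0).min(1.0);
    p_accept * keep as f64
}

fn main() {
    // Keeping $99 yields an expected $1.98; keeping $70 yields $42.
    let best = (0..=100)
        .max_by(|a, b| {
            expected_payout(*a)
                .partial_cmp(&expected_payout(*b))
                .unwrap()
        })
        .unwrap();
    println!("greedy 99/1 split: expected ${:.2}", expected_payout(99));
    println!("best split keeps ${best}: expected ${:.2}", expected_payout(best));
}
```

With this made-up linear rejection curve the optimum lands at an even split; real experimental behavior is messier, but the direction of the incentive is the same.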
So I agree - Uber existing provides gig economy workers with a better alternative than it not existing. However, that doesn't mean it's fair, or that society or workers should just shrug and say "well at least it's better today than yesterday."
As usual in life, the correct answer is not an extreme on either side. It's some kind of middle path.
Many countries have minimum wages for many jobs [1].
There is a tacit agreement in polite society that people should be paid that minimum wage, and by tacit agreement I mean laws passed by the government that democratic countries voted for / approved of.
The gig economy found a way to ~~undermine that law~~ pay people (not employees, "gig workers") less than the minimum wage.
If you found a McDonald's paying people $1 per hour, we would call it exploitative (even if those people were glad to earn $1 per hour at McDonald's and would keep doing it, the hypothetical company is violating the law). If you found someone delivering food for that McDonald's for $1 per hour, we'd call them a gig worker and let them keep at it.
I mean yeah, it's not as bad as being tortured forever? I guess? What's your point?
Minimum-wage violations are a lower class of offense than most forms of worker exploitation.
Uber drivers are over the minimum wage a lot of the time, especially the federal one. Nowhere near this $1 hypothetical.
A big complication is that the actual wage you get is hard to pin down. You get paid okay for the actual trips, as far as I'm aware. But how to handle the idle time is harder: there are valid reasons to say you should get paid for that time, and valid reasons to say you shouldn't.
I pay for YouTube Premium and it's one of my happiest expenditures. YouTube is a miraculous, unbelievable treasure trove. Learn any language, any musical instrument, any academic subject. TV clips from the 80s that someone taped on VHS for some reason. Isaac Arthur, Veritasium, Numberphile. I've gotten more value from YouTube than any other single site on the internet, and it's not close!
Time-keeping is vastly cheaper. People don't want grandfather clocks. They want to tell time. And they can, more accurately, more easily, and much cheaper than their ancestors.
The guillotine remark resonates in today's reality because people feel scammed. Tone-policing the symptom while ignoring the cause is naive.
The C15 thread shows exactly why: It beats modern trucks in pure utility. Today we are paying more for less value.
It is exactly the wealth extraction Ray Dalio describes in Principles for Dealing with the Changing World Order (Stage 5 of the debt cycle), resulting in internal conflict.
To be fair, they killed a lot of people before killing the royalty as well. And then, when you dig a bit deeper, you realise that the royalty did quite a bit of the killing itself, just shortly before. It's amazing, it's almost like history does not happen in a vacuum, and events depend on the cultural context and other events that happened previously.
I'm not sure you have to be terribly right wing to say that a "societal movement" which includes something called "The Reign of Terror", in which tens of thousands of people were executed, was a bad thing. (https://en.wikipedia.org/wiki/French_Revolution#Reign_of_Ter...)
There are some situations with tricky lifetime issues that are almost impossible to write without this pattern. Trying to break code out into functions would force you to name all the types (not even possible for closures) or use generics (which can lead to difficulties specifying all required trait bounds), and `drop()` on its own is of no use since it doesn't affect the lexical lifetimes.
Conversely, I use this "block pattern" a lot, and sometimes it causes lifetime issues:
    let foo: &[SomeType] = {
        let mut foo = vec![];
        // ... initialize foo ...
        &foo
    };
This doesn't work: the memory is owned by the Vec, whose lifetime is tied to the block, so the slice is invalid outside of that block. To be fair, it's probably best to just make foo a Vec, and turn it into a slice where needed.
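For reference, here's a sketch of that working variant (with a placeholder `i32` element type and trivial initialization standing in for the real ones): the block hands back the owning Vec itself, and a slice is borrowed only afterwards, so nothing dangles.

```rust
fn main() {
    // The block returns the owning Vec, so the backing memory
    // outlives the block and no borrow escapes it.
    let foo: Vec<i32> = {
        let mut foo = vec![];
        foo.extend([1, 2, 3]); // placeholder for "... initialize foo ..."
        foo
    };
    // Borrow a slice only at the point where one is actually needed.
    let view: &[i32] = &foo;
    assert_eq!(view, [1, 2, 3]);
}
```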
Unless I'm misunderstanding, you'd have the same lifetime issue if you tried to move the block into a function, though. I think the parent comment's point is that it causes fewer issues than abstracting to a separate function, not necessarily compared to inlining everything.
There actually is one idea for cleaning up debris in high orbit: you launch tons of very fine powder into the orbits you wish to clear. These orbiting particles create drag on anything up there, so that its orbit degrades much faster. But because the particles themselves are so tiny, they have a very low ballistic coefficient and will deorbit quickly on their own.
Hmm, seems like it would work for 800 km, but maybe not for 1000+ km? Just based on what he says there, which is that each 100 km increase is a factor of 10 in deorbit time, and it's 1 year at 800 km.
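Taking that quoted scaling at face value (a factor of 10 per 100 km, anchored at 1 year for 800 km; the formula is just my restatement of those two numbers, not an orbital-mechanics model), the times climb fast:

```rust
// Rough deorbit-time estimate from the scaling quoted above:
// 1 year at 800 km, times 10 for every additional 100 km of altitude.
fn deorbit_years(altitude_km: f64) -> f64 {
    10f64.powf((altitude_km - 800.0) / 100.0)
}

fn main() {
    for alt in [800.0, 900.0, 1000.0, 1100.0] {
        println!("{alt} km -> ~{} years", deorbit_years(alt));
    }
}
```

So 1000 km already works out to roughly a century, which is why the powder trick looks plausible at 800 km but not much higher.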
I'm not sure I believe that operational satellites would be unaffected by sustained bombardment with tungsten particles at orbital velocity (doubled for head-on collisions), even if they are 10 microns.
If we assume there's some altitude that's so polluted by debris that we need to intervene, it might not have that many functional satellites left. Cleaning up the orbit in 1 year might be something the world could agree to if the alternative is waiting 5 years for it to clear up by itself.
I think the burden to show that AI is not thinking lies on the skeptics. There are two broad categories of arguments that skeptics use to show this, and they are both pretty bad.
The first category is what I'd call "the simplifying metaphor", in which it is claimed that AIs are actually "just" something very simple, and therefore do not think.
- "AIs just pick the most likely next token"
- "AI is just a blurry jpeg of the web" (Ted Chiang)
- "AIs are just stochastic parrots"
The problem with all of these is that "just" is doing an awful lot of work. For instance, if AIs "just" pick the most likely next token, it is going to matter a lot _how_ they do that. And one way they could do that is... by thinking.
There are many different stochastic processes that you could use to try to build a chat bot. LLMs are the only one so far that actually works well, and any serious critique has to explain why LLMs work better than (say) Markov chains despite "just" doing the same fundamental thing.
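To make that contrast concrete, here is roughly what the "just picks the most likely next token" baseline amounts to: a toy bigram model (entirely my own construction, not anyone's real system) that greedily emits the most frequent follower of the current word.

```rust
use std::collections::HashMap;

// Toy bigram "language model": for each word, count which words
// follow it in the training text.
fn train(text: &str) -> HashMap<&str, HashMap<&str, u32>> {
    let words: Vec<&str> = text.split_whitespace().collect();
    let mut counts: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
    for pair in words.windows(2) {
        *counts.entry(pair[0]).or_default().entry(pair[1]).or_default() += 1;
    }
    counts
}

// Generate greedily: always emit the most frequent next word.
fn generate<'a>(
    model: &HashMap<&'a str, HashMap<&'a str, u32>>,
    start: &'a str,
    len: usize,
) -> Vec<&'a str> {
    let mut out = vec![start];
    let mut cur = start;
    while out.len() < len {
        // "Just pick the most likely next token."
        match model.get(cur).and_then(|next| next.iter().max_by_key(|&(_, c)| c)) {
            Some((&word, _)) => {
                out.push(word);
                cur = word;
            }
            None => break, // dead end: this word was never followed by anything
        }
    }
    out
}

fn main() {
    let model = train("the cat sat on the mat the cat sat");
    let out = generate(&model, "the", 5);
    println!("{:?}", out); // prints ["the", "cat", "sat", "on", "the"]
}
```

This, too, "just predicts the next token", which is exactly why the slogan explains nothing: the entire question is how the predictor arrives at its probabilities.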
The second category of argument is "AIs are dumb". Here, skeptics claim that because AIs fail at task X, they aren't thinking, because any agent capable of thought would be able to do task X. For instance, AIs hallucinate, or AIs fail to follow explicit instructions, and so on.
But this line of argument is also very poor, because we clearly don't want to define "thinking" as "a process by which an agent avoids all mistakes". That would exclude humans as well. It seems we need a theory that splits the universe of intellectual tasks into "those that require thinking" and "those that don't", and then we need to show that AI is good only at the latter, while humans are good at both. But, unless I missed it, no such theory is forthcoming.
"Splitting the universe of intellectual tasks" would be a gigantic job. Various AI implementations already fail at so many tasks it seems reasonable for skeptics to claim the AI is not yet thinking, and the burden is on the implementers to fix that.
> "Splitting the universe of intellectual tasks" would be a gigantic job
What I mean is a theory that allows you to categorize any given task according to whether it requires "thinking" or not, not literally cataloging all conceivable tasks.