> And quite an elegant one at that. Without involving any government identifiers or anything else.
and this:
> There is a law, and there will be others to help improve it.
> Combined with the widespread implementation of TPM, this will become even more feasible.
Without coming to the obvious conclusion that the next step will be even further down the road of tying your every action back to a real, verified, trackable identity.
We should collectively make sure that any PRs trying to land these changes are very well reviewed. We wouldn't want any security holes to slip by. I think a couple dozen rounds of reviews should suffice. I've heard great things about how productive AI can be at generating very thorough code quality assessments. After all, we should only ship it once it's perfect.
To be more direct - if you're in any editorial position where something that smells like this might require your approval, please give it the scrutiny it deserves. That is, the same scrutiny that a malicious actor submitting a PR that introduces a PII-leaking security hole would receive. As an industry we need to civil disobedience the fuck out of this.
The PRs should only be allowed if they only create a flag when the user is underage. Otherwise it's just another point of data that makes fingerprinting easier.
There is already age verification at the ISP level. They only sell Internet service to adults. What the adults choose to do with it or with whom they share it with should be of zero concern to the government.
Of course, that's an ineffective argument, because the long-term goal of these laws (in the sense of, "the goal of the system is what it does") was never going to be about keeping kids off the Internet.
Being "trivial to comply with" is completely disjunct and not at all an argument against "this type of law is fundamentally at odds with the liberty and self-determination that open source projects require and should protect." It's a shot across the bow to open-source, it's literally the government telling you what code your computer has to run. It is gesturing in the direction of existential threat for Free software and I am not exaggerating. It's purposefully "trivial" so you don't notice or protest too much that this is the first time the State is forcing you to include something purely of their own disturbed ideation in your creative work.
Free software is already mandated to do a lot of things, like not defraud the user. If you make a bitcoin wallet that sends 5% of your money to the developer without asking I'm pretty sure you'll be prosecuted, so the government is compelling you to ask the user for consent to do that.
When you make food you're compelled to write the ingredients. We tolerate these because they are obvious and trivial, but pedantically, food labelling laws also violate the first amendment.
> Free software is already mandated to do a lot of things, like not defraud the user.
Surely you recognize the difference between "you cannot go out of your way to do crime" and "your software must include this specific feature"??
> When you make food you're compelled to write the ingredients.
Well, the point about how this affects open source is that under a similar California law, every home kitchen would need to be equipped with an electronic transponder whose purpose is to announce to the world what ingredient bucket you used for tonight's casserole.
In the earnest interpretation of your question that presumes you're not trying to drag this into a quagmire of nitpicking over the metaphor, the analogous part of the California law to the casserole ingredient advertisement is announcing the user's age bucket to the world. The world being, any app or website that happens to ask for it. I don't know why you brought browser history into this, it's not in the law and I didn't mention it.
Anyway, the whole point of the metaphor, because I feel like I will have to explain it, is that we don't put these onerous "required labeling" rules in place for private individuals going about their own lives. So just like you don't have to tell anyone who asks what you put in your dinner last night, private individuals should not have to tell anyone who asks (websites, apps) what age demographic they fall into.
Note: this is one of many arguments I endorse against this type of law. This shouldn't be interpreted as "so that's all you're worried about?" just because we dissected it in detail here.
Aaaaand to throw it all away at the end with "well when the rubber meets the road we'll comply anyway, thanks for inhaling my hot air." Take a damn stand and dare them to sue the hacker known as Linux or whatever.
I'd say that anger is better directed towards the legislators in charge of creating these absurd policies, not the folks at System76. It's not reasonable to expect a company to sacrifice its entire business on a moral battlefield.
I'm old enough to remember discussions around the meaning of `User-Agent` and why it was important that we include it in HTTP headers. Back before it was locked to `Chromium (Gecko; Mozilla 4.0/NetScape; 147.01 ...)`. We talked about a magical future where your PDA, car, or autonomous toaster could be browsing the web on your behalf, and consuming (or not consuming) the delivered HTML as necessary. Back when we named it "user agent" on purpose. AI tooling can finally realize this for the Web, but it's a shame that so many companies who built their empires on the shoulders of those visionaries think the only valid way to browse is with a human-eyeball-to-server chain of trust.
Me too, but it died when ads became the currency of the web. If the reason the site exists is to serve ads, they're not going to let you use a user agent that doesn't display the ads.
> If the reason the site exists is to serve ads, they're not going to let you use a user agent that doesn't display the ads.
They've been giving it the old college try for the better part of two decades and the only website I've had to train myself not to visit is Twitch, whose ads have invaded my sightline one time too many, and I conceded that particular adblocking battle. I don't get the sense that it's high on the priority list for most sites out there (knock on wood).
People who block ads are a minority. Sites that serve heavy content like video would care if someone wastes their resources but blocks ads, but why would a site that serves a few KBs of text spend the resources on blocking such users or making the ads beat the ad blocker in a tiresome cat and mouse game?
Those users could even share or recommend the site to someone else who doesn't use ad blockers, so it actually makes sense to not try to battle ad blockers if you want to make your site more popular.
This makes sense for sites that rely on network effects, like forums or classified ad sites and so on. Unless they have a near monopoly or some really valuable content, they would benefit financially if they let people block their ads.
I can't back that up with data or anything, but it makes sense to me.
An ad blocker is only a few clicks away, and a surprisingly large share of users run one. So they might not like it, but they're already letting plenty of users use an agent that doesn't display the ads.
Just like then we were naive about folks not abusing these things to the point of making everyone need to block them to oblivion. I think we are relearning these lessons 30 years later.
There was a concept named Web 3.0 a while ago, aka the 'Semantic Web'. It wasn't the crypto/blockchain scam that we call Web3 today. The idea was to create a web of machine readable data based on shared ontologies. That would have effectively turned the web into a giant database of sorts, that the 'agents' could browse autonomously and derive conclusions from. This is sort of like how we browse the web to do research on any topic.
Since the data was already in a structured form in Web 3.0 instead of natural language, the agent would have been nowhere near the energy hogs that LLMs are today. Even the final conversion of conclusions into natural language would have been much more energy-efficient than the LLMs, since the conclusions were also structured. Combine that with the sorts of technology we have today, even a mediocre AI (by today's standards) would have performed splendidly.
Opponents called it impractical. But there already were smaller systems around from various scientific fields, operating on the same principle. And the proponents had already made a lot of headway. It was going to revolutionize information sharing. But what I think ultimately doomed it is the same reason you mentioned. The powers that be didn't want smarter people. They wanted people who earned them money. That means those who spend their attention on doomscrolling feeds, trash ads and slop.
> but it's a shame that so many companies who built their empires on the shoulders of those visionaries think the only valid way to browse is with a human-eyeball-to-server chain of trust.
Yes, this! But only when your eyeball and attention earns them profit. Otherwise they are perfectly content with operating behind your backs and locking you out of decisions about how you want to operate the devices you paid for in full. This is why we can't have good things. No matter which way you look, the ruins of all the dreams lead to the same culprit - the insatiable greed of a minority. That makes me question exactly how much wealth one needs to live comfortably or even lavishly till their death.
Mom can't figure out what they are or how to use them. They bind you to your device/iCloud/Gaia account so if it gets stolen/banned you're out of luck (yeah yeah multiple devices and paths to auth and backup codes, none of that matters). It's one further step down the attested hardware software and eyeballs path. Passwords forever, shortcomings be damned.
> As of October 2025, passkey login has been fully rolled out and is now required for members with Health Savings Accounts (HSAs) and Reimbursement Accounts (RAs) who use the HealthEquity Mobile app and web experience.
The FAQ is a little misleading by saying WHEN your account has a passkey this and that, but reality is that after October they made them completely mandatory, no bypass, no exceptions. 100% coverage.
Oh, and by the way, passkeys have been broken on PC/Linux when using Firefox for months:
> There Was A Problem: We encountered an error contacting the login service. Please try again in a few minutes.
Neat. You have to use Chrome or Edge.... For months, after making it mandatory...
That's weird, I can log in to my HealthEquity account (which contains an HSA) without any issues and I don't have a passkey set up. I confirmed it just now just in case.
That article does say "HealthEquity Mobile and web experience" so maybe it's just for customers who use both, I only use web.
>They bind you to your device/iCloud/Gaia account so if it gets stolen/banned you're out of luck
This is the biggest myth/misconception I see repeated about passkeys all the time. It's a credential just like your password. If you forget it, you go through a reset flow where a link is sent to your email and you just set up a new one.
And if it happens to be your Gmail account that you're locked out of, you need to go through the same Google Account Recovery flow regardless of whether you're using a password or a passkey.
First, in relation to TFA: even if you regain access through a recovery channel, any data that was encrypted using your lost passkey will now be gone.
There are also many exciting new ways you can lose your passkey that weren't the case with a password you can keep in your head. The person you responded to is worried about big tech randomly banning you and making you lose access; meanwhile, I'm mostly worried about losing the physical device containing the key. I don't think I will forget, say, my Google password unless I get Alzheimer's or get hit in the head by a hammer, at which point I will have bigger problems than a lost Google account.
And let's not pretend the account recovery process is always smooth and easy. They may require evidence from other accounts of yours that you cannot access now due to the key loss. They may demand government IDs that might have been lost alongside your device. They may also just deem your recovery attempt fraudulent and ban you for no reason (which is similar to the scenario the post you are replying to described).
Genuine question: what if the recovery asks for a 2nd factor that's e.g. the device which you lost? Is that common?
Personally I don't really trust companies to not do a whoopsie and permanently lock you out when you lose credentials. Especially when the company is big or hard to access in person.
For someone like me who already uses a password manager for everything, passkeys seem to add no security while reducing usability and control.
> For someone like me who already uses a password manager for everything, passkeys seem to add no security while reducing usability and control.
One advantage of passkeys is that they’re phishing resistant. They’re bound to the website that you created them for, it’s impossible to use them for a different website.
> Genuine question: what if the recovery asks for a 2nd factor that's e.g. the device which you lost? Is that common?
Instagram does something similar. If you have no logged-in device and you reset your password, good luck getting in, cuz it wants you to log in from a device "it recognizes" or else it won't let you log in.
I was planning to make use of passkeys when logging on to various services, so I ordered three physical devices, supporting passkeys (yubikey). I ordered USB C and USB A variants, with NFC support.
Is this a mistake? I am already using password manager and totp for my accounts, but I am tired of dealing with passwords.
Even when using a password manager (Bitwarden in my case), it just gets tedious bringing out my phone, starting the auth app, locating the correct account, reading the 6-digit token and logging on.
Sure. But I think that is the same scenario as me losing my phone today, since I use that for two-factor auth.
My plan was to continue using Bitwarden for passwords as well, but more as a break-glass mechanism that I rarely use. I want to use passkeys mostly for convenience.
You're good. The relevant advice in the article is to not reuse keys for encryption and auth.
Encrypting password manager database with a passkey or other authentication key on one of those yubikeys would be the mistake. Encrypting it with a separate dedicated key (or passphrase) on the same yubikey in parallel to its passkeys is fine.
> A safe password and a good password manager are way better, they don't lock you into any platform.
An open, cross-platform passkey implementation does all that too, and on top of that prevents you from accidental password leaks via logs, MITM etc. by default.
> It's super sad to see all kinds of websites offering you to add a passkey when you log in.
As long as they're not forcing you to add one, what exactly is your problem with having more choice?
Personally, I am grateful for every site that doesn't require my phone number to sign up and uses passkeys for authentication instead, yet I also don't want SMS authentication banned for everybody since I understand it currently works better than Passkeys for many people.
Passwords are terrible UX for old people in my experience. They try to use the same password everywhere, but then password complexity requirements mean they can't use the exact same password everywhere, and then they forget which variant they used on which service, so they just end up going through the reset-password flow every time they sign in. I am not convinced that's a better UX than them just using their fingerprint or face to log in.
Biometric keys are still a niche techie thing that the average person probably doesn't even know exists. Most people will be using passkeys exclusively through their phones, often unintentionally. And outside the first world it is not uncommon for people to own no computing devices apart from their phones.
Backup keys and recovery codes also do not solve all cases of key loss. One thing I worry about is what happens if I am traveling in a foreign country and lose my belongings. In the past, if I could convince someone to let me use his computer, I could at least log into my email account as long as I remembered my password. If everything is passkeys then I will be locked out of all my online accounts until I make it back home, assuming that I have actually properly set up the backup device and keys. Humans are not very good at making sure that backups actually work.
> Biometric keys are still a niche techie thing that the average person probably doesn't even know exist.
Is it? Maybe I'm in a bubble but feels like most people I know unlock their phone with biometrics. Sure few do that on their laptop, even less on their desktop, but I imagine that explaining it's "like unlocking your phone" would help those very numerous people (if you have metrics on biometrics on phone, please do share, genuinely curious) see that it's basically doing what they already do on more devices.
For a random website, no, for bank and primary email (used for account recovery), they probably should.
It honestly takes a minute to add a key and it's just that, a physical key.
IMHO what's risky in terms of UX and habits is precisely that most workflows don't highlight this. People are rightfully scared of losing that one precious key, so they don't activate 2FA at all. If instead the UX made clear, when they activate 2FA, that they only have one key stored, and that adding a second key or saving recovery codes (most services propose that option for 2FA authenticator apps, but not for hardware passkeys AFAIK) is what makes them safe both against attackers and against their own accidents (shit happens), then maybe behaviors would change.
Anyway, yes, you're right, most people don't do that or aren't even aware of it, but arguably, as more and more important and intimate parts of our lives move online, it becomes crucial for one's own sanity to better understand how this all works.
> For a random website, no, for bank and primary email (used for account recovery), they probably should.
Even for this, for grandma, this is probably still asking for a lot.
Grandma's bank will have a recovery option even if she's tossed her phone, computer and hardware token in the ocean, and then had a stroke which made her forget any passphrases or whatever: You can call the bank and physically authenticate yourself with a passport, driver's licence or some other ID. It's a bitch to do, you may have to go to an actual bank branch, but grandma will get access to her money again. Meanwhile, her access to physical mail doesn't stop just because she's forgotten some passphrase or lost her phone.
Even techy people get caught out by Google forcing 2FA, while casuals don't even consider the possibility of losing access to their email. While both the rhetorical you and grandma both should probably have a bulletproof recovery option for their email, since it will be the foundation of their digital identity, getting them to acknowledge the problem is going to be hard, and the solution, paying for a Yubikey or some other house of cards solution, is a tough sell.
Too bad the spec is stupid and requires password managers to be identifiable so servers can deny the "insecure ones".
It's already a pain to use KeePassXC for OTP since they all want you to use their apps, but it's still doable (the worst offender being Steam, where you have to hack your own app to extract the OTP secret). With passkeys you won't have a choice but to use The Google Authenticator™ etc., because eventually some exec will find they can block every provider except their own to boost app-download KPIs.
I really like the concept of passkeys; the simple fact of using asymmetric keys is so much better than handing over the secret to prove you have it. But the spec is hostile and designed for vendor lock-in.
No, the spec is for companies that need to enforce higher levels of security so that you can e.g. only enable Yubikeys in your env.
I hate big tech just like anybody else but this is just spreading FUD right now.
Also execs can already enforce their apps only - banking apps for approving transactions are already a thing at least in europe, no fido passkey needed.
But didn't the author hint that this could get blocked?
My general read on passkeys and their implementers is that exportability is seen as a risky feature, and there's a push to make it as opaque as possible, likely through attestation or similar mechanisms.
Also, a password could be the passkey: the passkey protocol is basically a way to send a server an authenticated public key. The client could deterministically convert passwords to key pairs and authenticate with those.
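A minimal sketch of that idea, assuming Python and a PBKDF2 seed derivation (the function names, salt prefix, and iteration count are my own, not from any spec):

```python
import hashlib

def derive_seed(password: str, rp_id: str) -> bytes:
    # Stretch the password into 32 bytes of key material, domain-separated
    # by the relying-party ID so every site gets an unrelated key.
    return hashlib.pbkdf2_hmac(
        "sha256",
        password.encode(),
        ("passkey-demo:" + rp_id).encode(),  # salt binds the key to the site
        200_000,
        dklen=32,
    )

# In a real scheme this seed would feed an Ed25519 keypair generator; the
# public key is registered with the server and only signatures leave the client.
pw = "correct horse battery staple"
assert derive_seed(pw, "example.com") == derive_seed(pw, "example.com")  # deterministic
assert derive_seed(pw, "example.com") != derive_seed(pw, "example.org")  # per-site keys
```

The same password then yields the same key pair on any device, with no sync or attestation involved, which is exactly the property vendors seem uninterested in.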
Not an insider but someone who uses the tools. It's a branding update, nothing more. The models haven't gotten any less sanctimonious, but the companies behind them have stopped harping on their restrictions in order to appeal to a broader customer base (gov contracts, etc.)
So the guardrails (for you and me) are still there. They just stopped committing the unforced error of excluding themselves from federal procurement. Under a different administration the requirement might change, and you might see them boasting about "safety" once more.
I don't think it's sanctimonious to say, hey, I don't want the technology I work on to be used for targeting decisions when executing people from the sky. Especially as the tech starts to play more active roles. You know governments will be quick to shift blame to the model developers when things go wrong.
> I don't want the technology I work on to be used for targeting decisions when executing people from the sky
One problem I have with this specific case and Anthropic/Claude working with the DOD is that I feel an LLM is the wrong tool for targeting decisions. Maybe given a set of 10 targets an LLM can assist with compiling risks/rewards and then prioritizing each of the 10 targets, but it seems like there would be much faster and better ways to do that than asking an LLM. As for target acquisition and identification, I think an LLM would be especially slow and cumbersome vs. one of the many traditional ML AIs that already exist. The DOD must be after something else.
> I don't want the technology I work on to be used for targeting decisions when executing people from the sky
What do you do when the government come to you and tell you that they do want that, and can back it up with threats such as nationalizing your technology? (see Anthropic)
We're back to "you might not care about politics, but that won't stop politics caring about you".
> I know this is a foreign concept to some, but you can have a backbone.
Challenge it in court. Move the company to a different jurisdiction. Burn everything down and refuse to comply.
Challenge in court is fine, even healthy.
Threatening to burn everything down and refuse to comply might well work: it amounts to daring Trump to a game of Russian roulette over popping the bubble that's only just keeping the US economy out of recession. On the basis that he TACOs a lot, I can see it working in a way it wouldn't if he were a sane leader making the same actual demands for sane reasons.
Move the company to a different jurisdiction? That would have worked if AI was a few hundred people and a handful of servers, as per classic examples of:
At the height of its power, Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only 13 people. Where did all those jobs disappear? And what happened to the wealth that all those middle class jobs created?
But (I think) now that AI needs new data centres so fast and on such a scale that they're being held back by grid connection and similar planning permission limits, this isn't a viable response.
They can be burned down, but I think they can't realistically be moved at this point. That said, I guess it depends on how much Anthropic relies on their own data centres vs. using 3rd parties, given Amazon's announced AWS sovereign cloud in Europe?
Unicode is both the best thing that's ever happened to text encoding and the worst. The approach I take here is to treat any text coming from the user as toxic waste. Assume it will say "Administrator" or "Official Government Employee" or be 800 pixels tall because it was built only out of decorative combining characters. Then put it in a fixed box with overflow hidden, and use some other UI element to convey things like "this is an official account."
The worst part, which this article doesn't even touch on with normalizing and remapping characters, is the risk that your login form doesn't do it but your database does. Suddenly I can re-register an existing account by using a different set of codepoints that the login system doesn't think exists but the auth system maps to somebody else's record.
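A toy illustration of that mismatch (the form/database layer split is hypothetical; the normalization behaviour is standard NFKC):

```python
import unicodedata

registered = {"admin"}  # stored in its normalized form

def signup_form_allows(name):
    # front end compares raw codepoints, so the spoof looks unregistered
    return name not in registered

def db_key(name):
    # database layer normalizes before uniqueness checks and lookups
    return unicodedata.normalize("NFKC", name)

spoof = "\uFF41dmin"  # U+FF41 FULLWIDTH LATIN SMALL LETTER A, then "dmin"
assert signup_form_allows(spoof)         # passes the form's existence check
assert db_key(spoof) == db_key("admin")  # ...but collides with admin's record
```

The fix is boring: pick one normalization form and apply it at every layer that ever compares usernames, ideally at the outermost boundary.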
For some sorts of "confusables" you don't even need Unicode. Depending on the cursed combination of font, kerning, rendering and display, `m` and `rn` are also very hard to distinguish.
> or be 800 pixels tall because it was built only out of decorative combining characters
Also known as Zalgo. But it seems most renderers nowadays overlay multiple combining marks over each other rather than stack them, which makes it look far less eldritch than it used to.
It tracks with the approximate 70:30 split we inexplicably observe in other seemingly unrelated population-wide metrics, which I suppose makes sense if 30% of people simply lack the ability to reason. That seems more correct to me than "the question is framed poorly" - I've seen far more poorly framed ballot referendums.
While I’m sure it’s more than 0%, seems more likely that somewhere between 0% and 30% don’t feel obligated to give the inquiry anything more than the most cursory glance.
> which I suppose makes sense if 30% of people simply lack the ability to reason
I think it would be better to say that 30% of people either lack the ability to reason (inarguably true in a few cases, though I'd suggest, and hope, an order of magnitude or two less than 30%, as that would be a life-altering mental impairment), or just can't generally be bothered to, or just didn't at the time of being asked this particular question (because they couldn't be bothered, or because they felt some social pressure to answer quickly rather than taking more than an instant to think).
An automated system like an LLM ought not to have this problem. It has no path to turn off or bypass any function that it has, so if it could reason, it would.
This is something I have wondered about before: whether AIs are more likely to give wrong answers when you ask a stupid question instead of a sensible one. Speaking personally, I often cannot resist the temptation to give reductio-ad-absurdum answers to particularly ridiculous questions.
If 30% of humans on the internet can't be bothered to make an effort to answer stupid questions correctly, then one would expect AIs to replicate this behaviour. And if humans on the internet sometimes provide sarcastic answers when presented with ridiculous questions, one would expect AIs to replicate this behaviour as well.
So you really cannot say they have no incentive to do so. The incentive they have is that they get rewarded for replicating human behaviour.
I don't think 30% of people can't reason. I think 30% of people will fail fairly simple trick questions on any given attempt. That's not at all the same thing.
Some people love riddles and will really concentrate on them and chew them over. Some people are quickly burning through questions and just won't bother thinking it through. "Gotta go to a place, but it's 50 feet away? Walk. Next question, please." Those same people, if they encountered this problem in real life, or if you told them the correct answer was worth a million bucks, would almost certainly get the answer right.
This. The following question is likely to fool a lot of people, too. "I have a rooster named Pat. (Lots of other details so you're likely to forget Pat is a rooster, not a hen). Pat flies to the top of the roof and lays an egg right on the ridge of the roof. Which way will the egg roll?"
But if you omit the details designed to confuse people, they're far less likely to get it wrong: "I have a rooster named Pat. Pat flies to the top of the roof and lays an egg right on the ridge of the roof. Which way will the egg roll?"
It's not about reasoning ability, it's about whether they were paying close attention to your question, or whether their minds were occupied by other concerns and didn't pay attention.
What does “get it wrong” mean for you with this question? Or what is “getting it right” here? If I hear that Pat is a rooster and I understand and retain that information, I will look at you like you are dumb for telling such an impossible story. If I don't, I will look at you like you are dumb, because how is anyone supposed to know which way an egg laid on a ridge will roll? How are you supposed to even score this?
My interpretation is that Pat is a rooster and he has laid an egg. That's in the question. A normal rooster can't normally lay an egg, but so what, that's completely irrelevant. Maybe Pat is not a normal rooster. Maybe by "lay" an egg, the question meant "put it down carefully". Maybe it's just that the questioner's English is poor and when they said rooster they meant hen.
"Getting it right" for this particular trick question means saying "Hey, roosters can't lay eggs". If someone tries to figure out which way the egg will roll then they've missed the trick. In most cases the person's response will tell you whether they caught the trick or not, though in the case of someone who just looks at you like you're dumb and doesn't say anything I will grant that you wouldn't be able to tell until they said something. But their first verbal response would probably reveal whether they saw through the trick question or not.
Tell me you've never done any farming in your life without telling me you've never done any farming in your life. The difference between male and female animals matters, a lot, to farmers (or ranchers). There's a reason the English language has the words cow and bull, sow and boar, ewe and ram, rooster and hen, nanny and billy, mare and stallion, and many more (and has had those words for centuries). And that reason is precisely because of how mammal (and avian) reproduction works. A cow can't do a bull's job, nor vice-versa, if you want to have calves next year, and grow the size of your herd (or sell the extra animals for income). And so, centuries ago, English-speaking farmers who didn't want to spend the extra syllables on words like "male cattle" and "female cattle" came up with handy, short words (one-syllable words for most species, though not goats and horses) to express those distinctions. Because as I mentioned, they matter a lot when you're raising animals.
You might believe there is intrinsic sexual dimorphism among mammals and birds. You might even have overwhelming experimental and scientific evidence that proves it. But ask yourself: is it worth losing your job over?
When you are doing workshops, particularly teaching something that people are "sitting through" rather than engaging with, you see very similar ratios on end of segment assessment multiple choice questions. I mentioned elsewhere that this is the same kind of ratio you see on cookie dialogs (in either direction).
Think basic security (password management, email phishing), H&S etc. I've run a few of these, and as soon as people hear they don't have to get it right, a good portion just click through (to get to what matters). Nearly 10 years ago I had to make one of my security-for-engineers tests fail-able with penalty because the front-end team were treating it like it didn't matter; immediately their results effectively matched the backend team's, who viewed it as more important.
I talked to an actor a few days ago, who told me he files his self-assessment on the principle "If I don't immediately know the answer, just say no and move on". I talked to a small company director about a year ago whose risk assessments were "copy+paste a previous job and change the last one".
Anyone who has analysed a help desk will know that it's common for a good 30+% of tickets to be benign 'didn't reason' tickets.
I think the take-away is that many people only bother to reason about their own lives, not some third party's bullshit questions.
Is this your experience? Do you think 30% of your friends or family members can't answer this question? If not, do you think your friends or family are all better than the general population?
I'd look for explanations elsewhere. This was an online survey done by a company that doesn't specialize in surveys. The results likely include plenty of people who were just messing around, cases of simple miscommunication (e.g., asking a person who doesn't speak English well), misclicks, or not even reaching a human in the first place (no shortage of bots out there).
People often trip up on similar questions, anything to do with simple math. You know, when they go out on the street and ask random people: if 5 machines can produce 5 parts in 5 minutes, how long will it take for 100 machines?
Unlike the car question, where you can assume the car is at home and so the most probable answer is to drive, with the machines it gets complicated, since the question doesn't specify whether each machine makes one part or whether they depend on each other (which is pretty common in parts production). If they are in series, and the time to first part differs from the time to produce 5 parts, then the answer for 100 machines would be the time to produce the first part. Whereas if each machine is independent and takes 5 minutes to produce a single part, the time would be 5 minutes.
Theory of mind won’t help you answer this question. It is obviously an underspecified question (at least in any context where you are not actively designing or thinking about some specific industrial process). As such, theory of mind indicates that the person asking you is either not aware that they are asking an underspecified question, or is out to get you with a trick. In the first case it is better to ask a clarifying question. In the second case your chosen answer depends on your temperament. You can play along with them, or give an intentionally ridiculous answer, or just kick them in the shin to stop them messing with you.
There is nothing “mathematical” about any of this though.
>As such theory of mind indicates that the person asking you is either not aware that they are asking an underspecified question, or are out to get you with a trick.
Context would be key here. If this were a question on a grade school word problem test then just say 100, as it is as specified as it needs to be. If it's a Facebook post that says "We asked 1000 people this and only 1 got it right!" then it's probably some trick question.
If you think it's not specified enough for a grade school question, then I would challenge you to come up with a version that's specified rigorously enough for any sufficiently picky interviewee. (Hint: This is not possible)
>There is nothing “mathematical” about any of this though.
Finding the correct approach to solve a problem specified in English is a mathematical skill.
> If this were a question on a grade school word problem test then just say 100
Let me repeat the question again: "If 5 machines can produce 5 parts in 5 minutes, how long will it take for 100 machines?" Do you think that by adding 95 more machines they will suddenly produce the same 5 parts 95 minutes slower?
What kind of machine have you encountered where buying more of them made the ones you already had work worse?
> then I would challenge you to come up with a version that's specified rigorously enough for any sufficiently picky interviewee.
This is nonsense. The question is underspecified. You don't demonstrate that something is underspecified by formulating a different, well-specified question. You demonstrate it by showing that there are multiple different potentially correct answers, and one can't know which one is right without obtaining some information not present in the question.
Let me show you that demonstration. If the machines are for example FDM printers each printing on their own a benchy each, then the correct answer is 5 minutes. The additional printers will just sit idle because you can't divide-and-conquer the process of 3d printing an object.
If the machines are spray paint applying robots, and the parts to be painted are giant girders then it is very well possible that the additional 95 paint guns make the task of painting the 5 girders quasi-instantaneous. Because they would surround the part and be done with 1 squirt of paint from each paint gun. This classic video demonstrates the concept: https://www.youtube.com/shorts/vGWoV-8lteA
This is why the question is underspecified. Both 1 ms and 5 minutes are possibly correct answers depending on what kind of machine the "machine" is. And when that is the case, the correct answer is neither 1 ms nor 5 minutes, but "please, tell me more. There isn't enough information in the question to answer it."
Note: I'm struggling to imagine a possible machine where the correct answer is 100 minutes. But I'm sure you can tell what kind of machine you were thinking of.
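For what it's worth, the standard rate arithmetic behind the intended "5 minutes" answer can be sketched out. This is a minimal illustration, assuming identical, independent machines that each produce one part every 5 minutes (exactly the assumption the thread points out the question never states):

```python
def time_to_produce(parts: int, machines: int, minutes_per_part: float = 5.0) -> float:
    """Minutes for `machines` identical, independent machines to make `parts` parts.

    Assumes the work divides evenly: each machine produces one part every
    `minutes_per_part` minutes, with no dependencies between machines.
    """
    rate = machines / minutes_per_part  # total parts per minute
    return parts / rate

# 5 machines, 5 parts: 5 minutes.
print(time_to_produce(5, 5))      # 5.0
# 100 machines, 100 parts: still 5 minutes, not 100.
print(time_to_produce(100, 100))  # 5.0
```

Note that this model silently breaks for the FDM-printer case above (100 machines, 5 parts), where the extra printers simply sit idle, which is the underspecification complaint in a nutshell.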
It's not theory of mind, it's an understanding of how trick questions are structured and how to answer one. Pretty useless knowledge after high school - no wonder AI companies didn't bother training their models for that
It's not a trick question. It has a simple answer. It's literally impossible to specify a question about real world objects without some degree of prior knowledge about both the contents of the question and the expectation of the questioner coming into play.
The obvious answer here is 100 minutes because it's impossible to perfectly encapsulate every real life factor. What happens if a gamma ray burst destroys the machines? What happens if the machine operators go on strike? Etc, etc. The answer is 100.
There are different kinds of statements. Do you mean in a defined time interval, or on average? Men are stronger than women. Does that mean there is no woman who is stronger than a man? You can't drive over 50 here. Does that mean it's physically impossible?
Well, these types of questions are looking for intelligent assumptions. Similar to IQ tests, you are supposed to recognize patterns and make educated guesses.
> Do you think 30% of your friends or family members can't answer this question? If not, do you think your friends or family are all better than the general population?
That actually would be quite feasible. Intelligence seems to be heritable and people will usually find friends that communicate on their level. So it wouldn't be odd for someone who is smarter than the general population to have friends and family who are too.
My friends and family all tell me they are above average at work, yet most of them will tell me they have coworkers who won't pay enough attention to a question to answer it correctly.
>If not, do you think your friends or family are all better than the general population?
Since most people live in social bubbles that would be a very plausible case, especially on HN.
If you're a college educated developer, with a college educated wife, and smart, well educated children, perhaps yourselves the children of college educated parents, and your social circle/friends are of similar backgrounds, you'd of course be "better than the general population".
I don't think it's a lack of the ability to reason. The question is by definition a trick question. It's meant to trip you up, like "Could God make a burrito so hot that even he couldn't touch it?", or "What do cows drink?", or "A plane crashes and 89 people die. Where were the survivors buried?"
I've seen plenty of smart people trip up or get these wrong simply because it's a random question, there's no stakes, and so there's no need to think too deeply about it. If you pause and say "are you sure?" I'm sure most of that 70% would be like "ohhh" and facepalm.
> which I suppose makes sense if 30% of people simply lack the ability to reason
You can't really infer that from survey data, and particularly from this question. A few criticisms that I came up with off the top of my head:
- What if the number were actually 60% but half guessed right and half guessed wrong?
- Assuming the 30% is a failure of reasoning, it's possible that those 30% were lacking reason at that moment and it's not a general trend. How many times have you just blanked on a question that's really easy to answer?
- A larger percentage than you expected maybe never went to a car wash or don't know what one is?
- Language barrier that leaked through vetting? (Would be a small %, granted)
- Other obvious things: a fraction will have lied just because it's funny, were suspicious, or weren't paying attention and just clicked a button without reading the question.
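The first bullet can be made concrete. If everyone who actually reasons answers correctly, and guessers pick between two options at random, then an observed 30% error rate is consistent with 60% of respondents guessing. A minimal sketch (the 50% guess-success rate is an assumption for a binary question):

```python
def observed_error_rate(guess_fraction: float, guess_success: float = 0.5) -> float:
    """Observed wrong-answer rate if reasoners are always right and a
    `guess_fraction` of respondents guess, succeeding with probability
    `guess_success`."""
    return guess_fraction * (1 - guess_success)

# 60% guessing on a two-option question shows up as a 30% failure rate.
print(observed_error_rate(0.6))  # 0.3
```

So the survey alone can't distinguish "30% can't reason" from "60% didn't bother and flipped a coin".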
I do agree that the question isn't framed particularly badly, however. I'm just pushing back on the cognitive-impairment reading, which I don't think is necessarily the explanation every time.
> And quite an elegant one at that. Without involving any government identifiers or anything else.
and this:
> There is a law, and there will be others to help improve it.
> Combined with the widespread implementation of TPM, this will become even more feasible.
Without coming to the obvious conclusion that the next step will be even further down the road of tying your every action back to a real, verified, trackable identity.