Maybe it’s just me, but I am really skeptical about the deepfake part - it’s a theoretically possible attack vector, but the only evidence they could have to support this claim would be the employee’s testimony. Targeting a particular employee with the voice of a specific person that employee knows requires a lot of information and insider knowledge.
Also, I think the article spends a lot of effort trying to blame Google Authenticator and make it seem like they had the best possible defense and yet attackers managed to get through because of Google’s error. Nope, not even close. They would have had hardware 2FA if they were really concerned about security. Come on guys, it’s 2023 and hardware tokens are cheap. It’s not even a consumer product where one can argue that hardware tokens hinder usability. It’s a finite set of employees, who need to do MFA a few times for certain services, mostly using one device. Just start using hardware keys.
Hi, David, founder @ Retool here. We are currently working with law enforcement, and we believe they have corroborating evidence through audio that suggests a deepfake is likely. (Put another way, law enforcement has more evidence than just the employee's testimony.)
(I wish we could blog about this one day... maybe in a few decades, hah. Learning more about the government's surveillance capabilities has been interesting.)
I agree with you on hardware 2FA tokens. We've since ordered them and will start mandating them. The purpose of this blog post is to communicate that what is traditionally considered 2FA isn't actually 2FA if you follow the default Google flow. We're certainly not making any claims that "we are the world's most secure company"; we are just making the claim that "what appears to be MFA isn't always MFA".
Thanks for all this insight, this is why HN rules. What is your impression of law enforcement? Everyone claims to reach out after an attack, but I've never seen follow-up of successful law enforcement activity resulting in arrests or prosecution. Thanks again.
Law enforcement is currently attempting to ascertain whether or not the actor is within the US. If it's within the US, I (personally) believe there's a good chance they'll take the case on and presumably with enough digging, will find the attacker. (The people involved seem to be... pretty good.)
But if they're outside the US (which is actually a reasonably high probability, given the brazenness of the attack, and the fact that they're leaving a lot of exhaust [e.g. IP address, phone number, browser fingerprints, etc.]), then my understanding is that law enforcement is far less interested, since it's unlikely that even an identification of the hacker would lead to any concrete results (e.g. if they were in North Korea). (FWIW, the attack was not conducted via Tor, which to me implies that the actor isn't too worried about law enforcement.)
To give you a sense, we are in an active dialogue with "professionals". This isn't a "report this to your local police station" kind of situation.
FWIW engaging simultaneously with both the FBI and the USAO/DOJ and putting pressure on DOJ to act on the case typically results in better outcomes than just assuming the SA assigned is going to follow through and bugging them about it.
Most attacks like this use stolen credentials for VOIP providers, i.e. Twilio. It's likely the FBI quickly obtained a subpoena which produced a recording. The attacker may not have known the call was being recorded.
This is an example of Google sabotaging a technology it doesn't like. I'm not saying it is a conspiracy. But by thwarting TOTP like this, Google benefits.
I really like TOTP. It gives me more flexibility to control keys on my end. And you can still use a Yubikey to secure your private TOTP key. But you can also choose to copy your private key to multiple hardware tokens without needing anyone's permission. Properly used, you can get most of the benefit of FIDO2 with a lot more flexibility.
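To illustrate that flexibility: the entire TOTP scheme (RFC 6238) is just an HMAC over a time counter, so anyone holding the shared secret can derive codes on as many devices as they like. A minimal sketch using only the Python standard library (the base32 secret below is an illustrative placeholder, not a real key):

```python
# Minimal RFC 6238 TOTP sketch, standard library only.
# The secret below is a made-up base32 string for illustration.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# The same secret provisioned on two devices (or two hardware
# tokens) produces identical codes -- no vendor permission needed.
print(totp("JBSWY3DPEHPK3PXP", for_time=0))
```

Because the secret is just bytes you control, you can load it into a YubiKey's OATH slot, a phone app, and a cold backup simultaneously.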
I actually recently deployed TOTP, and everyone was quite happy with it. But knowing that Google is syncing private keys around by default, I no longer think we can trust it.
Since you might have to delete the reply anyway, can I get a candid answer on why hardware 2FA tokens weren't part of the default workflow before the incident? Was it concern about the cost, the recovery modes, or just trust in the existing approach?
One problem with hardware keys is still SaaS vendor support. There is a very narrow path for effective enforcement: require SSO, then require hardware tokens at the SSO level. But even that is difficult to truly enforce, because the IdP often has "recovery" mechanisms that grant access without a hardware key. Google is also guilty of not adding a claim to the OIDC/SAML response verifying that a hardware token was used to log in, so vendors cannot be configured to reject logins that didn't use a hardware token.
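To make the missing claim concrete: OIDC already defines an "amr" (Authentication Methods References) claim, and RFC 8176 reserves "hwk" for hardware-key proof-of-possession. If an IdP actually emitted it, a vendor-side check could be this simple (the token below is fabricated for illustration, and a real implementation must verify the JWT signature before trusting any claims):

```python
# Hypothetical sketch: reject SSO logins whose ID token lacks an
# "amr" entry indicating a hardware key ("hwk" per RFC 8176).
# The token is fabricated and unsigned; real code must verify the
# signature with the IdP's published keys before reading claims.
import base64
import json


def used_hardware_key(id_token):
    """Check the amr claim of an (already signature-verified) ID token."""
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return "hwk" in claims.get("amr", [])


def fake_token(amr):
    """Build an unsigned demo token with the given amr list."""
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).decode().rstrip("=")
    return f'{enc({"alg": "none"})}.{enc({"sub": "user1", "amr": amr})}.'


print(used_hardware_key(fake_token(["hwk", "mfa"])))  # True
print(used_hardware_key(fake_token(["pwd", "otp"])))  # False
```

The point is that the mechanism exists in the specs; the gap is that major IdPs don't populate it, so relying parties have nothing to enforce against.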
If you have any vendors without SSO (like GitHub, because it's an Enterprise feature), you're lucky if they support hardware tokens (cool, GitHub does) and even luckier if their "require 2FA" option (which GitHub has, per organization) allows you to require hardware keys (which GitHub does not).
Distributing hardware keys to employees is one thing. Mandating them is quite another.
Yubico's U2F security key (good for FIDO2, WebAuthn, etc.) is $25, each member of your organization needs only 1 key (if they lose their key, they can get another one from IT, which can remove the old key and enroll the new one for them), with a handful of IT personnel possibly having more than 1 key for backup (this is less necessary when a group of IT holds admin permissions, as they serve as key backups for each other). $25/key amortizes out to well under $1/month considering that keys will last for years and can be transferred from one employee to the next when an employee leaves the company, and is of course usable for any vendor that supports hardware keys.
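The amortization above is easy to sanity-check (the 3-year lifespan is my assumption; keys often last longer):

```python
# Back-of-the-envelope amortization of a $25 security key.
# The 36-month useful life is an assumption; keys typically last longer
# and can be re-enrolled for the next employee.
key_cost = 25.0          # one-time cost per key, USD
lifespan_months = 36     # assumed useful life

per_month = key_cost / lifespan_months
print(f"${per_month:.2f}/user/month")  # well under $1/month
```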
Much, much cheaper than $21/user/month for GitHub Enterprise. I'm not sure what universe you live in where buying hardware keys is expensive compared to Enterprise licensing?
The universe I'm in is the one where you have to staff the IT department and they have to support the device. The IT department costs way more than $21/month.
You have a valid point that we need SaaS vendor support for SAML/whatever, but GitHub, specifically, supports SSO. Yeah, it costs money to get that feature, but security doesn't just happen. Security is expensive, but it's more expensive not to have it. In this case, it costs $21/user/month. If that's too expensive to protect the source code of the company's product, that says a lot about the company.
I've personally worked for multiple startups where rolling out hardware keys did not require making additional IT hires (we're talking about companies smaller than ~50 people). Perhaps at BigCo size, you end up needing dedicated personnel to support a hardware key rollout at that scale, but at that scale you have the budget for GitHub Enterprise anyway so the point about pricing is moot; at BigCo size there is also even more of an incentive to roll out hardware keys since you're that much more likely to get spear phished.