MattDaEskimo's comments

There are solutions; the problem is almost always discipline.

I don’t know what this means. Discipline is good, but I think you need to have good tools/primitives in place to help people exercise discipline.

(The classic example being passwords: we wouldn’t need MFA if everybody just “got good” and used strong, unique passwords everywhere. But that’s manifestly unrealistic, so instead we spend our discipline budget on getting people to use password managers and phishing-resistant MFA.)
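The “strong, unique passwords” part of the comment above is the easy bit to automate. A minimal sketch of what a password manager does for you, using Python’s standard `secrets` module (the length and alphabet choices here are illustrative, not a recommendation from the comment):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password.

    Uses secrets.choice (CSPRNG) rather than random.choice,
    which is not suitable for security-sensitive values.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The point stands either way: nobody memorizes twenty of these, which is why tooling, not discipline, is what actually gets deployed.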


Really? You don't know the difference between having a door lock, and using it?

MFA is typically enforced by organizations, forcing discipline. Individual adoption of MFA is dramatically lower.


What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stake campaigns.

Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.

Otherwise, I'd say it's an extremely lazy argument


Same reason Wikipedia deals with so many people scraping its web pages instead of using its API:

Optimizations are secondary to convenience


Accountability then


Anticipating modes of failure, creating tooling to identify and hedge against risks.


If we could do this it would have been done already. Outsourced devs would be ubiquitous.


What's also wild is this being the first comment to mention it!

Although there is an underlying truth: using LLMs for large-context tasks like coding is still extremely expensive.


This was a brave, heartwarming read. Thank you to the teams


Makes me wonder about the ratio of LLM commenters to humans who have merely adopted an LLM's syntax.

Not sure which is scarier


Surely safety does not exclusively mean guardrails, but the philosophy and ethics instilled during training?


The ethics are exactly what the DoD is complaining about. They want any legal action to not be obstructed by guardrails.


Forget legal, they want any action to not be obstructed by guardrails.


What can those of us in another country do, who unfortunately already had our identity verified through Persona (via LinkedIn, in my case)?


Organize in your country and advocate for data deletion jubilees, organize in your country to champion new taxes against US digital services, organize in your country to advocate for homegrown solutions over US tech.

If you aren't actively organizing you aren't going to accomplish anything.

Remember that people power trumps monetary power, but you have to commit for people power to work.


> advocate for homegrown solutions over US tech.

Some sweet irony about this btw.


Why? Every country on Earth is capable of creating and maintaining software. There is nothing unique about America or Silicon Valley (outside of the massive amounts of corporate welfare), devs can be found anywhere and who better to write software for local citizens than the local citizens themselves?

We know how useful open source software is, there's no reason why this can't be replicated across the planet.


Not because they cannot do it, but because of why they're doing it, which in turn shapes what they're doing. America is being perceived as isolationist, so countries respond by becoming isolationist about what software they use; whether it's open source or not is kind of irrelevant, though in several cases the software will primarily be focused on the country's own language.

The better alternative in my eyes is to contribute to existing open source, and only if the US becomes hostile against this, fork said code and move on.


From a blog post I read recently: https://thelocalstack.eu/posts/linkedin-identity-verificatio...

1. Request your data. Email idv-privacy@withpersona.com or privacy@withpersona.com. Under GDPR, they have 30 days to respond.

2. Request deletion. The verification is done. LinkedIn already has the result. There is no reason for Persona to keep your passport scan and facial geometry on their servers. Ask them to delete it.

3. Contact their DPO. dpo@withpersona.com — that’s their Data Protection Officer. If you want to object to them using your documents as AI training data under “legitimate interests,” this is where you do it.

4. Think twice before verifying. That blue badge might not be worth what you’re trading for it. A checkmark is cosmetic. Biometric data is forever.
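Steps 1–3 above boil down to sending a written request citing the right GDPR articles. A hedged sketch of drafting that email with Python's stdlib `string.Template` (the addresses come from the list above; the name, email, and legal wording are illustrative placeholders, not legal advice):

```python
from string import Template

# Illustrative GDPR request: Art. 15 (access) + Art. 17 (erasure).
# Art. 12(3) GDPR sets the one-month response deadline cited above.
DSAR_TEMPLATE = Template("""\
To: $recipient
Subject: GDPR data subject request

Hello,

Under Article 15 GDPR I request a copy of all personal data you hold
about me, and under Article 17 GDPR I request erasure of that data,
including any identity-document scans and biometric templates, once
the copy has been provided.

Please respond within the one-month deadline of Article 12(3) GDPR.

Name: $name
Email used for verification: $email
""")

draft = DSAR_TEMPLATE.substitute(
    recipient="privacy@withpersona.com",
    name="Jane Doe",
    email="jane@example.com",
)
print(draft)
```

For step 3 (objecting to "legitimate interests" processing), the same template can be pointed at dpo@withpersona.com with an Article 21 objection instead.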


As heavily discussed here 3 days ago (Persona is the same company LinkedIn uses for their ID verification process):

I verified my LinkedIn identity. Here's what I handed over

https://news.ycombinator.com/item?id=47098245

1.4K+ points, 490+ comments


Just requested deletion through this form: https://withpersona.com/dsar


> 1. Request your data. Email idv-privacy@withpersona.com or privacy@withpersona.com. Under GDPR, they have 30 days to respond.

They just won't respond; then you can wait 4+ years and nothing will happen to them. [0]

[0] https://noyb.eu/en/project/dpa/dpc-ireland


I'm very confused here. The monthly plans are meant to be used inside Google's walled garden, but people are somehow able to capture (?) and re-use the OAuth token?

Regardless, I thought it was pretty obvious that things like OpenClaw require an API account, and not a subsidized monthly plan.


Exactly. OpenClaw (or, I think, possibly an add-on/extension or unofficial method) is allowing Google's Antigravity authentication to be used from the app. This allows 'unlimited' calls to Antigravity models on a subscription, instead of the proper Gemini/Google AI Studio API-key method (charged per million tokens).

API usage can get very expensive for automated operations, especially with apps like Kilo/Roo/Cline, and now with OpenCode/OpenClaw. I often blast through $10-20 in a single day of just regular OpenCode usage through OpenRouter.
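The per-million-token billing mentioned above is simple arithmetic, which is exactly why agentic tools burn money so fast. A tiny sketch (the token counts and prices are made-up illustrative numbers, not real Gemini/OpenRouter rates):

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost of a day's usage given per-million-token prices."""
    return (input_tokens / 1e6 * in_price_per_m
            + output_tokens / 1e6 * out_price_per_m)

# e.g. 8M input + 1M output tokens at hypothetical $1.25 / $10 per million:
cost = api_cost_usd(8_000_000, 1_000_000, 1.25, 10.0)
print(f"${cost:.2f}")  # 8 * 1.25 + 1 * 10 = $20.00
```

Agentic coding tools re-send large context on nearly every call, so the input-token term dominates, and a day in the $10-20 range is easy to hit.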

If I could pay a subscription and get near unlimited use (with rate limits), of course I'd do that, but not like this. I'm pretty sure Antigravity has ToU somewhere that indicates it's only allowed for use in Antigravity and nowhere else, since I've seen other threads on this happening: https://github.com/jenslys/opencode-gemini-auth/issues/50


>and get near unlimited use (with rate limits)

But they're not near-unlimited. They're just hidden limits.


Sure. But a zero-strikes ban from your whole Google account is a grotesque evil.

Edit: maybe it's not the whole account? https://news.ycombinator.com/item?id=47116330


No - use OpenAI, no problem. OpenAI wins here big time.

