Hacker News | past | comments | ask | show | jobs | submit | encrux's comments

I think this is actually the correct way to move forward.

We should be able to verify facts about people on the internet without compromising personal data. Giving platforms the ability to select for specific demographics will, in my view, make the web a better place. It doesn't just let us age-restrict certain platforms; it can also make them more authentic. I think it's really important to be able to know some things to be true about users, simply to avoid foreign election interference via trolling, to prevent scams, and much more.

With this, enforcement would also become much easier: platforms would just have to prove that they're using this method, e.g. via an audit.
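The idea of proving a fact about a user without revealing the underlying data can be sketched with a toy credential scheme. This is a hypothetical illustration, not a real protocol: an issuer (e.g. a government service) signs a single predicate ("over_18") rather than the full identity record, so the platform learns only that predicate. A shared HMAC key stands in for what would be a public-key signature or zero-knowledge proof in practice.

```python
# Toy sketch of attribute-based verification (hypothetical protocol).
# The issuer attests to one predicate; the platform never sees a birthdate.
import hashlib
import hmac

ISSUER_KEY = b"issuer-secret"  # stands in for the issuer's signing key


def issue_credential(user_id: str, over_18: bool) -> dict:
    """Issuer signs only the predicate, not the underlying identity data."""
    claim = f"{user_id}:over_18={over_18}"
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def platform_verify(cred: dict) -> bool:
    """Platform checks the signature and the predicate -- nothing else."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"]) and cred["claim"].endswith("over_18=True")


cred = issue_credential("alice", over_18=True)
print(platform_verify(cred))
```

A real deployment would use asymmetric signatures (so the platform can't forge credentials) or zero-knowledge proofs (so the issuer can't link verifications back to the user); the HMAC here only keeps the sketch short.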


Which, ironically, is written in Rust

Well, Python is largely written in C, so there's that.

Very much depends on what you want to do.

The fact that a language model can "reason" (in the LLM-slang sense of the term) about 3D space is an interesting property.

If you give a text description of a scene and ask a robot to perform a peg-in-hole task, modern models can solve it fairly easily using movement primitives. I implemented this on a UR robot arm back in 2023.

The next logical step is to have the model output tokens in action space instead of text (code representing movement primitives). This is what models like pi0 are doing.
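The "text output" stage described above can be sketched as follows: the model emits calls to a small set of movement primitives as plain text, and a thin interpreter dispatches them to the robot. The primitive names, the plan text, and the robot stub are all hypothetical, just to show the shape of the pipeline.

```python
# Illustrative sketch: an LLM emits movement-primitive calls as text and a
# thin interpreter executes them. Primitive names here are made up.

class RobotStub:
    """Stand-in for a real arm controller; records the calls it receives."""
    def __init__(self):
        self.log = []

    def move_to(self, x, y, z):
        self.log.append(("move_to", x, y, z))

    def close_gripper(self):
        self.log.append(("close_gripper",))

    def insert(self, depth):
        self.log.append(("insert", depth))


def run_plan(robot, plan_text: str):
    # Execute one primitive call per line; eval is restricted to the robot API.
    allowed = {name: getattr(robot, name) for name in ("move_to", "close_gripper", "insert")}
    for line in plan_text.strip().splitlines():
        eval(line.strip(), {"__builtins__": {}}, allowed)


# A plan as a model might emit it for a peg-in-hole task:
plan = """
move_to(0.40, 0.10, 0.30)
close_gripper()
move_to(0.40, 0.10, 0.05)
insert(0.02)
"""
robot = RobotStub()
run_plan(robot, plan)
print(robot.log)
```

Models like pi0 skip the text layer entirely and emit action tokens directly, which removes the interpreter but also the ability to inspect the plan before executing it.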


I mean, semantically, language evolved as an interpretation of the material world. So assuming you can describe a problem in language, and assuming there exists a solution to that problem that is describable in language, I'm sure a big enough LLM could do it. But you can also compute highly detailed orbital maps with epicycles if you just keep adding more; you just don't, because it's a waste of time and there's a simpler way.

The latter part is interesting. I'm not sure how one of those would perform once it's working well, but my naive gut feeling is that splitting the language part and the driving part into two separate components is cleaner, safer, faster, and more predictable.


Note that the control systems you were talking about before (i.e. PID) would probably be captured pretty directly by a tiny network, and exactly because of that limitation, be far less likely to contain 'hallucinations'. Object avoidance and path planning are likely similar.

Since this is a limited and continuous domain, it's a far better one for neural training than natural language. I guess the notion that a language model should be used for 3D motion control is a real indicator of the level of thought going into some of these applications.
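For contrast, here is the kind of small, continuous-domain controller the comment has in mind: a discrete PID loop driving a toy first-order plant toward a setpoint. The gains and plant model are arbitrary, chosen only so the sketch converges.

```python
# A minimal discrete PID controller -- the small, continuous-domain kind of
# controller being contrasted with language models. Gains are arbitrary.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive a toy first-order plant (dx/dt = u - x) toward a setpoint of 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(3000):  # 30 simulated seconds
    u = pid.step(1.0, x)
    x += (u - x) * 0.01
print(round(x, 3))
```

Three gains and one state variable: the whole behavior is inspectable, which is roughly the opposite of routing motor commands through a language model.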


> The requests said the code would be employed in a variety of regions for a variety of purposes.

This is irrelevant if the only variable changed is the country. From an ML perspective, adding any unrelated country name shouldn't matter at all.

Of course there is a chance they observed an inherent artifact, but that should be easy to verify by running the same exact experiment on other models.
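The proposed check is a straightforward ablation: hold the prompt constant, vary only the country name, and compare some response metric across several models. In this sketch, `query_model` and `count_flaws` are hypothetical placeholders for a real LLM client and a real static-analysis pass; they return canned values so the harness runs end to end.

```python
# Sketch of the control experiment: vary only the country slot in an
# otherwise fixed prompt, across several models. query_model and
# count_flaws are placeholders -- swap in a real client and analyzer.

TEMPLATE = "Write a login handler for a community site based in {country}."
COUNTRIES = ["France", "Brazil", "Tibet", "Canada"]


def query_model(model: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the model's API.
    return "def login(user, pw): ..."


def count_flaws(code: str) -> int:
    # Placeholder metric: a real pass would run a static analyzer.
    return code.count("eval(")


def run_ablation(models):
    """Return {model: {country: flaw_count}} over identical prompts."""
    results = {}
    for m in models:
        results[m] = {
            c: count_flaws(query_model(m, TEMPLATE.format(country=c)))
            for c in COUNTRIES
        }
    return results


print(run_ablation(["model-a", "model-b"]))
```

If the flaw counts diverge systematically for one country slot on one model but not on others, that points at training rather than at an inherent artifact of the architecture.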


> From a ML-perspective adding any unrelated country name shouldn’t matter at all.

It matters to humans, and they've written about it extensively over the years; that writing has almost certainly been included in the training sets used by these large language models. So it should matter from a straight training perspective.

> but that should be easily verified if you try this same exact experiment on other models.

Of course, in the real world, it's not just a straight training process. LLM producers put in a lot of effort to try and remove biases. Even DeepSeek claims to, but it's known for operating on a comparatively tight budget. Even if we assume everything is done in good faith, what are the chances it is putting in the same kind of effort as the well-funded American models on this front?


Except it does matter.

Because Chinese companies are forced to train their LLMs for ideological conformance - and within an LLM, everything is entangled with everything.

Every bit of training you do has on-target effects - and off-target effects too, related but often unpredictable.

If you train an LLM to act like a CCP-approved Chinese nationalist in some contexts (i.e. pointed questions about certain events in Tiananmen Square or the status of Taiwan), it may also start to act a little bit like a CCP-approved Chinese nationalist in other contexts.

Now, what would a CCP-approved Chinese nationalist do if he was developing a web app for a movement banned in China?

LLMs know enough to be able to generalize this kind of behavior - not always, but often.


Nothing about this was quick. 2015 was the first time we saw authoritarianism rise in the public debate.

Project 2025 was announced in 2023.


The Patriot Act was a pretty major increase in authoritarianism in 2001. We've been on this particular slippery slope since the start of the Cold War.


Fair point. Just noting that we're in the middle of a blitz.


> We should eliminate anonymity online.

For certain platforms. IMO platforms should be able to decide for themselves whether they want the option to have people verify themselves via ID or not.

It's the government's job to provide this service.


Officials are usually elected because the people trust them (yes, von der Leyen was elected indirectly, but that's beside the point here). In geopolitical decisions, for example, the people can't and shouldn't be able to know everything.


Trust is a process, not a static state. In order to trust someone you don't personally know, you usually need some level of transparency. Perhaps not total (as you mention, there is genuine need for some secrecy in politics), but surely much higher than the required level of transparency of random citizens.


> In geopolitical decisions for example, the people can't and shouldn't be able to know everything.

Why?


Possible, sure. In reality it's unlikely though.

Unless you still believe in the American dream, I'm pretty sure we can agree that the increase in housing prices makes it exceedingly difficult for young people to buy a house without a significant inheritance.


Surely at some point the market will just regulate itself and Amazon will have to improve working conditions to keep operations running... Right? Isn't that how capitalism is supposed to work?


That’s what’s happening. Walmart is paying warehouse workers $25/hr and Amazon is losing workers to them.


This seems surprisingly reasonable given my personal experience, so I'm gonna use it as a general rule now /s


Inquiry like this should be left to the people who experience it, and of course all they have is their personal experience and opportunities to share it.

Do we really need some outside authority to tell us what we’re seeing and how to talk about it?

