Hacker News | steve1977's comments

We all know that a thousand parentheses would be a better metric.

> Who is using native UI in 2026?

Swift. Which is similar to Rust in some ways actually.


Close, but not quite.

The first letter was the recorder used for the initial recording (say a Studer A800 as an example of an analog multitrack, or DASH as an example of a digital one).

The second letter was the recorder for the mixdown, i.e. usually some 2-channel system like an analog ATR-102 or Studer A80 or a digital DAT.

The third letter was the recorder for the master, which for CD by definition was always digital. In the early days usually a Sony U-matic, which funnily enough was an analog video tape format which got reused for digital audio (and is the reason for the odd 44.1 kHz sampling rate of the CD).
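The 44.1 kHz figure falls out of the video timing arithmetic. A quick sketch, using the commonly cited usable-line counts for the U-matic-based PCM adaptors (these numbers are background knowledge, not from this thread):

```python
# Storing 3 audio samples per usable video line on a U-matic-based
# PCM adaptor yields the same rate for both video standards:

# NTSC: 60 fields/s x 245 usable lines per field x 3 samples per line
ntsc_rate = 60 * 245 * 3

# PAL: 50 fields/s x 294 usable lines per field x 3 samples per line
pal_rate = 50 * 294 * 3

print(ntsc_rate, pal_rate)  # 44100 44100
```

Which is why a rate that looks arbitrary today made perfect sense on video tape hardware.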

Edit:

The code was actually always considered a bit meaningless.

For example, you could record on a digital DASH, but mix on an analog SSL console and print the mix to a digital recorder. That would have been a DDD CD.

On the other hand, you could record on an analog A820, mix on a digital Studer desk, print the mix on an analog A80, and that would have been an AAD CD.

So, two codes indicating "pure" digital or "pure" analog, even though both processes used both technologies.

Or record on an ADAT and mix on a Yamaha 02R, which would have been DDD but probably sounded worse than the AAD recorded on a Studer analog tape ;)
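The point above, that the code only looks at the three recorders and completely ignores the mixing console, can be sketched in a few lines (a hypothetical helper, not any official SPARS tooling):

```python
def spars_code(multitrack, mixdown, master):
    """Build a three-letter SPARS-style code from the three recorders.

    Each argument is 'analog' or 'digital'. Note that the mixing
    console never enters the code -- which is exactly why the code
    can be misleading about how a record was actually made.
    """
    letter = {"analog": "A", "digital": "D"}
    return "".join(letter[r] for r in (multitrack, mixdown, master))

# DASH multitrack, mix printed to a digital recorder
# (the analog SSL console in between is invisible to the code):
print(spars_code("digital", "digital", "digital"))  # DDD

# Analog A820 multitrack, mix printed to an analog A80
# (the digital Studer desk is equally invisible):
print(spars_code("analog", "analog", "digital"))  # AAD
```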


> Sony U-matic

3/4" tape, and it was the only tape format that had the take-up reel on the left.


So not much has changed really?

Yep, pretty much no difference between 1800s chattel slavery and having to work in an office.

It's a demonstration of power. Which is exactly why it needs fighting against, because these people (i.e. Dyson) must not have power.

Not actually Dyson, one of their parts suppliers.

The significance of this ruling is that a British company can be held liable for its suppliers' treatment of workers in another country.


To add, what I wrote in parent is very brief and superficial. There is at least one comment here with more detail about when they can be liable, and why Dyson was liable in this case.

But why only demonstrate power over 12 people and not the alleged 1200+ that work there?

Tell me when Justice condemns a corrupt billionaire to piss himself.

Windows NT is not VMS! Trust me!

Had to Google this but I do love a deep cut reference!

https://www.itprotoday.com/server-virtualization/windows-nt-...


From what I understand, it's more like "input is 1, 3, 5, 7" so "output is likely to be 9".

Understanding would be a bit generous of a term for that I guess, but that also depends on the definition of understanding.
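That "continue the pattern" idea can be illustrated with a toy frequency-count predictor (nothing like a real Transformer, just the crudest possible version of "predict the likely next token"):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which token follows which; a crude stand-in for learned weights."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the continuation seen most often in training."""
    return follows[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat' -- seen twice after 'the'
```

Whether "pick the statistically likely continuation" deserves the word "understanding" is exactly the question being debated above.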


I'd really invite people to read the Google blog post: https://research.google/blog/transformer-a-novel-neural-netw...

Google chose the word understanding.


Google chose "understanding" in that context, because the relevant AI/ML task is called "Natural language understanding". But that term is an aspiration. It's the problem of trying to reveal the "meaning" of text data (language) as in making sense of the symbols with computers.

Just because Transformers work well on the "Natural language understanding" task in AI, doesn't mean that a Transformer actually "understands" language in the human sense.


Thanks for the link, I will read it. But keep in mind that Google wants to sell us something.

At the time, it was a free language translation tool. You weren't paying for Transformers in 2017.

True, but that doesn't mean that Google did not already have intentions to monetize it if possible.

You would think, wouldn’t you?

And yet they waited until ChatGPT was a thing and threw Bard together overnight in response.


Fair point ;)

The task is language understanding. The tool is amazing. Pianos are amazing. The task is to create music. The process is to transform movement to sound. They don't understand music.

I think much along the same lines. LLMs are probably even just a part of the language center.

And of course they also miss things like embodiment, mirror neurons etc.

If an LLM makes a mistake, it will tell you it is sorry. But does it really feel sorry?


> But does it really feel sorry?

And what does it mean to feel sorry? Beyond fallible and imprecise human introspective notion of "sorry", that is. A definition that can span species and computing substrates. A deanthropomorphized definition of "sorry", so to speak.


> Predict the next word is a terrible summary of what these machines do though, they certainly do more than that

What would that be?


They generate text based on quite a large context, including hidden prompts we don’t see and their weights are distorted heavily by training. So I think there’s a lot more than a simple probability of word x coming next. That makes ‘predict next word’ a reductive summary IMO.

I do not personally feel it resembles thinking or reasoning though and really object to that framing because it is misleading many people.


> their weights are distorted heavily by training

What does that even mean? Their weights are essentially created by training. There aren't some magic golden weights that are then distorted.


I may be using the wrong terms, my impression was:

1. Weights in the model are created by ingesting the corpus

2. Techniques like reinforcement learning, alignment etc are used to adjust those weights before model release

3. The model is used and more context injected which then affects which words it will choose, though it is still heavily biased by the corpus and training.

That could be way off base though, I'd welcome correction on that.

The point I was trying to make though was that they do more than predict the next word based on just one set of data. Their weights can encode entire passages of source material from the training data (https://arxiv.org/abs/2505.12546), including books and programs. This is why they are so effective at generating code snippets.

Also text injected at the last stage during use has far less weight than most people assume (e.g. https://georggrab.net/content/opus46retrieval.html) and is not read and understood IMO.

There are a lot of inputs nowadays and a lot of stages to training. So while I don't think they are intelligent I think it is reductive to call them next token predictors or similar. Not sure what the best name for them is, but they are neither next word predictors nor intelligent agents.


That extended explanation is more accurate, yes. I'd call your points 1 and 2 both training under the definition "anything that adjusts model weights is training". There are multiple stages and types of training. Right now AFAIK most (all) architectures then fix the weights and you have non-weight-affecting steps like the system prompt, context, etc.
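That fixed-weights-plus-varying-context picture can be caricatured in a few lines. The "model" here is a deliberately silly word-count rule standing in for frozen trained weights; it bears no resemblance to a real LLM, but shows the distinction: nothing about the model changes at inference time, yet different contexts yield different next-token distributions:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over next tokens."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def score(context, vocab):
    # Toy stand-in for a frozen trained model: this rule (the "weights")
    # never changes at inference time; only the context input varies.
    return [float(context.split().count(w)) for w in vocab]

vocab = ["yes", "no", "maybe"]
probs_a = softmax(score("yes yes no", vocab))
probs_b = softmax(score("no no no maybe", vocab))
print(probs_a != probs_b)  # True: same "model", different context
```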

You're right that the weights can enable the model to memorize training data.


Alignment scrubs the underlying raw output to be socially acceptable. It's an artificial superego.

I was under the impression it is a part of training which adjusts weights before release.

Are you saying it is a separate process which scrubs output before we see it?


The best example is probably the new "Outlook", and I put that name in quotes intentionally.

In case anyone is not aware:

20231109 https://news.ycombinator.com/item?id=38212453 Windows 11 Update 23H2 is stealing users' IMAP credentials (666 points, 278 comments)

> the new Outlook is a thin wrapper around the cloud version, so the IMAP sync happens in the cloud, not locally


This was one of the most outrageous data grabs of the past few years. They replaced the perfectly working, simple Mail app, which I used until that point, with this garbage, and I was just lucky that I paid attention and stopped for a second to read the warning telling you that they grab literally all of your emails.

Btw, just before that I found this page regarding Edge, and this is why I paid more attention to these things: https://learn.microsoft.com/en-us/legal/microsoft-edge/priva...

That list is way too long for my taste, and it really indicated to me that Windows had become completely adversarial.




“Diverse”? Wanna expand on that one, buddy? You think you’re being subtle?
