The first letter was the recorder used for the initial recording (say, a Studer A800 as an example of an analog multitrack, or DASH as an example of a digital one).
The second letter was the recorder for the mixdown, i.e. usually some 2-channel system like an analog ATR-102 or Studer A80 or a digital DAT.
The third letter was the recorder for the master, which for CD by definition was always digital. In the early days this was usually a Sony U-matic, which funnily enough was an analog video tape format that got reused for digital audio (and is the reason for the CD's odd 44.1 kHz sampling rate).
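For the curious, the commonly cited derivation of that rate: the U-matic-based PCM adaptors stored 3 stereo samples per usable video line, and the usable line counts of both video standards happen to land on the same product. A quick sanity check (the field/line figures are the standard ones quoted for the Sony PCM adaptors, not taken from the comment above):

```python
# NTSC: 60 fields/s x 245 usable lines per field x 3 samples per line
ntsc_rate = 60 * 245 * 3
# PAL: 50 fields/s x 294 usable lines per field x 3 samples per line
pal_rate = 50 * 294 * 3
print(ntsc_rate, pal_rate)  # 44100 44100
```

Both standards yield exactly 44,100 samples per second, which is why the rate survived onto the CD itself.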
Edit:
The code was actually always considered a bit meaningless.
For example, you could record on a digital DASH, but mix on an analog SSL console and print the mix to a digital recorder. That would have been a DDD CD.
On the other hand, you could record on an analog A820, mix on a digital Studer desk, print the mix on an analog A80, and that would have been an AAD CD.
So, two codes indicating "pure" digital or "pure" analog, even though both processes used both technologies.
Or record on an ADAT and mix on a Yamaha 02R, which would have been DDD but probably sounded worse than the AAD recorded on Studer analog tape ;)
To add: what I wrote in the parent is very brief and superficial. There is at least one comment here with more detail about when they can be liable, and why Dyson was liable in this case.
Google chose "understanding" in that context because the relevant AI/ML task is called "natural language understanding". But that term is an aspiration: it names the problem of trying to recover the "meaning" of text data (language), i.e. making sense of the symbols with computers.
Just because Transformers work well on the natural language understanding task in AI doesn't mean that a Transformer actually "understands" language in the human sense.
The task is language understanding. The tool is amazing. Pianos are amazing. The task is to create music. The process is to transform movement to sound. They don't understand music.
And what does it mean to feel sorry? Beyond the fallible and imprecise human introspective notion of "sorry", that is. A definition that can span species and computing substrates; a deanthropomorphized definition of "sorry", so to speak.
They generate text based on quite a large context, including hidden prompts we don't see, and their weights are distorted heavily by training. So I think there's a lot more to it than a simple probability of word x coming next. That makes 'predict the next word' a reductive summary, IMO.
I do not personally feel it resembles thinking or reasoning though and really object to that framing because it is misleading many people.
I may be using the wrong terms; my impression was:
1. Weights in the model are created by ingesting the corpus
2. Techniques like reinforcement learning, alignment, etc. are used to adjust those weights before model release
3. The model is used and more context is injected, which then affects which words it will choose, though it is still heavily biased by the corpus and training
That could be way off base though, I'd welcome correction on that.
The point I was trying to make, though, was that they do more than predict the next word based on just one set of data. Their weights can encode entire passages of source material from the training data (https://arxiv.org/abs/2505.12546), including books and programs. This is why they are so effective at generating code snippets.
There are a lot of inputs nowadays and a lot of stages to training. So while I don't think they are intelligent, I think it is reductive to call them next-token predictors or similar. Not sure what the best name for them is, but they are neither next-word predictors nor intelligent agents.
That extended explanation is more accurate, yes. I'd call your points 1 and 2 both training, under the definition "anything that adjusts model weights is training". There are multiple stages and types of training. Right now, AFAIK, most (all) architectures then fix the weights, and you have non-weight-affecting steps like the system prompt, context, etc.
You're right that the weights can enable the model to memorize training data.
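For what it's worth, the mechanical core that "predict the next word" refers to is just a probability distribution over a vocabulary, from which one token is picked. A toy sketch with a made-up vocabulary and made-up scores (not a real model, just the sampling step):

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution that sums to 1
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might emit after "The cat sat on the"
vocab = ["mat", "dog", "moon"]
logits = [3.2, 1.1, 0.4]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding: take the top token
print(next_token)  # mat
```

Everything being debated above (training stages, RLHF, hidden prompts, context) shapes how those scores are produced; the final step really is this simple pick-from-a-distribution loop, repeated one token at a time.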
This was one of the most outrageous data grabs of the past years. They replaced the perfectly working, simple Mail app, which I used until that point, with this garbage, and I was just lucky that I paid attention and stopped for a second at the warning that tells you they grab literally all of your emails.