But with LLMs, is there really more to understand? They're just large functions that take numerical input and transform it into numerical output based on trained weights. There is nothing behind the scenes doing things we don't understand. The magic is in the weights, and we know how to create those weights from training data.
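To make the "just a function of weights" point concrete, here is a toy sketch in plain Python. The weights are made up and tiny; a real LLM has billions of them, but the forward pass is the same idea: numbers in, arithmetic against fixed weights, numbers out.

```python
# Toy "model": nothing but fixed numbers (weights) plus arithmetic.
# Made-up weights for a 3-input, 1-output two-layer network.
W1 = [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.4]]  # 3-in, 2-out layer
W2 = [[1.0], [-0.7]]                          # 2-in, 1-out layer

def matvec(W, x):
    # Multiply input vector x by weight matrix W (columns = outputs).
    return [sum(x[i] * W[i][j] for i in range(len(W)))
            for j in range(len(W[0]))]

def forward(x):
    h = [max(v, 0.0) for v in matvec(W1, x)]  # linear layer + ReLU
    return matvec(W2, h)                       # output "logits"

print(forward([1.0, 0.5, -0.2]))
```

Training is then just a procedure for choosing the numbers in `W1` and `W2` so that `forward` gives useful outputs.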
Regarding the car: if you know how to build a car, you understand how a car works. A driver is more like someone using an LLM, not a developer able to create one.
> But with LLMs is there really more to understand?
Yes! Loads! (: I want to be able to make statements like "this model will never tell the user to kill themselves" and be confident in them, but I can't do that today, and we don't know how. Note that we do know how to prove similar statements about regular software.
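A hedged sketch of what "proving it for regular software" can look like. This toy responder can only ever emit strings from an explicit table, so the safety property holds by construction: if a bad sentence is not in the table, the program cannot say it. No analogous argument is available for an LLM, whose output is sampled from an opaque function of billions of weights.

```python
# Toy rule-based responder. Every possible output is a member of
# REPLIES, so "it never says X" is provable by checking that X is
# not in REPLIES -- a property we cannot establish this way for an LLM.
REPLIES = {
    "greeting": "Hello! How can I help?",
    "farewell": "Goodbye!",
    "unknown":  "Sorry, I don't understand.",
}

def respond(message: str) -> str:
    if "hello" in message.lower():
        return REPLIES["greeting"]
    if "bye" in message.lower():
        return REPLIES["farewell"]
    return REPLIES["unknown"]

# The guarantee, stated as a check: every branch returns from REPLIES.
assert all(respond(m) in REPLIES.values()
           for m in ["hello", "bye", "what?"])
```

Real verification tools (model checkers, proof assistants) scale this style of by-construction argument to much larger programs; the point is only that the output set is inspectable at all.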