These "LLMs cannot be AGI if they don't have a function to get today's date" arguments remind me of laypeople reviewing phone cameras by seeing which camera's saturation they like more.
It's absurd: whether an LLM has access to a function isn't a property of the LLM itself, so it's irrelevant. But people use it because LLMs make them feel bad somehow, and they'll clutch at any straw.
> It's absurd, whether an LLM has access to a function isn't a property of the LLM itself
But the LLM making up an answer when it lacks that function is a property of the LLM itself. It lacks the kind of introspection that would be required to handle such questions.
Now, the current date is such a common question that you see a lot of trained responses for it, but LLMs make similar mistakes on all sorts of questions they have no way of answering. And even a trained LLM still makes mistakes like that, since, for example, stories often state a date other than the date they were written. A human who is asked knows this isn't a book or a science report, but an LLM doesn't.
If you ask someone with Alzheimer's what year it is, you'll get a confident answer of 1972. Would you class people suffering from Alzheimer's as non-intelligent?
> Would you class people suffering from Alzheimer's as non-intelligent?
Yes, I don't think they are generally intelligent any more; for that you need to be able to learn and remember. I think they can have some narrow intelligence, though, based on stuff they have learned previously.
No straws to clutch here. I've made such functions (and others) available to LLMs to implement some great functionality that would otherwise not have been possible, and they do a relatively good job. One of the issues is that they're not really reliable or deterministic: what the LLM does, or is capable of, today might not be what it does tomorrow, or with ever so slightly different context added via the user's prompts.
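For anyone unfamiliar with what "making a function available" means in practice, here's a minimal sketch. It assumes the common JSON function-calling convention (a tool schema the model sees, plus a dispatcher your code runs when the model emits a tool call); the name `get_current_date` and the dispatcher shape are just illustrative, not any particular vendor's API.

```python
from datetime import date
import json

# Tool description shown to the model alongside the prompt, in the
# widely used JSON-schema function-calling style (illustrative only).
GET_CURRENT_DATE_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_date",
        "description": "Return today's date in ISO 8601 format (YYYY-MM-DD).",
        "parameters": {"type": "object", "properties": {}},
    },
}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Run the tool the model asked for and return the result as text.

    The result string gets appended to the conversation so the model
    can answer from real data instead of guessing.
    """
    args = json.loads(arguments_json)  # this tool takes no arguments
    if name == "get_current_date":
        return date.today().isoformat()
    raise ValueError(f"Unknown tool: {name}")

# Simulate the model emitting a tool call instead of hallucinating a date.
print(dispatch_tool_call("get_current_date", "{}"))
```

The point of the thread stands either way: with the dispatcher the answer is grounded; without it, the model will still confidently produce *some* date.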
You are correct that the date thing by itself, if it were the only thing, would not be such a big deal.
But the date thing, confidently telling me the wrong date, is a symptom and a stand-in example of what LLMs will do in way too many situations, and regular people don't understand this. Like I said, not-very-intelligent but confident people will do the same thing. But with people you generally have a "BS meter" and a trust level. If you ask a random stranger on the street what time it is and they confidently tell you it's exactly 11:20:32 a.m. without looking at their watch or phone, you know it's 99.99% BS. (Again, just a stand-in example; replace it with "Give me a timeline of the most important things that happened during WWII on a day-by-day basis" or whatever you can come up with.) Yet people trust the output of LLMs with answers to questions where the user has no real way to know where on the BS meter the answer ranks. And they just believe them.
Happened to me today at work. An LLM very confidently made up large swaths of data because it "figured out" that our test env was using Star Trek universe characters and objects for test data. That had no basis in reality, and it basically had to ignore almost all of the data we actually returned from one of these "get the current date" type functions we make available to it.