Thinking is what biological brains do when they create new ideas from old thoughts and inputs.
LLMs can take old ideas and inputs, as text, and produce text that turns into useful new ideas when a human reads it. Recent LLMs do this in a meaningful way: they bullshit far less than older models and produce genuinely useful criticism and suggestions. The reader does not do the thinking needed to create the new idea; they just decode the text into it.
So either genuinely meaningful new ideas can be created without thinking, or the LLM is doing a kind of artificial thinking.
Critics will say that we may as well argue that bones can think, because casting bones in a cup influences the prediction in a soothsayer's mind. But the words produced by LLMs, especially the more capable ones, are far more meaningful and thought-like than bones in a cup. They can clearly advance a line of thinking in a way analogous to how a brain advances a line of thinking.
Therefore, it's reasonable to say LLMs are capable of limited artificial thought. They can effectively process thoughts represented externally, outside human heads.
Maybe we should call this co-thinking, because it still requires a human as the last mile of the loop, to turn the result back into a real thought.