I think we miss the point in trying to give it qualities that it doesn't have (and that we don't either). Most of our basis of operation actually comes from school, where we initially learn recipes. We are taught rules of reasoning, which are recipes. It's a duck because it looks like a duck and quacks like a duck. The answer is 5 because the set of 2 objects plus the set of 3 objects is the same as 5 objects. Here are your multiplication tables - memorize them.
A large part of what we consider intelligence in ourselves is matching situational patterns to things we've seen. Now I'm not going to say that we aren't intelligent, but I will say that the way we train LLMs through rote memorization isn't a whole lot different, and it achieves much the same result.
A fair amount of intellectual energy is being burned trying to disprove intelligence, but to paraphrase a famous quote, "a lot of problems are considered AI problems until we solve them, and then they aren't considered AI anymore". However we eventually arrive at something that has or mimics intelligence, it is likely not to look like what we ourselves use, just as aircraft flight doesn't look like flapping wings.
Is an LLM intelligent? Probably not. Does it make mistakes? Most assuredly. But can it do some tasks that we consider intelligent human tasks better than humans, with fewer mistakes? Definitely. We need to keep THAT in mind as we explore this space and discuss how we quantify what true intelligence is.
Detective Spooner: Can a machine make a great work of art?
Sonny: Can you?