Are you intelligent or just a bunch of cells? Given that I can query it for all sorts of information that I don't know, I would consider LLMs, at the very least, to contain and present intelligence… artificially.
I can query Wikipedia or IMDB for all sorts of information I don't know. I wouldn't consider the search box of either site to be "intelligent", so I don't know that "can query it for all sorts of information" is a generally good rubric for intelligence.
How do we know that the brain is a statistical model of the world? It sounds like explaining an unknown phenomenon using the technology du jour - just 10 or 20 years ago, the brain was a computer.
This touches on a dichotomy that has fascinated me for decades, from the very beginning of my interest in AI.
One side of the dichotomy asserts that "if it walks like a duck..." that is, if a computer appears to be intelligent to us, then it must be intelligent. This is basically the Turing Test crowd (even though Turing himself didn't approve of the Turing Test as an actual test of AI).
On the other side, you have people who assert that the human mind is really just a super-complicated version of "X", where "X" is whatever the cool new tech of the day is.
I have no conclusions to draw from this sort of thing, aside from highlighting that we don't know what intelligence or consciousness actually are. I'm just fascinated by it.
The general notion is called "lumpers" and "splitters".
From the perspective of software, the lumpers are pretty much always wrong except when they get a lucky guess. Think of a pointy-haired boss who weaponizes his wishful thinking with a brutal dismissal of all implementation details and imposes ignorantly firm deadlines, or an architecture astronaut who writes and forces upon everyone cruel interfaces and classes that are thoroughly out of touch with reality.
As they say: "it's easier to lump splits than split lumps". The people who insist the statistical models have emergent behavior, or even worse, equate them with human brains are "lumpers" who lack imagination and have no desire to truly understand and model these things. They naively seek out oversimplifications and falsely believe they're applying Occam's Razor, but they're actually just morons. "Splitters" are by their very definition always technically correct, but create complex distinctions that represent either much deeper knowledge than necessary, or hallucination. Either way, both types are needed, and of course, society values the lumpers far more for essentially playing the lottery with their reputations by telling people what they want to hear.
I wouldn't say the brain is magic, just that we still don't know what consciousness and intelligence is. Could the complex emergent behaviour we call intelligence emerge from a statistical model? Maybe. Can we gain more insights on what intelligence is by studying these models? Definitely. On the other hand — Are there limits to large language models' capabilities that we haven't reached yet?
I don’t think we know that. The point of my comment is to poke a bit at human exceptionalism. I think we’re going to see something that’s hard to deny is intelligent come out of a combination of a world model and an RL agent within the next decade. But I’m sure some will try to keep moving the goalposts.
It's interesting to see what it thinks about some ideas. For example, I ask: which 5 companies are best at marketing? My goal here is to be hypercritical of the companies it names, because they are masters at manipulation. GPT3.5 was awful and confused advertising with marketing. GPT4 was perfect (Apple, Nike, Coke, Amazon, P&G).
As much as ChatGPT doesn't want to give you answers because of the fuzziness, it has the ability to make judgements on things like "this is the best" or "this is the worst".
In this example, it likely picked up that those companies are often praised for their marketing in the same sentences where marketing is mentioned.
LLMs don't repeat text they've seen before; they link words/tokens/phrases that are related. It's prediction, but the prediction isn't just copy-pasting a previous webpage.
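A toy sketch of that idea (this is a bigram model, vastly simpler than an LLM, and the corpus is made up): even a model that only stores which word follows which can generate a sentence that appears in neither of its training sentences, so generation is not lookup of stored text.

```python
from collections import defaultdict

# Hypothetical two-sentence training corpus.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

# Record which words can follow each word (a bigram model).
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

# "the dog sat on the mat" is in neither training sentence,
# yet every adjacent word pair in it is a valid bigram,
# so the model can generate it: novel output, not a copy.
novel = "the dog sat on the mat".split()
assert " ".join(novel) not in corpus
assert all(b in follows[a] for a, b in zip(novel, novel[1:]))
print("novel sentence is reachable by the model")
```

Real LLMs link tokens through learned high-dimensional representations rather than a lookup table, but the same point holds: the output is assembled by prediction, not retrieved verbatim.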
Have you used ChatGPT yet? I wouldn't delay. Heck, you're here on HN; you basically have a responsibility to test it.
Well, one clear mark against GPT4's intelligence is that it doesn't learn in situ. Knowledge has to be added to it via an external process. The prompt does allow it to condition further output on "new" information, but that isn't learning. Another thing GPT4 has trouble with is generalizing knowledge. While it is certainly able to generalize to a degree (more or less, it can apply patterns in the training data from one domain to other domains), if you ask it to generalize to things not well represented in the training data but nevertheless obvious from their conceptual underpinnings, it fails. I see this frequently with complicated functional/function-level programming. GPT4 gets hopelessly confused when you ask it about non-trivial functions which return or manipulate other functions, even though conceptually there is nothing confusing about them and, in fact, if you ask it about functions as first-class objects, it can answer with reasonable text.
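For readers unfamiliar with the term, here is a small, hypothetical example of the kind of "functions which return or manipulate other functions" meant above (the names `compose` and `add_n` are invented for illustration):

```python
def compose(f, g):
    """Return a new function that applies g, then f."""
    return lambda x: f(g(x))

def add_n(n):
    """Return a function that adds n to its argument."""
    return lambda x: x + n

# Build a function from other functions: increment, then double.
inc_then_double = compose(lambda x: x * 2, add_n(1))
print(inc_then_double(3))  # (3 + 1) * 2 = 8
```

Conceptually there is nothing deep here, functions are just values that can be passed around and combined, which is why failure on this kind of question is telling.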
Thus, GPT4 can appear to have knowledge in the sense of generating text indicating such, but fail to use that knowledge. This is the most compelling indication to me of limited or total lack of intelligence. I believe that the vast majority of GPT4's "capabilities" amount to memorization and permutation, not the formulation of accurate models of things.
This is exactly it for me.