How do we know that the brain is a statistical model of the world? It sounds like explaining an unknown phenomenon using the technology du jour - just 10 or 20 years ago, the brain was a computer.
This touches on a dichotomy that has fascinated me for decades, from the very beginning of my interest in AI.
One side of the dichotomy asserts that "if it walks like a duck...": that is, if a computer appears intelligent to us, then it must be intelligent. This is basically the Turing Test crowd (even though Turing himself didn't approve of the Turing Test as an actual test of AI).
On the other side, you have people who assert that the human mind is really just a super-complicated version of "X", where "X" is whatever the cool new tech of the day is.
I have no conclusions to draw from this sort of thing, aside from highlighting that we don't know what intelligence or consciousness actually are. I'm just fascinated by it.
The general distinction here is the one between "lumpers" and "splitters".
From the perspective of software, the lumpers are pretty much always wrong, except when they get a lucky guess. Think of a pointy-haired boss who weaponizes his wishful thinking, brutally dismisses all implementation details, and imposes ignorantly firm deadlines, or an architecture astronaut who writes cruel interfaces and classes that are thoroughly out of touch with reality and forces them on everyone.
As they say: "it's easier to lump splits than to split lumps". The people who insist that statistical models have emergent behavior, or worse, equate them with human brains, are "lumpers" who lack imagination and have no desire to truly understand and model these things. They naively seek out oversimplifications and falsely believe they're applying Occam's Razor, but they're actually just morons. "Splitters" are by definition always technically correct, but they create complex distinctions that represent either much deeper knowledge than necessary or outright hallucination. Either way, both types are needed, and of course society values the lumpers far more for essentially playing the lottery with their reputations by telling people what they want to hear.
I wouldn't say the brain is magic, just that we still don't know what consciousness and intelligence are. Could the complex emergent behaviour we call intelligence emerge from a statistical model? Maybe. Can we gain more insights into what intelligence is by studying these models? Definitely. On the other hand: are there limits to large language models' capabilities that we haven't reached yet?
I don't think we know that. The point of my comment is to poke a bit at human exceptionalism. I think we're going to see something that's hard to deny is intelligent come out of a combination of a world model and an RL agent within the next decade. But I'm sure some will try to keep moving the goalposts.