You can boil LLMs down to "next token predictors". But that's like boiling the human brain down to "synapses firing".
The point OP is making, I think, is that we don't understand how "next token prediction" leads to emergent complexity.
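To be concrete about what "next token prediction" means mechanically, here's a minimal sketch; the bigram table, token names, and functions are made up for illustration, and a real LLM replaces the table lookup with a learned network conditioned on the whole context:

```python
import random

# Hypothetical toy "model": which tokens tend to follow which (illustrative only).
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran", "sat"],
    "sat": ["down"],
    "ran": ["away"],
}

def predict_next(token: str) -> str:
    """Pick a plausible next token given only the current one."""
    return random.choice(BIGRAMS.get(token, ["<end>"]))

def generate(start: str, max_tokens: int = 5) -> list[str]:
    """Autoregressive loop: each predicted token is fed back in as context."""
    tokens = [start]
    for _ in range(max_tokens):
        nxt = predict_next(tokens[-1])
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("the")))  # e.g. "the cat sat down"
```

The loop itself is trivially understandable; the open question is how scaling that same loop up produces the behaviors people call emergent.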
It seems clear you don't want to have a good-faith discussion.
You're the one claiming that we understand how LLMs work, while the researchers who built them say that we ultimately don't.