
It's not an insane thing to say.

You can boil LLMs down to a "next token predictor", but that's like boiling the human brain down to "synapses firing".

The point OP is making, I think, is that we don't understand how "next token prediction" leads to such emergent complexity.
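For anyone who hasn't seen it spelled out: "next token prediction" really is just a loop that repeatedly asks the model for the most likely continuation and feeds it back in. A minimal sketch below, assuming the Hugging Face transformers library and the public gpt2 checkpoint with greedy decoding (all illustrative choices, not anything OP specified):

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Load a small public causal language model (illustrative choice).
  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  model.eval()

  # Start with a prompt; the model only ever sees "everything so far".
  input_ids = tokenizer("The ELIZA effect is", return_tensors="pt").input_ids

  with torch.no_grad():
      for _ in range(20):
          logits = model(input_ids).logits        # shape: (1, seq_len, vocab_size)
          next_id = logits[0, -1].argmax()        # greedy: pick the most likely next token
          input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

  print(tokenizer.decode(input_ids[0]))

The whole debate is about how a loop this simple, scaled up, produces behaviour that looks like reasoning; the loop itself is not the mysterious part.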





The only thing we don't fully understand is how the ELIZA effect[0] has been known for 60 years, yet people keep falling for it.

[0] https://en.wikipedia.org/wiki/ELIZA_effect


> The only thing we don't fully understand is

It seems clear you don't want to have a good-faith discussion.

You're the one claiming that we understand how LLMs work, while the researchers who built them say that we ultimately don't.



