Hacker News

When you type a calculation into a calculator and it gives you an answer, do you say the calculator thinks of the answer?

An LLM is basically the same as a calculator, except instead of giving you answers to math formulas it gives you a response to any kind of text.



My hope was to shift the conversation away from people disagreeing about words to people understanding each other. When a person reads e.g. "an LLM thinks" I'm pretty sure that person translates it sufficiently well to understand the sentence.

It is one thing to use anthropocentric language to refer to something an LLM does. (Like I said above, this is shorthand to make conversation go smoother.) It would be another to take the words literally and extend them -- e.g. to assign other human qualities to an LLM, such as personhood.


In what ways do humans differ when they think?


Humans think all the time (except when they're watching TV). An LLM only "thinks" while it is streaming a response to you, and then it promptly forgets you exist. Then you send it your entire chat history and it "auto-fills" the next part of the conversation and streams it to you.
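The statelessness described above can be sketched in a few lines of Python. This is illustrative only: `complete` is a hypothetical stand-in for a chat-completion endpoint, not a real SDK call, but the shape (client resends the full message list every turn) matches how typical chat APIs work.

```python
def complete(messages):
    """Stand-in for an LLM endpoint. It sees ONLY the messages passed in
    on this call; nothing is remembered between calls."""
    last = messages[-1]["content"]
    return {"role": "assistant", "content": f"echo: {last}"}

# The client holds the state, not the model.
history = [{"role": "user", "content": "hello"}]
history.append(complete(history))  # the model "thinks" only during this call

history.append({"role": "user", "content": "what did I just say?"})
# To get a coherent answer, the ENTIRE history is resent every turn:
history.append(complete(history))

print(len(history))  # 4 -- the client carries the whole conversation
```

Between the two calls the "model" retains nothing; continuity exists only because the client replays the transcript.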


What are we debating? Does anyone know?

One claim seems to be “people should cease using any anthropocentric language when describing LLMs”?

Most of the other claims seem either uncontested or a matter of one’s preferred definitions.

My point is more of a suggestion: if you understand what someone means, that’s enough. Maybe your true concerns lie elsewhere, such as: “Humanity is special. If the results of our thinking differentiate us less and less from machines, this is concerning.”


I don't need to feel "special". My concerns are around the people who (want to) believe their statistical models to be a lot more than they really are.

My current working theory is there's a decent fraction of humanity that has a broken theory of mind. They can't easily distinguish between "Claude told me how it got its answer" and "the statistical model made up some text that looks like reasons but has nothing to do with what the model actually does".


> ... a decent fraction of humanity ... can't easily distinguish between "Claude told me how it got its answer" and "the statistical model made up some text that looks like reasons but has nothing to do with what the model actually does".

Yes, I also think this is common and a problem. Thanks for stating it clearly! Though I'm not sure it maps to what others in the thread were trying to convey.


If people think LLMs and humans are equal, people will treat humans the way they treat LLMs.


Looking over the comment chain as a whole, I still have some questions. Is it fair to say this is your main point?...

> Also, Claude doesn’t “think” anything, I wish they’d stop with the anthropomorphizations.

Parsing "they" above leads to some ambiguity: who do you wish would stop? Anthropic? People who write about LLMs?

If the first (meaning you wish Claude was trained/tuned to not speak anthropomorphically and not to refer to itself in human-like ways), can you give an example (some specific language hopefully) of what you think would be better? I suspect there isn't language that is both concise and clear that won't run afoul of your concerns. But I'd be interested to see if I'm missing something.

If the second, can you point to some examples of where researchers or writers do it more to your taste? I'd like to see what that looks like.


Wait, we went from "they don't think" to "they only think on demand"?


Since we have no idea how humans think, that's a pretty unfair and unanswerable question.

Humans wrote LLMs, so it's pretty fair to say one is a lot more complex than the other lol


> Humans wrote LLMs, so it's pretty fair to say one is a lot more complex than the other

That's not actually a logical position, though, is it? And either way, I'm not sure "less complex" and "incapable of thought" are the same thing.



