Hacker News

On one hand you have 1980s symbolic AI that can produce true, coherent statements, and on the other you have GPT-2 which can produce really nifty gibberish. If you could put the two together maybe that would be some progress. I can see that getting transformers to emulate symbolic math might be a step in that direction.
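For context, the "transformers emulating symbolic math" approach treats expressions as token sequences: an expression tree is serialized (e.g. in prefix notation), and the model learns a seq2seq mapping such as expression → its derivative or integral. A minimal sketch of that serialization step, assuming a toy tuple-based expression tree (the tree format and function name here are illustrative, not from any particular paper's code):

```python
def to_prefix(node):
    """Flatten a nested-tuple expression tree into a prefix token list.

    Leaves are strings (variables/constants); internal nodes are
    tuples of (operator, *operands).
    """
    if isinstance(node, str):
        return [node]
    op, *args = node
    tokens = [op]
    for arg in args:
        tokens += to_prefix(arg)
    return tokens

# 2*x + x^2 as a toy tree:
expr = ("add", ("mul", "2", "x"), ("pow", "x", "2"))
print(to_prefix(expr))  # ['add', 'mul', '2', 'x', 'pow', 'x', '2']
```

The point is that once both input and output expressions are linearized this way, "doing symbolic math" reduces to sequence translation, which is exactly what transformers are trained for; whether that constitutes reasoning is what the rest of the thread argues about.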


That's not reasoning, though. It's not even an expert system or Prolog (adversarial examples exist that break it). It's very close to a million monkeys typing.


Where do 100B neurons firing fit on that spectrum?



