
Prediction: Fuck no.

AI is unreliable as it is. It might make formal verification a bit less work-intensive, but the last place anyone would want AI hallucinations is in the verification itself.

The whole point I think, though, is that it doesn’t matter. If an LLM hallucinates a proof that passes the proof checker, it’s not a hallucination. Writing and inspecting the spec is unsolved, but for the actual proof checking hallucinations don’t matter at all.
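As a concrete illustration (a hypothetical Lean 4 sketch, not from any particular project): the checker either accepts a proof or rejects it, regardless of whether a human or an LLM produced it.

    -- A correct proof: the checker accepts it no matter who (or what) wrote it.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A hallucinated "proof" simply fails to check (uncomment to see the error):
    -- theorem bogus (a b : Nat) : a + b = b + a := rfl
    --   rejected: `rfl` does not prove this goal for arbitrary a and b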

Isn't the actual proof checking what your traditional formal verification tool does? I would have thought that 99% of the work of formal verification is writing the spec and verifying that it correctly captures the problem you are trying to model.

The specification task is indeed a lot of work. Driving the tool to complete the proof is often also a lot of work. There are many fully automatic proof tools; even the simplest, like SAT solvers, run into hard computational-complexity limits. Many "interactive" provers are more expressive and allow a human to guide the tool toward the proof. That takes intuition, engineering, etc.
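A toy sketch of that guidance in Lean 4 (a hypothetical example, assuming a recent toolchain where the `omega` tactic is available): the human chooses the induction and the rewrites, and the kernel checks every step.

    theorem two_mul_eq_add (n : Nat) : 2 * n = n + n := by
      induction n with
      | zero => rfl
      | succ k ih =>
        -- human-chosen rewrites reduce the goal to linear arithmetic
        rw [Nat.mul_succ, ih]
        -- a decision procedure finishes it; harder goals need far more guidance
        omega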


