I can't tell if this is sarcasm, but if not: you can't rely on the thing that produced invalid output to validate its own output. That is fundamentally insufficient, even if it catches some errors.
This but unironically. Of course review your own work. But QA is best done by people other than those who develop the product. Having another set of eyes to check your work is as old as science.
That's something more than half of humans would disagree with (exact numbers vary, but most polls show that more than 75% of people globally believe humans have a soul or spirit).
But setting that aside: if humans are machines, they are sufficiently advanced machines that we have only a very modest understanding of them and no way to replicate them. Our understanding of ourselves is so limited that we might as well be magic.
I mean, there is some wisdom to that: most teams separate dev and QA, and writers aren't their own editors, precisely because it's hard for the author of a thing to spot their own mistakes.
When you merge them into one, it's usually a cost-saving measure that accepts quality control will take a hit.
What if "the thing" is a human and another human validates the output? Is that its own output (i.e. that of a human) or not? Doesn't the same apply to LLMs: you don't review the code within the same session you used to generate it?
I think a human and an LLM are fundamentally different things, so no. Otherwise you could argue that only something extraterrestrial could validate our work, since LLMs, like all machines, are also our outputs.
I have had other LLMs QA the work of Claude Code, and they do find bugs. It's a good cycle, but the bugs almost never get fixed in one shot without causing chaos in the codebase or vast swaths of code being rewritten for no reason.
> you can't rely on the thing that produced invalid output to validate its own output
I've been coding an app with the help of AI. At first it created some pretty awful unit tests, and then over time, as more tests were created, it got better and better at writing them. What I noticed was that the AI would use the context from the existing tests to create valid output. When I'd find bugs it had created and have the AI fix them (with more tests), it would then do it the right way. So it actually was validating the invalid output, because it could rely on other behaviors covered in the tests to find its own issues.
The project is now at the point that I've pretty much stopped writing the tests myself. I'm sure it isn't perfect, but it feels pretty comprehensive at 693 tests. Feel free to look at the code yourself [0].
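The pattern described above can be sketched in miniature: once a bug is found, a regression test pins the correct behavior down, giving the next AI pass unambiguous context to validate against. The function and test names below are purely illustrative, not taken from the linked project.

```python
def normalize_email(addr: str) -> str:
    """Trim whitespace and lowercase an email address."""
    return addr.strip().lower()

def test_normalize_email_strips_whitespace():
    # Regression test for a hypothetical bug where padded input
    # was stored verbatim; it now documents the expected behavior
    # for any future (human or AI) change to this function.
    assert normalize_email("  User@Example.COM ") == "user@example.com"

test_normalize_email_strips_whitespace()
print("ok")
```

Each fixed bug leaves a test like this behind, so the suite itself becomes the context that keeps later generations honest.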
I'm not saying you can't do it, I'm just saying it's not sufficient on its own. I run my code through an LLM and it occasionally catches stuff I missed.
Thanks for the clarification. That's the difference though, I don't need it to catch stuff I missed, I catch stuff it misses and I tell it to add it, which it dutifully does.
I can't tell if that is sarcasm. Of course you can use the same model to write tests. That's a different problem altogether, with a different series of prompts altogether!
When it comes to code review, though, it can be a good idea to pit multiple models against each other. I've relied on that trick from day 1.
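That cross-model trick can be sketched as a simple voting loop: run each reviewer independently over the same diff and surface the findings that multiple models agree on. The `reviewers` stubs below are placeholders; in practice each would wrap a call to a different LLM API.

```python
from collections import Counter

def cross_review(diff, reviewers):
    """Run several independent reviewers over the same diff and
    rank each reported finding by how many reviewers flagged it."""
    counts = Counter()
    for name, review in reviewers.items():
        for finding in review(diff):
            counts[finding] += 1
    # Findings flagged by more than one model are the strongest signal.
    return sorted(counts.items(), key=lambda kv: -kv[1])

# Stub "models" standing in for real API calls (illustrative only).
reviewers = {
    "model_a": lambda diff: ["off-by-one in loop", "missing null check"],
    "model_b": lambda diff: ["missing null check"],
}

for finding, votes in cross_review("...diff text...", reviewers):
    print(f"{votes} vote(s): {finding}")
```

Agreement between independent models isn't proof of a real bug, but it's a cheap way to prioritize which findings deserve a human look first.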