
In my current project the agent (GPT-5) isn't helpful at all. The damn thing lies to me all the time.


They're idiot savants. Use them for their strengths. Know their weaknesses.


So, what are their strengths then? I fed it a detailed, very well documented and typed API description and asked it to construct some not-too-hard code snippets based on that. GPT-5 pretended to do the right thing but actually produced meaningless nonsense, even after I tried to reiterate and refine my tasks. Every junior dev is waaay better.


I recently had something that no longer compiled. I got bored sniffing around after maybe an hour, set Claude in Zed onto it, got a snack, and by the time I was back it had found the problem.

When I am unsure how to implement something, I give an LLM a rough description and then tell it to ask me the five questions it needs to produce a good solution. More often than not, that uncovers a blind spot.
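
Roughly, the prompt is something along these lines (paraphrased; the bracketed part stands in for whatever I'm actually building):

    Here is a rough description of what I want to build: [description].
    Before proposing a design or writing any code, ask me the five
    questions whose answers you most need to give a good solution.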

LLMs remain unhelpful at writing code beyond trivial tasks though.


Parsing a thousand-line stack trace and telling me what the problem was. Writing regexes. Spitting out ffmpeg commands.
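
For the ffmpeg case, a request like "pull the audio out of this mp4 without re-encoding" typically comes back as something like this (file names are placeholders):

    ffmpeg -i input.mp4 -vn -acodec copy output.m4a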



