
Your colleagues using the tech will be far ahead of you soon, if they aren’t already.

Far ahead in producing bugs, far ahead in losing their skills, far ahead in becoming irrelevant, far ahead in being unable to think critically, that's absolutely right.

The new tools have sets of problems they are very good at, sets they are very bad at, and they are generally mediocre at everything else. Learning those lessons isn’t easy, takes time, and will produce bugs. If you aren’t making those mistakes now with everyone else, you’ll be making them later when you decide to start catching up, and it will be more noticeable then.

Disagree. For the tools to become really useful (and fulfill the expectations of the people funding them) they will need to produce good results without demanding years of experience understanding their foibles and shortcomings.

I think there’s a chance the people funding this make the returns they hope for, but it’ll be a new business model that gets them there, not better results. The quality of results has been roughly stable for too long to keep expecting regular, meaningful improvements.

The AI hucksters promise us that these tools are getting exponentially better (lol), so the catch-up should be exponentially reduced.

I see the sarcasm and agree with it, but just in case anyone takes this at face value: we were getting exponentially better back in the early days, but we’re very much hitting diminishing returns now. We’re probably not going to see large improvements from this tech again.

And all of those things (good at, bad at, the lessons learned on current models current implementation) can change arbitrarily with model changes, nudges, guardrails, etc. Not sure that outsourcing your skillset on the current foundation of sand is long term smart, even if it's great for a couple of months.

It may be those who have to un-learn the previous iteration’s interactions, once something stable arrives, who end up at a disadvantage?


The tools have been very stable for the past year or so. The biggest change I can think of is how much MCP servers have fallen off; they’re generally considered not worth the cost in context tokens now. The scope of what you’d need to unlearn after a model change, or anything else, is on par with the normal language/library updates we’ve been doing for decades. We’ve plateaued, and it’s worth jumping in now if you’re still on the fence.

Why would the AI skeptics and curmudgeons today not continue to dismiss the "something stable" in the future?

"The market can stay irrational longer than you can stay solvent" feels relevant here.

... I mean, what tools one is supposed to be using, according to the advocates, seems to completely change every six months (in particular, the go-to excuse when it doesn't work well is "oh, you used foo? You should have used bar, which came out three weeks ago!"), so I'm not sure that _experience_ is particularly valuable, even if these things ever turn out to be particularly useful.


