Exactly 2 years ago I remember people calling AI stochastic parrots with no actual intellectual capability, and people on HN weren't remotely worried that AI would take over their jobs.
I mean, in 2 years the entire mentality shifted. Most people on HN were just completely and utterly wrong (also quite embarrassing if you read how self-assured these people were, and this was like 70 percent of HN at the time).
First, AI is clearly not a stochastic parrot, and second, it hasn't taken our jobs yet, but we can all see that potential up ahead.
Now we get articles like this saying your skills will atrophy with AI because the entire industry is using it now.
I think it's clear. Everyone's skills will atrophy. This is the future. I fully expect that in the coming decades the generation after zoomers will never have coded without the assistance of AI, and they will have an even harder time finding jobs in software.
Also: because the change happened so fast, you see tons of pockets of people who aren't caught up yet. People who don't realize that the above is the overarching reality. You'll know you're one of these people if AI hasn't basically taken over your workplace and you and your coworkers aren't going all-in on Claude or Codex. Give it another 2 years and everyone will flip here too.
About a year ago, another commenter said this in response to the question "Ask HN: SWEs how do you future-proof your career in light of LLMs?":
> "I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time."
Even a year ago that seemed like a ridiculous thing to say. LLMs have made one thing very clear to me: a massive percentage of developers derive their sense of self-worth from how smart coding makes them feel.
Yes. If one thing is universal among people, it's that they can't fully accept reality at face value if that reality is violating their identity.
What has to happen first is that people need to rebuild their identity before they can accept what is happening, and that rebuilding will happen more slowly than AI is outrunning all of us.
What is my role in tech if for the past 20 years I was a code ninja but now AI can do better than me? I can become a delegator or manager to AI, a prompt wizard or some leadership role… but even this is a target for replacement by AI.
AI doesn't need or care about "high quality" code in the same ways we define it. It needs to understand the system so that it can evolve it to meet evolving requirements. It's not bound by tech debt in the same way humans are.
That being said, what will be critical is understanding business needs and being able to articulate them in a manner that computers (not humans) can translate into software.
Well, it's taken the blame for job cuts that are really due to the broad growth slowdown since the COVID fiscal and monetary stimulus was stopped and replaced with monetary tightening, and, most recently, the additional hammers of the Trump tariff and immigration policies. Lots of people want to obscure, deny, and distract from the general economic malaise (and because many of the companies involved, and even more of their big investors, are in incestuous investment relationships with AI companies, "blaming" AI for the cuts is also a form of self-serving promotion).
Two years ago there were also hundreds of people constantly panic-posting here about how our jobs would be gone in a month, that learning anything about programming was now a waste of time and the entire profession was already dead, with all other knowledge work guaranteed to follow. People were posting about how they were considering giving up on CS degrees because AI would make them pointless. The people who used language like "stochastic parrots" were regularly mocked by AI enthusiasts, and the AI enthusiasts were then mocked in return for their absurd claims about fast take-off and imminent AGI. It was a cesspool of bad takes coming from basically every angle, strengthening in certainty as they bounced off each other's idiocy.
Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.
Bro. You’re gonna have a hard time finding people panic posting about how they’re going to lose their jobs in a month. Literally find me one. Then show me that the majority of people posting were panicking.
That is literally not what happened. You’re hallucinating. The majority of people on HN were so confident in their coding abilities that they weren’t worried at all. Just a cursory glance at the conversations back then and that is what you will see OVERALL.
No it very clearly is. Even still today, it is obvious that it has zero understanding of anything and it's just parroting training data arranged in different ways.
No. Many of the answers it produces can only be attributed to intelligence. Not all, but many can be. We can prove that these answers are not parroted.
As for “understanding” we can only infer this from input and output. We can’t actually know if it “understands” because we don’t actually know how these things work and in addition to that, we don’t have a formal definition of what “understanding” is.
Yeah -- stochastic just implies a probabilistic method. It's just that when you include enough parameters your probabilities start to match the actual space of acceptable results really really well. In other words, we started to throw memory at the problem and the results got better. But it doesn't change the fundamentals of the approach.
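To make that concrete, here's a minimal toy sketch in Python (the token names and scores are invented for illustration, not taken from any real model) of what "stochastic" means here: the model assigns scores to candidate next tokens, softmax turns those scores into a probability distribution, and the output is sampled from it. Scale that up with billions of learned parameters and the distribution starts matching the space of acceptable results, but the sampling step stays the same.

    import math
    import random

    # Toy stand-in for a language model: given a context, return scores
    # (logits) for a few candidate next tokens. In a real LLM these scores
    # come from billions of learned parameters; here they are made up.
    def toy_logits(context):
        return {"parrot": 2.0, "model": 1.5, "banana": -1.0}

    def sample_next_token(context, temperature=1.0):
        logits = toy_logits(context)
        # Softmax: turn scores into a probability distribution over tokens.
        exps = {tok: math.exp(s / temperature) for tok, s in logits.items()}
        total = sum(exps.values())
        probs = {tok: v / total for tok, v in exps.items()}
        # "Stochastic": sample from the distribution rather than returning
        # a single deterministic answer.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token("AI is just a stochastic"))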
In my experience, it's not that the term itself is incorrect; it's more that people use it as a bludgeon to end conversations about the technology, rather than inviting nuance about how it can be utilized and what its pitfalls are.
Colloquially, it just means there’s no thinking or logic going on. LLMs are just pattern matching an answer.
From what we do know about LLMs, we know it is not trivial pattern matching: the output it formulates is, by the very definition of machine learning, original information rather than something copied from the training data.
But AI is still a stochastic parrot with no actual intellectual capability... who actually believes otherwise? I figured most people had played with local models enough by now to understand that it's just math underneath. It's extremely useful, but laughably far from intelligence, as anyone who has attempted to use Claude et al for anything nontrivial knows.
This quote is so telling. I’m going to be straight with you and this is my opinion so you’re free to disagree.
From my POV you are out of touch with the ground-truth reality of AI, and that's OK because it has all changed so fast. Everything in the universe is math-based, and in theory even your brain can be fully modelled by mathematics… so "it's just math underneath" is a pointless observation.
The ground truth reality is that nobody, and I mean nobody, understands how LLMs work. This isn't me making shit up: if you know transformers, if you know the industry, and if you even listen to the people behind the technology who make these things… they all say we don't know how AI works.
But we do know some things. We know it's not a stochastic parrot because, in addition to the failures, we've seen plenty of successes on extremely complicated problems that are too nontrivial for anything other than an actual intelligence to solve.
In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to continue calling it a stochastic parrot but by then it will just be lip service. Your current reaction is normal given the paradigm shift happened so fast and so recently.
There’s tons more where that came from. Like I said lots of people are out of touch because the landscape is changing so fast.
What is baffling to me is that not only are you unaware of what I'm saying, but you also think what I'm saying is batshit insane despite the fact that the people at the center of it all who are creating these things SAY the same thing. Maybe it's just terminology… understanding how to build an LLM is not the same as understanding why it works or how it works.
Either way, I can literally provide tons more evidence if you're still not getting it: we do not understand how LLMs work.
Also, you can prompt an LLM about whether or not we understand LLMs; it should tell you the same thing I'm saying, along with explaining transformers to you.
That's a CEO of an AI company saying his product is really superintelligent and dangerous and nobody knows how it works and if you don't invest you're going to be left behind. That's a marketing piece, if you weren't aware.
Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.
Geoffrey Hinton, the father of AI, quit his job at Google to warn people about AI. What's his motivation? Altruism.
Man, it's not even about people saying things. If you knew how transformers and LLMs are built, you would know that even for the most basic model we do not understand how they work.
I mean at a minimum I understand how they work, even if you don't. So the claim that "nobody and I mean nobody understands how LLMs work" is verifiably false.
Did you not look at the evidence I posted? It's not about you or me; it's about humanity. I have two on-the-ground people who are central to AI saying humanity doesn't understand AI.
If you say you understand LLMs, then my claim is that you are lying. Nobody understands these things, and people core to building these things are in absolute agreement with me.
I build LLMs for a living, btw. So it's not just other experts saying these things. I know what I'm talking about on a fundamental level.
Are you going to address a single point I or others have made? Or are you gonna dodge everything with some dismissive remark? I think it's clear you're wrong.
You know one thing an LLM does better than me and many other people? It admits it's wrong after it's been proven wrong. Humans, including me, have a hard time doing that, but I'm not the one that's wrong here. You are wrong, and that's OK. I don't know why people need to go radio silent or say stupid shit just to dodge the irrefutable reality of being completely and utterly wrong.
I've seen it solve a complex domain-specific problem and build, in 10 minutes, a codebase that took a human a year to write. And it did it better.
I've also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick and choose the one piece of evidence that is convenient to my worldview? What should I do? Actually, why don't you tell me what you ended up doing?
Why does it even matter if it is a stochastic parrot? And who's to say that humans aren't also?
Imagine the Empire State Building was just completed, and you had a man yelling at the construction workers: "PFFT, that's just a bunch of steel and bricks."
Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers. Are they right? Are you right? We don't really know, do we...
Sam Altman is the modern day PT Barnum. He doesn't believe a damn thing except "make more money for Sam Altman", and he's real good at convincing people to go along with his schemes. His actions have zero evidential value for whether or not AI is intelligent, or even whether it's useful.
Maybe not, but I was answering to "nobody believes", not to whether AI is intelligent or not (which might just be semantics anyway). Plenty believe, especially the insiders working on the tech, who know it much better than us. Take Ilya Sutskever, of "do you feel the AGI" fame. Labelling them all as cynical manipulators is delusional. Now, they might be delusional as well, at least to some degree - my bet is on the latter - but there are plenty of true believers out there and here on HN. I've debated them in the past. There are cogent arguments on either side.
> Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers. Are they right? Are you right? We don't really know, do we...
The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.
It's not that the money can predict what is correct, it's that it can tell us where people's values lie.
Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.
Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.
However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.
There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.
> However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.
Very few people say this. But it's realistic to say that, at the least, in the next decade our jobs are going out the window.
Someone also believed the internet would take over the world. They were right.
So we could be right or we could be wrong. What we do know is that a lot of what people were saying or "believed" about LLMs 2 years ago is now categorically wrong.