Both AI Fanatics and AI Luddites need to touch grass.
We work in Software ENGINEERING. Engineering is all about which tools make sense to solve a specific problem. In some cases, AI tools show immediate business value (e.g., TTS for SDRs), and in other cases the value is less obvious.
This is all the more reason why learning AI/ML fundamentals is critical, in the same way that understanding computer architecture, systems programming, algorithms, and design principles is critical to being a SWE: it lets you make a data-driven judgment on whether an approach works or not.
Given the number of throwaway accounts that commented, it clearly struck a nerve.
There is a lot of work that goes on before even reaching the point to write code.
For example, being able to vibecode a UI wireframe instead of being blocked for two sprints by your UI/UX team, or to template an alpha to gauge customer interest in one week instead of one quarter, is a massive operational improvement.
Of course these aren't completed products, but in most cases customers can accept that level of polish in the short-to-medium term, or as part of an alpha.
This is why I keep repeating ad nauseam that most decision-makers don't expect AI to replace jobs. The reality is, professional software engineering is about translating business requirements into tangible products.
It's not the codebase that matters in most cases - it's the requirements and outcomes that do. You can refactor and prettify your codebase all you want, but if that work isn't directly driving customer revenue or value, then that time could be better spent elsewhere. Customers purchase your product for the use case it enables.
> The reality is, professional software engineering is about translating business requirements into tangible products.
and most requirements (ime anyway) are barely half-baked and incomplete, causing re-testing and re-work over and over - that's the real bottleneck...
ai/vibe coding may make that cycle faster, but idk, it might actually make things worse long-term: now the race course has rubber walls and there is less penalty for just bouncing left and right instead of smoothly speeding down the course to the next destination...
> most requirements (ime anyways) are usually barely half-baked and incomplete causing re-testing and re-work over and over which are the real bottlenecks...
> ai/vibe coding may make that cycle faster but idk it might actually make things worse long-term
By making the cycle faster, it reduces the impact while also highlighting issues within the process - namely, that there are too many incompetent PMs and SWEs.
Additionally, in a lot of cases a PM won't tell you that you're actually doing checkbox work - work that someone needs to do, but that doesn't justify an entire group of 2-3 SWEs - because then you obviously wouldn't do it. This kind of work is ripe for being automated away via vibecoding or agents.
A good litmus test is how closely the feature you are working on aligns with revenue generation: if your feature cannot be directly monetized as its own SKU or as part of a bundle, you are working on a cost center, and cost centers are what we want to reduce, either by automating them away, offshoring them, or a mix of both.
The reality is that perfection is the enemy of good, and addressing this requires Engineers and PMs working together to negotiate requirements.
If this does not happen at your workplace, then either you are working on a cost-center feature that doesn't matter, you are viewed as a less relevant employee, or you are working for a bad employer. In any case, it is best for your career to leave.
In my experience, if you've actually chatted with executive leadership teams at most F500s, when they think about "AI Safety" they are actually thinking about standard cybersecurity guardrails - zero-trust, identity, authn/z, and API security - with an added layer of SLAs around deterministic output.
But by being able to constantly iterate and experiment, companies can release features and products faster and with better margins: getting a V1 out the door in one sprint and spending the rest of the quarter adding guardrails is significantly cheaper than spending one quarter building a more polished version and then spending one more quarter building the same guardrails anyhow.
Basically, we're returning to the pre-COVID norms of the software industry: building for pragmatism instead of perfection. I saw a severe degradation in the quality of SWEs during and after COVID (too many code monkeys, not enough engineers/architects).
i think that is the ideal situation, but i am probably a bit pessimistic from my experience; i feel like people will experiment more, but it will be more like throwing things at the wall to see what sticks instead of focused customer research (by devs or pm/pdms alike)... anyways, just my feels...
> reality is that perfection is the enemy of good
not to take too much from your point, but i would slightly modify this to something like "perfection is the enemy of good, but good can be the enemy of what is needed"; we should know what the customer needs and build that, not just something (more) good... sorry for the pedantry lol, we are probably not disagreeing here but anyway