
For the audience here, the opposite side of the coin is more relevant: Why don't you read software research?

Based on this and other articles (and on experience), it's an especially underutilized resource. By reading it, you would gain an advantage over the competition. Why aren't you using this advantage when it's there for the taking?

And why don't we see papers posted to HN?



Because it's usually not that useful. I have a friend who was software-adjacent and would post all these exciting studies showing that this or that practice was a big deal and massively boosted productivity. And without fail, those studies used toy experimental designs that had nothing to do with actual real-world conditions, and weren't remotely strong enough to convince me to upend opinions based on my actual experience.

I'm sure there are sub-fields where academic papers are more important -- AI research, or really anything with "research" in the name -- but if you're just building normal software, I don't think there's much there.


Thanks for your POV.

> those studies used toy experimental designs that had nothing to do with actual real-world conditions

Isn't that the nature of understanding and applying science? Science is not engineering: Science discovers new knowledge. Applying that knowledge to the real world is engineering.

Perhaps overcoming that barrier, to some degree, is worthwhile. In a sense, it's a well-known gap.


The question is whether spherical cow research tells you anything that holds up once you introduce the complications of reality into it. In physics, it clearly does. In economics, I think it does in a lot of cases (though with limits). In software engineering... well, like I say, there are areas where I'm sure it does, but research about e.g. strong typing or unit tests or PR review or whatever just doesn't have the juice, IME.


In real-world software development, managing complexity is often (usually) the core of the challenge. A simplified example leaves out the very thing that is the obstacle to most good software development. In fact, something that helps with managing complexity will sometimes impair performance as measured in some other way: it may slow execution speed by some amount, but allow the software to be broken into smaller pieces, each of which is more comprehensible. Managing this tradeoff is the key to much software development. If you test with a toy experimental design, you may be throwing out the very thing that is most important to study.
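As a toy illustration of that tradeoff (hypothetical Python, not taken from any study): the fused version runs in one pass, the decomposed version in two, but each small piece is easier to test and reason about on its own.

    # One pass, marginally faster, but the two concerns are tangled together.
    def total_discounted(prices, rate):
        total = 0.0
        for p in prices:
            total += p * (1 - rate)
        return total

    # Two passes, marginally slower, but each piece is simple and reusable.
    def discounted(prices, rate):
        return [p * (1 - rate) for p in prices]

    def total(prices):
        return sum(prices)

    # total(discounted(prices, 0.1)) gives the same result as
    # total_discounted(prices, 0.1), just split into comprehensible parts.

A benchmark that measures only execution speed would count the decomposed version as strictly worse, which is exactly the kind of thing a toy experimental design throws away.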


Great point. I think that also emphasizes the necessity of the D in R&D: the research has to be adapted to the real world to be useful, for example to organizational frameworks and processes that manage complexity, as you say.

Most software organizations I know don't have anything like the time to do D (to distinguish it from software development), except in a few clear high-ROI cases. Big software companies like Microsoft and Google have research divisions; I wonder how much they devote to D as opposed to R, and how much of that is released publicly.


Well, for example, consider this recent study that claimed developers using AI tools take 19% longer to finish tasks [1].

This was their methodology:

> we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years. Developers provide lists of real issues (246 total) that would be valuable to the repository—bug fixes, features, and refactors that would normally be part of their regular work. Then, we randomly assign each issue to either allow or disallow use of AI while working on the issue.

Now consider whether you expect this research to generalize. Do you expect that if you / your friends / your coworkers started using AI tools (or stopped using them), the difference in productivity would also be 19%? Of course not! They didn't look at enough people or contexts to get two sig figs of precision on that average, nor enough to expect the conclusion to generalize. Plus the AI tools are constantly changing, so even if the study nailed the average productivity change, it would be wrong a few months later. Plus the time period wasn't long enough for the people to build expertise, and "if I spend time getting good at this, will it be worth it?" is probably the real question we want answered. The study is so weak that I don't even feel compelled to trust the sign of its result to be predictive. And I would be saying the same thing if it reported 19% higher instead of 19% lower.
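To put a rough number on the precision point, here's a back-of-the-envelope sketch in Python. The 0.40 standard deviation of per-developer effects is an assumption for illustration, not a figure from the study:

    import math

    n = 16      # developers in the study
    sd = 0.40   # ASSUMED std dev of per-developer slowdown; illustrative only
    se = sd / math.sqrt(n)    # standard error of the mean = 0.10
    ci = 1.96 * se            # 95% CI half-width ~ 0.196

    # Under these assumptions, a -19% point estimate comes with a 95% CI
    # of roughly (-39%, +1%): spanning "much worse" to "no effect at all",
    # nowhere near two significant figures of precision.
    print(f"-19% +/- {ci:.0%}")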

I don't want to be too harsh on the study authors; I have a hard time imagining any way to do better given resource constraints and real world practicalities... but that's kind of the whole problem with such studies. They're too small and too specific and that's really hard to fix. Honestly I think I'd trust five anecdotes at lunch more than most software studies (mainly because the anecdotes have the huge advantage of being from the same context I work in). Contrast with medical studies where I'd trust the studies over the anecdotes, because for all their flaws at least they actually put in the necessary resources.

To be pithy: maybe we upvote Carmack quotes more than software studies because Carmack quotes are informed by more written code than most software studies.

[1]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...


Taking issues like that into account is reading critically, which is great and essential. But dismissing ideas on that basis - often done on HN generally, even for large medical studies - is intellectually lazy, imho:

Life is full of flaws and uncertainty; that is the medium in which we swim and breathe and work. The solution is not to lie at the bottom until the ocean becomes pure H2O; the trick is to find value.


> I don't want to be too harsh on the study authors

Well, I'll do it for you. There's a lot of attention-grabbing bull*it out there. For example, I've seen a study on LinkedIn claiming that 60% of Indians use AI daily in their jobs, but only 10% of Japanese. You can guess who conducted it: very patriotic, but far from reality.


For me it's a discovery problem. I have a hard time finding papers to read. Where do you go to find interesting or relevant papers?



