
Yeah, once it hits, it really explains so many of the issues in large organizations (and a lot about small organizations and indie development as well). For example, why do mergers and buyouts rarely work out? Conway's law largely explains it. When a product is bought by an organization, the result is an integration over time of both organizations' org charts combined. If both companies are large, this immediately becomes an intractable mess. If the original company was small and the new company is large, it _also_ becomes intractable, because now different teams have to communicate to manage or split components that don't fit the communication patterns of the new organization. That leads to large refactors and rewrites, and in the end the essential qualities of the original software inevitably change.

Even if you know this, there is nothing you can do: you have to organize yourself somehow, and how you do that can't be left to simply mirror however this piece of software that was brought in happens to be organized, because _that too_ doesn't reflect an actual org chart but is an integration over time of all the org charts that have ever touched it.

I don't even know how to think about the implications this has for the use of open source software and how open source is developed, like... I think there are huge opportunities for research into Conway's law and how it relates to software development practices, software quality, etc.



I have this naively optimistic hope that AIs will allow orgs to scale units past Dunbar’s number.

We humans can’t effectively communicate in groups larger than about 7 people, but AIs have no such limit. We could all simply talk to one manager that can integrate everything and everyone into a unified whole.
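That limit lines up with the classic pairwise-channel argument from Brooks' The Mythical Man-Month: the number of communication channels grows quadratically with group size. A quick sketch of the combinatorics (my illustration, not from the comment above):

```python
def channels(n: int) -> int:
    """Number of pairwise communication channels among n people: n choose 2."""
    return n * (n - 1) // 2

# A 7-person team already has 21 channels to maintain;
# a Dunbar-sized group of 150 would have 11,175.
print(channels(7))    # 21
print(channels(150))  # 11175
```

An AI "manager" talking to everyone individually would turn those n*(n-1)/2 channels into n hub-and-spoke conversations, which is the scaling claim being made here.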

It’s like the ship Minds in the Culture series having hundreds or even thousands of conversations at once.


As the initial novelty of AI-assisted work fades, and given the inherent vagueness and unpredictability of managing, I'm eagerly awaiting not my job but my boss's job being replaced by AI. There's no need for exactness there, only superficial clarity, decisiveness, and seeming coherence.


That's an interesting thought, yeah... technology and communication are definitely interwoven, things are possible today that were impossible before computers simply due to communication constraints. One could imagine a large number of practically autonomous small teams organized "automatically", more mirroring a neural network than a hierarchy.


The problem is always communication, because it is the means of cooperation. The root of many issues in software development is the simple fact that instead of letting the required communication pathways define the organization, the organization defines the pathways, and thereby causes communication obstructions.

"Not Just Bikes" has a good few videos, including https://www.youtube.com/watch?v=n94-_yE4IeU and a couple more, about how larger roads effectively cause more traffic ("induced demand", "traffic generation"). Organizational structures are like roads, and like roads they can get overloaded, which means traffic has to be reduced. There are even communication jams, and to combat them organizations enforce something like communication reduction (lower information throughput) to keep things manageable. That in turn means decisions get made with less and less information the more steps there are in a communication chain (e.g., upward/downward through a hierarchy), which severely hampers the quality of decision making.
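A toy model of that decay, purely illustrative: assume each relay in the chain retains only a fixed fraction of the detail it received, so fidelity falls off exponentially with chain length. The retention figure is an arbitrary assumption for the sketch, not a measured value:

```python
def fidelity(hops: int, retained_per_hop: float = 0.8) -> float:
    """Fraction of the original information surviving after `hops` relays,
    assuming each relay retains `retained_per_hop` of what it received."""
    return retained_per_hop ** hops

# With 80% retention per hop, a five-level chain passes on only about a third.
print(round(fidelity(5), 2))  # 0.33
```

Whatever the real per-hop loss is, the exponential shape is why flattening a hierarchy (fewer hops) helps more than speeding up any single link.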

This whole mess is also the reason the agile manifesto puts individuals and interactions before processes and tools; in fact it implies you should change even the organizational setup to fit the project, not the other way around. But in the world of "managerial feudalism" (David Graeber), this is pretty much impossible to pull off.


The tragedy of agile is that the practices labelled "agile" in practice tend to exemplify the exact opposite of everything the manifesto advocates.


You might be correct, but the AI minds that you are contemplating don't exist yet, and there is no reason to think that they will be developed from current LLMs.

Once again, seizing the term AI to mean LLMs and other current generative techniques has poisoned clear thinking. When we think "AI", we are thinking about HAL and Cortana and C3PO, which is not what we actually have.


Interestingly positive; I think I might agree with you. It seems the number-one problem of good leaders in organizations is that they can't be everywhere at once. But an AI can be.



