Hacker News

"Shipping" wouldn't be a problem, they could just run it from a network drive. Their PCs were networked, they needed to test deathmatches after all ;)

And the compilation speed difference wouldn't be small. The HP workstations they were using were "entry level" systems with (at max spec) a 100MHz CPU. Their Alpha server had four CPUs running at probably 275MHz. I know which system I would choose for compiles.
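A quick back-of-the-envelope on that gap (clock counts only, assuming a perfectly parallel build — an idealization, and it ignores per-clock differences between the Alpha and the HP's PA-RISC):

```python
# Naive upper bound on the compile-speed difference described above.
pc_mhz = 100                 # max-spec HP workstation, per the comment
alpha_cpus, alpha_mhz = 4, 275  # the Alpha server, per the comment
speedup = (alpha_cpus * alpha_mhz) / pc_mhz
print(f"~{speedup:.0f}x")    # → ~11x
```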



> "Shipping" wouldn't be a problem, they could just run it from a network drive.

This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow even on the PC), and the network overhead adds up. Especially back then.

> just run it from a network drive.

It still needs to be transferred to run.

> I know which system I would choose for compiles.

All else equal, perhaps. But were you actually a developer in the 90s?


What's the problem? 1997? They were probably using a 10Base-T network; that's 10 Mbit/s... Using Novell NetWare would let you transfer data at about 1 MB/s... quake.exe is < 0.5 MB, so the transfer would take around 1 second.
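Checking that arithmetic (the ~1 MB/s effective throughput and ~0.5 MB executable size are the figures from the comment, not measured numbers):

```python
# Back-of-the-envelope transfer time for quake.exe over NetWare.
# 10 Mbit/s raw is 1.25 MB/s; ~1 MB/s effective after protocol overhead.
throughput_mb_s = 1.0   # assumed effective throughput
exe_size_mb = 0.5       # quake.exe, per the comment
transfer_s = exe_size_mb / throughput_mb_s
print(f"{transfer_s:.1f} s")  # → 0.5 s, i.e. around a second with overhead
```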


Not sure what you mean by "problem". I said minuscule cancels out minuscule.


Networking in that era was not a problem. I also don’t know why you’re so steadfast in claiming that builds on local PCs were anything but painfully slow.

It’s also not just a question of local builds for development — people wanted centralized build servers to produce canonical regular builds. Given the choice between a PC and large Sun, DEC, or SGI hardware, the only rational choice was the big iron.

To think that local builds were fast, and that networking was a problem, leads me to question either your memory, whether you were there, or if you simply had an extremely non-representative developer experience in the 90s.


Again, I have no idea what you mean by networking being a "problem".


You keep claiming it somehow incurred substantial overhead relative to the potential gains from building on a large server.

Networking was a solved problem by the mid 90s, and moving the game executable and assets across the wire would have taken ~45 seconds on 10BaseT, and ~4 seconds on 100BaseT. Between Samba, NFS, and NetWare, supporting DOS clients was trivial.
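Those figures work out if you assume a payload of roughly 50 MB for the executable plus assets (my guess, not a number from the thread) at around 90% of the raw link rate:

```python
# Rough check of the ~45 s / ~4 s transfer estimates above.
payload_mb = 50.0  # assumed executable + assets
times = {}
for name, raw_mbit in [("10BaseT", 10), ("100BaseT", 100)]:
    effective_mb_s = raw_mbit / 8 * 0.9  # bits -> bytes, minus overhead
    times[name] = payload_mb / effective_mb_s
    print(f"{name}: ~{times[name]:.0f} s")
```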

Large, multi-CPU systems — with PCI, gigabytes of RAM, and fast SCSI disks (often in striped RAID-0 configurations) — were not marginally faster than a desktop PC. The difference was night and day.

Did you actively work with big iron servers and ethernet deployments in the 90s? I ask because your recollection just does not remotely match my experience of that decade. My first job was deploying a campus-wide 10Base-T network and dual ISDN uplink in ~1993; by 1995 I was working as a software engineer at companies shipping for Solaris/IRIX/HP-UX/OpenServer/UnixWare/Digital UNIX/Windows NT/et al (and by the late 90s, Linux and FreeBSD).


Ok that's not what I said. So we'll just leave it there.


That's exactly what you said, and it was incorrect:

> This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow even on the PC), and the network overhead adds up. Especially back then.

The network overhead was negligible. The gains were enormous.


>> I said minuscule cancels out minuscule.

> You keep claiming it somehow incurred substantial overhead

This is going nowhere. You keep putting words in my mouth. Final message.


Jesus Christ. Networking was cheap. Local builds on a PC were expensive. You are pedantic, foolish, and wrong.

Were you even a developer in the 90s? Are you trying to annoy people?



