Programmers At Work: Bill Gates (1986) (programmersatwork.wordpress.com)
308 points by ohjeez on Feb 5, 2014 | 175 comments


Side note: The illustrated portrait of Gates that accompanies this article was done by, drumroll...my dad!

In fact, my pops did the illustrations for the entire series. Microsoft Press was one of his favorite clients back in the day. Growing up, he'd tell me stories of disorganized secretaries at Microsoft sending him awful reference photos, so he'd look up the execs in the phonebook and call them at home. Sometimes he'd get their wives/husbands on the phone, and he'd nicely explain that he was an artist, and would they mind if he borrowed the family photo album? Many fine stories from those cases, too.

Somewhat ironically, my dad never learned how to do art on a computer. Now he's a commercial construction inspector in Seattle.


Wow, thanks for sharing this! That's an awesome anecdote, and I really like his work.


You're welcome! I talked with my dad last night and he's flabbergasted the book is now online (in blog form, at least). He still has all the original illustrations sitting in a flat file in our garage. We discussed what to do with them, if anything, and I suggested the Computer History Museum might be interested. He liked the idea, so we're going to reach out soon. If anyone knows someone there, please drop a line to jackson.solway at gmail. Thanks!


Small world - nice!


That's a great story. Thank you for sharing!


I really liked this quote - "Programmers just starting out today never had to squeeze, so it’s a little harder for them to get the right religion because they always think of resources as being immediately available. Ten years ago every programmer ran into resource limitations, so the older programmers are always thinking about those things."

And --

"I still think that one of the finest tests of programming ability is to hand the programmer about 30 pages of code and see how quickly he can read through and understand it."

Does any company run "code comprehension" tests for hiring? Any thoughts?


As the CTO of a cloud startup, I did exactly that.

Apart from the usual brain-teasers and whiteboard coding, we gave candidates 20 minutes to browse through a small database project in Eclipse. Afterwards, I checked whether they could explain the data structure of that database.

The outcome of this test was a much better predictor for competence than any other technical test. I can whole-heartedly recommend it.


Does this mean you recruited some people who couldn't explain the structure and you found they were no good anyway?


Sometimes you put less weight on an aspect that deserves more because the candidate showed other skills, 'cultural fit' etc.

Sometimes you realise that was a mistake :)


But the flip side of that is that optimizations that were made when resources were tight may bite you in the ass when resources become available. For example, the declaration for the WinMain entry point that still persists to this very day in Win32 and Win64 applications looks like this:

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpszCmdLine, int nCmdShow);
Note the hInstance and hPrevInstance bits. Back in the day, Windows loaded the code for a Windows application's .EXE into memory once and then maintained several pointers to a separate data section for each running instance of the application. The hInstance and hPrevInstance parameters are actually pointers to such data sections. One of the reasons for this was so that the program needed to do certain kinds of initialization only once; if future instances needed that data, they could copy it into themselves from the previous instance (remember, no protected memory!) with the GetInstanceData function. In addition, it was a simple way of telling if you were already running; window classes, for instance, were registered once per program, not once per instance.

Of course, in Win32, instances of a program are their own processes with their own address spaces, so you can't do the kinds of clever tricks you could in Win16 to save time and memory, and hPrevInstance is always NULL. Win16 programming is full of hacks like this that made sense in their day but are counterproductive now, though their vestiges live on in Win32.


When I interviewed at Garmin they gave me a couple pages of their actual code and asked me to point out the bugs they introduced to it. Thirty pages seems like a bit much for an interview, but I definitely like the idea. Most jobs you'll be looking at other people's code way more often than writing your own.


I really like this idea, but it's not easy to apply. Nowadays, most jobs involve writing code that is built on layers and layers of abstraction, so a lot of the time you have to rely on the API/docs that are given to you.

On the other hand, if the job requires a lot of algos and low-level code, this will be a good test to gauge an engineer's thought process and mental capability.


Many libraries and abstractions that I come across don't have much/any documentation (or it's out of date, etc), so the ability to quickly look at the source and understand it still seems valuable to more than just low level programmers.


I recently interviewed with one of the "big 10" tech companies and was asked to do similarly; I was given 4 pages instead of 30. I felt like it was a good and useful exercise.


I've done this for interviews - we showed them some code that had real bugs in it, as well as some poor design choices (we told them this in advance so they wouldn't think our real code was like that). We were looking for people who weren't afraid to tell us the code sucked and what they'd do to improve it. If they said it was fine as-is, or spotted very few of the bugs, that was a strong no-hire indication.
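Purely for illustration (this is nobody's actual interview code), here's the kind of short snippet with planted bugs that an exercise like this might use, sketched in Python:

```python
def average(values):
    """Intended to return the arithmetic mean of a list of numbers."""
    total = 0
    for i in range(len(values) - 1):  # planted bug: skips the last element
        total += values[i]
    return total / len(values)        # planted bug: ZeroDivisionError on []
```

A candidate who reads it carefully should flag both bugs and, ideally, point out that `sum(values) / len(values)` plus an explicit empty-list check is the better design.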

We tried sending the sample to them the day before, but there wasn't a strong correlation between doing that and good responses from the candidates. Basically, if they got it, they got it. And if they didn't, they didn't.


In 2013 I briefly contracted for a PHP shop. The hiring manager would invite people in for an interview and present them with the following code:

  mysql_query('
    INSERT INTO `products` (`name`, `descriptions`, `price`)
    VALUES ( '.$_GET["name"].', '.$_GET["description"].', '.$_GET["price"].');
  ');
Then he would ask, "how do you feel about that?" He claimed this question weeded out a lot of potential candidates.
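The expected answer, of course, is that the snippet interpolates raw $_GET input straight into the SQL string, a textbook injection hole (it doesn't even quote the values). A minimal sketch of the parameterized alternative, shown here with Python's sqlite3 rather than PHP/MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, description TEXT, price REAL)")

# Untrusted request input, including an attempted injection:
form = {"name": "widget'); DROP TABLE products; --",
        "description": "a widget",
        "price": "9.99"}

# Placeholders let the driver treat the input strictly as data, never as SQL.
conn.execute(
    "INSERT INTO products (name, description, price) VALUES (?, ?, ?)",
    (form["name"], form["description"], form["price"]),
)

row = conn.execute("SELECT name, price FROM products").fetchone()
```

The equivalent fix in PHP would be prepared statements via mysqli or PDO rather than the long-deprecated mysql_* functions.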


I like it.

I don't know if you can use this kind of test on fresh-out-of-college hires, because a lot of professors don't teach security, and there may be plenty of other bad habits to break anyway.

But on any experienced hire -- even someone one year out of school -- it's perfect for screening out the codemonkeys.


I thought the irony of that was the masses of RAM that the .NET runtime consumes? Or am I missing something?

Every time I opened Management Studio for SQL Server 2005+, it took forever and caused massive disk churning, compared to the Win32 version for SQL Server 2000 and before, which didn't engage in abstract disk churn and memory gobbling.


Is that really a function of the app being written on top of .NET? It could be, but more often than not, that's due to poor architectural decisions. Pointing fingers without knowing the internal details isn't constructive criticism.


.NET applications should not just churn at random. There needs to be real memory pressure (was there?).

I wonder if something went wrong with the native image generation during setup? It may be trying to JIT it every time on startup. That could be quite slow and result in a lot of disk activity.


Is Management Studio 2005+ written on .NET? Any reference on that?


As far as I knew, it was. You noticed the flickering views and new-style GUI controls; e.g., looking at the properties of a server caused massive update flicker, common to .NET controls (early versions only?).


I like the idea of a code comprehension test. I would give the code to the candidate 1-2 days before the interview, and let them ponder it.

I know, technically they can cheat and google API calls or show it to their friends, but I bet with a little prying in the interview you could uncover that.


In this context, taking help shouldn't be considered cheating. It's better that s/he looks up the API calls. It wouldn't be a comprehension test if all that's required is knowing a few API calls. Isn't the whole point of the exercise to see if the candidate can get past the API calls and understand the big picture?


I've had interviews that obfuscate some basic software system or algorithm. They were always just a page or two though, never 30.


"Ten years ago every programmer ran into resource limitations, so the older programmers are always thinking about those things."

Interestingly, this is why I believe VCs let founders of start-ups do partial cash-outs in later rounds. In the later rounds, when you are shooting to build a billion-dollar business, it's not optimal for the company if you as CEO are thinking about your everyday finances or paying past debt.


I think you misunderstood the quote. BG is referring to hardware resource limitations... and saying it was a good thing in terms of encouraging efficient design.


Yeah you definitely misinterpreted that


IMO, that quote's garbage and really "back in my day". Our resources are just more powerful today. We still run into performance issues with things like bulk updates and computation times (MapReduce / distributed computing) quite frequently.


I wouldn't say the quote is garbage; there's truth to the idea that over time we have to worry a little bit less about cleverly dealing with resources, assuming the amount of resources we care about remains static... but of course it doesn't.

The really interesting thing about the quote is that it is from 1986 but could have been stated the same way in 1996, 2006 and likely in 2016 with the same "10 year back" qualifier and nobody would bat an eyelash. Systems are way more powerful but we're also pushing a LOT more data (in terms of depth of the data, breadth of the data, and types of data).

To be fair though, these days a lot of people see "spin up 25 more AWS instances" as a perfectly valid way of solving a bottleneck when compared to months of developer time whereas back in 1986 the decision would be a no-brainer in the other direction given that the equivalent of spinning up 25 AWS instances today would be economically infeasible back then for almost any commercial entity (even if you scaled back the amount of processing needed to fit in the context of that era).


>IMO, that quote's garbage and really "back in my day". Our resources are just more powerful today.

Which is the whole point of the quote. And the "back in my day" is 100% true in the field of programming. Ever heard of Moore's law? Ever tried to program with 8K of RAM?

>We still run into performance issues with things like bulk updates and computation times (MapReduce / distributed computing) quite frequently.

Yes, we still run into "performance issues". But with a MILLION times the data they ran into performance issues with.

Heck, we can now emulate a whole computer of that era, interpreted in a web browser in Javascript, and still get acceptable speed to play games on it.

Perspective, man.


Nowadays, in-browser Javascript can execute in under 1 second algorithms that would have taken tens of minutes or hours to compute in the late eighties. Back then, even a dumb spell-checker like the one implemented by Peter Norvig in Python would have been problematic and was considered a major feat. If a software engineer from the eighties traveled forward in time, he'd be amazed that we are carrying super-computers in our pockets.
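For reference, the heart of Norvig's toy corrector is brute-force generation of every string one edit away from the input, then keeping the ones that are known words. A condensed sketch of that candidate-generation step (adapted from his widely published essay, not his exact code):

```python
import string

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away from word."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)
```

Enumerating a couple hundred candidates per word and checking each against a large dictionary would have been a noticeable cost on a late-eighties machine; today it's effectively instant.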


I like the implication that these "software engineers from the eighties" are absurdly rare creatures that no longer exist.

edit: but yes, it's amazing.


To illustrate coldtea's point:

http://bellard.org/jslinux/

You've just fully booted Linux in your browser. That Linux sees 16 MB of RAM for itself. If you are a web programmer, do you ever check how much memory the browser spent on this, or on any other page you click to?


Ironic language given that most "engineers" today can't write code without a dependency on built-in garbage collectors.

Do you really think developers these days are just as good at optimizing for limited system resources as they were in 1986?

"We still run into..." Who? Maybe you - but most developers aren't running into map/reduce problems every day. And when they do, they certainly aren't coding a solution from scratch.


I think what really happened is that the programmer/software engineer profession has become a lot more accessible to the masses. Back in billg's day, when he started programming, I would bet fewer than 1,000 kids around the country had access to a computer. In 1986, the total number of programmers would surely have been less than 10,000, and most of them worked for MS, IBM, DEC, etc. In the current time, half of SF are software engineers and entrepreneurs :) I guess my point is there are still a lot of people who work on the hard-core stuff like the Linux/Windows kernels, where every byte counts, or the JVM/LLVM, where data structures/algorithms are rewritten over and over to get a bit more performance. I would say the number of programmers who have to care about optimizing system resources may be even higher than back then; it's just that there are a lot more application developers these days.


I guess my original post was focused on map/reduce problems; I meant it to be more general. We take our site response time very seriously, and therefore our application code and data layer has to be very fast.

Yes, boxes are faster and easier to spin up. The 'downside' to that is that many websites on the internet are blazingly fast: Google, Amazon, Dropbox, etc. Because they are so fast (or perceptibly fast via tricks), the rest of the web is held to that higher standard. Twenty years ago, things taking a long time on a computer screen with no updates was acceptable. Today it really isn't, because there are N other websites selling your exact product or experience.


Maybe these days other metrics are more important to optimize for? Ultimately it is about transforming money (resources) into output (more resources), and depending on the times, different things bottleneck you there.


optimise for maintainability, reusability, composability


and talent pool availability


>>Our resources are just more powerful today

Have you ever programmed an embedded system? Ever worked on an 8-bit microcontroller?

There are hard, very hard problems that come up which have to be solved with really limited resources.


I sure have. I've written k-maps and wired a multi-bit ALU by hand.

My point is that those problems aren't any harder. They're just different. (In many cases it's because the toolchains stink by modern standards; embedded systems tend to just crash with no helpful debugging information.) BG and co. stood on the shoulders of the guys who made their chips 'just work', just as we stand on their shoulders. To complain about how 'kids have it so easy these days' just makes you sound crotchety.


>My point is that those problems aren't any harder. They're just different.

Not really. The problems are made a lot harder by finite memory being more in play. Even with a good toolchain, when you actually have more data than you have RAM, you'll run into problems that are completely unique to the embedded world.

Shrinking code to fit is very hard.


He answers:

"The hardest part is deciding what the algorithms are, and then simplifying them as much as you can. It’s difficult to get things down to their simplest forms. You have to simulate in your mind how the program’s going to work, and you have to have a complete grasp of how the various pieces of the program work together. The finest pieces of software are those where one individual has a complete sense of exactly how the program works. To have that, you have to really love the program and concentrate on keeping it simple, to an incredible degree."

This line alone demonstrates he was a good coder.


Paul Allen has said that Bill's particular gift when it came to programming was the ability to find the way to get to the needed outcome in the absolute fewest lines of code necessary.


Sometimes that approach is good. And then sometimes, after a decade of misuse, it just leads to a nightmare of a codebase.


Yet, being first to the market has ensured he made billions.

The definition of 'useful' and 'beautiful' varies based on circumstances.


He was concerned about how C allowed such "vast" programs that the developers couldn't comprehend the whole systems (even referencing > 4 KB code!). This was in 1986. No wonder I have so much trouble comprehending a moderately-sized, modern application written in C++.


OK, so Bill had really modern thinking. I mean if you apply what he said to modern software design, it still stands mostly true.

So here's a question: given his understanding of team management, product design, etc., how the hell did Microsoft go so wrong and end up today with constantly rebranded products that do not appear to have one unified direction, but rather feel like they've been split in a billion different directions?

Did teams at MS get bigger? He did mention that it was important that in a team there were a few people who truly and fully understood the codebase. I wonder if that is even possible in any MS product nowadays.

Edit:

Some really great choice quotes:

>I like to think the whole program through at a design level before I sit down and write any of the code. And once I write the code, I like to go back and rewrite it entirely one time. The most important part of writing a program is designing the data structures. The second most important part is breaking the various code pieces down. Until you really get in there and write it out, you don’t have the keenest sense of what the common subroutines should be. The really great programs I’ve written have all been ones that I have thought about for a huge amount of time before I ever wrote them.

Bill is One of Us! I don't think I've ever read about him hacking before

> That refers to a program that molds itself to the user’s needs and the user’s interests over time. There are going to be more great word processors and spreadsheets, and we’ll use networking and graphics and new architectures

> Programmers just starting out today never had to squeeze, so it’s a little harder for them to get the right religion because they always think of resources as being immediately available. Ten years ago every programmer ran into resource limitations, so the older programmers are always thinking about those things.

I think we're still saying that today.

The interview was done in 1986, and he was talking about CD-ROMs, which took off roughly 8 to 10 years after this interview (end of 3.1 and beginning of Win95). They really did pursue the multimedia strategy well.


> how the hell did Microsoft go so wrong and end up today with constantly-rebranded products that does not appear to have one unified direction, but rather feel like it's been split to a billion different directions?

He left before all that started to happen

> Bill is One of Us! I don't think I've ever read about him hacking before

I believe Gates even "made fun" of Jobs saying that Steve Jobs didn't know how to program.


> how the hell did Microsoft go so wrong

Well, at some point the software becomes too big for one person to understand, or even a small team of wicked smart architects to understand.

Think about how complex Windows 8 is compared to MS-DOS. How much more it does, how much more hardware it supports. It's many orders of magnitude more complicated.

When a high-quality software program gets beyond a certain size, you by necessity start to lose the uniformly high quality that marked its earlier days.


I don't think I've ever read about him hacking before

Oh, you didn't hear about that time he implemented BASIC in 4k? http://www.theregister.co.uk/2001/05/15/could_bill_gates_wri...


Oh, I'm familiar with that. I'm also familiar with the pancake flip algos.

What I've never read about before was his attitude to hacking


  > Bill is One of Us! 
Yes! I used to rewrite programs entirely one time too :)


Scott Guthrie was a big fan of throwaway prototyping:

http://weblogs.asp.net/scottgu/archive/2006/11/19/podcasts-a...

Seems he picked it up from Bill.


This would be six years after he attempted to negotiate a three-way split of the computer marketplace with IBM and Intel (who refused his proposal):

In the autumn of 1980, Bill's unfamiliarity with his new purchase [of QDOS] didn't stop him proposing, in a story told by Stanford's John Wharton who was Intel's second point man for negotiations, a three way carve up of the market between IBM, Intel and Microsoft - then a company with 30 employees. If historians are to conclude that Bill "thought big", they'll be correct - but they may also conclude that he didn't always "think legally". Market carve ups are a violation of antitrust laws.

http://www.theregister.co.uk/2006/06/17/saint_bill/

That said: Gates's comment that core programming teams are generally 4-5 devs suggests a pretty strong constant through the industry. It's rare for any specific module to grow much beyond that -- where you do see more people working on a particular piece of code, it's typically either highly modularized (and module contributors are working on sub-pieces), or the architecture is already highly refined, and the process is largely one of providing bugfixes or additional features to an established base.


Market carve-ups are not an inherent violation of anti-trust laws. Neither is possessing a monopoly. Two extremely commonly held and mistaken beliefs.

Allow me to properly word it: market carve-ups, that cause consumer harm, and that the government chooses to prosecute and in which the government manages to prove consumer harm - are a violation of anti-trust laws.

If three companies carve up a market, benefit consumers, and the government chooses not to pursue - that is not a violation of anti-trust laws in any way shape or form. To make this point clearly, if the DOJ chose not to pursue Apple with anti-trust law regarding the ebook situation, what Apple did would not be considered a violation of anti-trust laws. Anti-trust laws are inherently very subjective, they're up to heavy interpretation on the government's side (and then further, what the government can demonstrate when it comes to harm).


This wasn't even a market carve-up. It was a proposal for IBM to single-source from Intel and Microsoft. That's perfectly legal for a new product that's starting out with 0% market share.

It was legal for Microsoft to single-source the CPU for the Surface from Nvidia. And it would have been similarly legal for IBM to single-source the OS for the PC from Microsoft.

What's more, Microsoft had 0% share of the OS market -- thus, no market power. And it had already agreed to produce BASIC for the IBM PC -- thus, no tying. Ipso facto, Microsoft could not possibly have been guilty of an antitrust violation in 1980 for offering to single-source DOS to IBM.

Only the Register could come up with an illogical conclusion like that. But then, the Register is so virulently anti-Microsoft that they could probably criticize Altair BASIC on antitrust grounds.


This wasn't even a market carve-up.

Having heard Wharton's story directly from him, I can assure you that what was being proposed, as told by Wharton, was in fact a three-way market carve-up.

• IBM would get business.

• Intel would get embedded devices (Wharton's own work was largely on an embedded controller that's used in automobiles, more units of it have been created than there are humans on Earth).

• Microsoft would get the hobbyist market ("home computing").

Your ad hominem on The Register (El Rag as I like to call it) doesn't address the fact that the source of the story is in fact John Wharton, and he has IIRC publicly discussed it elsewhere. I've got my own beefs with the Register (its irrational opposition to anthropogenic global warming comes to mind). And while long critical of Microsoft, that stance seems to have softened markedly in recent years. The stance of the site is pretty aggressively confrontational -- it's not "biting the hand that feeds IT" for nothing.


That's an interesting anecdote. But that still isn't an antitrust violation.

You need market power in order to restrain trade. Otherwise, you're just fantasizing about seizing control of a market without actually having the power to do so. For example, if you and I get together and divide the smartphone market 50/50 between the two of us, that would simply be laughable, not anti-competitive.

But if you do have market power, then your actions could actually be anti-competitive. If IBM had reached an agreement with, say, DEC to split the mainframe and minicomputer market between them, then that would have been an "unreasonable restraint of trade."

But Microsoft held 0% of the operating system market in 1980. In fact, in 1980, Microsoft had market power only in BASIC interpreters. As long as Microsoft didn't try to tie BASIC interpreters to some other product, it would've been very hard for Microsoft to violate antitrust law, even if it tried.

> Your ad hominem on The Register (El Rag as I like to call it) doesn't address the fact that the source of the story is in fact John Wharton, and he has IIRC publicly discussed it elsewhere.

An ad hominem is when someone claims that an argument is false simply because it originated from a certain source. That's not what I did. I attacked the Register after addressing the argument directly. That's not an ad hominem, that's just an attack on the Register.

P.S. Since the information came from John Wharton, it would be best if we could see what he actually said. Right now, Googling for "John Wharton IBM Microsoft Intel" gives only three relevant hits: two articles in the Register, and your original comment.


You need market power in order to restrain trade.

First off: that's not what the statute says. And again you've failed to provide any documentation for your claims (as adventured has also failed to do).

So I'll leave it there.

As for Wharton's making statements elsewhere: it's my recollection that he had. I haven't looked for them ... and no, don't see anything that's a specific match, though you can certainly place Wharton at Intel and having met with Gates while there.


> First off: that's not what the statute says. And again you've failed to provide any documentation for your claims (as adventured has also failed to do).

As I've already stated, it doesn't matter what the statute says. It matters what the courts have interpreted. See other thread: https://news.ycombinator.com/item?id=7188995

As for sources, see any textbook on antitrust law. Or just Google for the words "market power" and "restraint of trade." These are not controversial concepts in antitrust law. You don't need a citation to say that the sky is blue.

There are such things as per se violations, but the courts have chipped away at this concept. For example, vertical segmentation used to be a per se violation, but today it is subject to the rule of reason.

> As for Wharton's making statements elsewhere: it's my recollection that he had. I haven't looked for them ... and no, don't see anything that's a specific match, though you can certainly place Wharton at Intel and having met with Gates while there.

It's certainly ironic that you don't provide a citation for this, when you're so aggressive about demanding citations from other people.


Conspiring to create a monopoly, however, is.

One of the reasons Intel refused the offer was that Andy Grove had established a goal: that the company would seek a monopoly position, but would do so by legal means. It's a philosophy which didn't entirely survive his departure from the company; as you may recall, Intel reached a settlement with the DoJ over anticompetitive monopolistic practices (though in fairness, the company did settle, and fairly quickly).

The conspiracy and means Gates proposed, along with many, many other actions of Microsoft, did in fact constitute illegal monopolization, as Judge Thomas Penfield Jackson's findings of fact established: http://www.justice.gov/atr/cases/f3800/msjudgex.htm

As George Hoar, one of the authors of the Sherman Antitrust Act, stated: "...[a person] who merely by superior skill and intelligence...got the whole business because nobody could do it as well as he could was not a monopolist...(but was if) it involved something like the use of means which made it impossible for other persons to engage in fair competition."

And in particular, the statute itself states: "Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal."

So, while your hypothetical has certain merits, in the particulars of this situation, it rather markedly fails to apply, as established by both the relevant statute and legal findings.


Conspiracy to create a monopoly is also up to government discretion, ie purely subjective. It's only conspiracy if the Feds pursue it and prove it.

If I pursue a monopoly intentionally, the government can choose not to care about that fact at all. We can see the real nature of this subjectivism at play in how the Bush White House and the Clinton White House each dealt with Microsoft (and also in how IBM was dealt with across different presidencies). Selective enforcement, and selective consequences.

Ultimately a violation is what the government says it is, even with the case history. The ground rules for all of this have been constantly shifting for a century. They'll change from one DOJ to the next. What constitutes conspiracy? Ask the next DOJ, they'll have a different opinion than the last one.


Conspiracy to create a monopoly is ... purely subjective

You've asserted that twice. To the extent that a law not prosecuted is a law that doesn't exist, you're correct, however I've provided the statutory citation which states that "every" contract, etc., in restraint of trade, "is declared to be illegal".

I'm afraid the burden rests on you to provide some level of documentary proof of your assertions, which at present, stand on nothing.

http://blog.garrytan.com/grahams-hierarchy-of-disagreement-h...


> To the extent that a law not prosecuted is a law that doesn't exist, you're correct, however I've provided the statutory citation which states that "every" contract, etc., in restraint of trade, "is declared to be illegal".

Laws are not interpreted de novo, to mean whatever you think they mean. In our common-law system, laws are interpreted by the courts, over a long period of jurisprudence. In the case of the Sherman Act, we have over a hundred years of case history to guide us.

The courts say that it must be an "unreasonable" restraint of trade. After all, every contract is a restraint of trade! If a grocery store signs a contract with a farm to buy all of their lettuce, then the grocery store isn't going to the central produce market, where everyone has a chance to compete. If Apple signs an agreement with Samsung to source its ARM CPUs, that means that Nvidia doesn't get a chance to compete for the duration of the contract.

But that's completely ridiculous, right? They got the chance to compete for the contract. And that's exactly why it's not an antitrust violation. It's a restraint of trade, but it's very much reasonable.

Note that the word "unreasonable" doesn't actually appear in the Sherman Act. It just says "restraint of trade." The Courts added this test because they felt it would be counterproductive to interpret the Sherman Act literally. As you're doing.


Laws are not interpreted de novo

I'm not contesting that. Nor was I born yesterday.

I've asked you to provide references. You haven't.


> I've asked you to provide references. You haven't.

Google.

I even gave you the phrase to Google for: "unreasonable restraint of trade." In fact, add the term "every contract," and you'll find thousands of relevant hits, including Supreme Court cases going back a century.

If you don't like Googling, then I recommend picking up a good textbook on antitrust law -- "unreasonable" will be in the index. It's just such a fundamental concept to antitrust law. Rivers of ink have been spilled debating what kind of behavior is reasonable and what is unreasonable.

What you've just done is the antitrust equivalent of demanding a citation for the sky being blue. I give up. There's no way I can have a productive conversation about antitrust law with someone who aggressively demands citations for even the most basic of concepts. It shows that the primary motive is not to understand the topic at hand, but simply to argue.

P.S. Since you like to accuse people of violating the rules of debate, this isn't an "appeal to authority." It's a suggestion that you get a better background in antitrust law before arguing with people about it.


I even gave you the phrase to Google

Sure, I can Google. But if you're not talking out of your ass, then you've got specific legal gloss, case history, and decisions you can point to. You're not. Lazy or intellectually dishonest? Hard to tell.


> You have to simulate in your mind how the program’s going to work, and you have to have a complete grasp of how the various pieces of the program work together.

> You’ve got to have somebody who’s super smart. A great programmer thinks about the program on a constant basis, whether driving or eating. That method takes an incredible amount of mental energy.

> Before I sit down to code something, most of the instructions have already run through my head.

> Programming takes an incredible amount of energy, so most programmers are fairly young.

It may be a bit anachronistic to look at statements like these in the light of subsequent revolutions in software development such as XP and Agile. But I can't help thinking as I read this that he's doing it wrong. All the talk about using lots of mental energy to "have a complete grasp" of the program is fairly useless when faced with ever-increasing complexity and code base size. Test Driven Development and clean code organization go a long way towards breaking large problems down into easier-to-manage parts.


So instead of that 300-line function you'll need to have a complete grasp of thirty 10-line functions and how they connect to do what you want. Yes, that is perhaps an improvement, but his point still stands. Just because you isolate certain units does not mean that a mental model of the entire program flow is "fairly useless". The majority of the software that I use (that I know of) is developed outside of the Agile/XP "revolution", and it seems to be doing just fine.


Just be sure that each one of the 30 functions is doing something that is both interesting and obvious.

For example, if your language lacks a CopyTagedMatrix, then you just write one: it gets the correct size, allocates the memory if necessary, copies the elements, and returns the new matrix. Now you can use it without remembering the implementation details, and it's much clearer than the 10 lines that are necessary to get the bounds and the for loops to iterate over the matrix.

An extreme example is the GMP bignum library. You use each function as magic, and never read the implementation details. (Unless it’s in a very tight loop in a very hot part of the most important feature of your program, but I doubt that it’s good to use GMP in that place.)
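The helper idea above can be sketched in a few lines (Python used for illustration; the function name just echoes the hypothetical one in the comment):

```python
def copy_tagged_matrix(src):
    """Return a fresh copy of a 2-D list-of-lists matrix.

    Gets the correct size, allocates the new rows, and copies
    every element, so callers never see those details.
    """
    return [row[:] for row in src]

a = [[1, 2], [3, 4]]
b = copy_tagged_matrix(a)
b[0][0] = 99           # mutate the copy...
assert a[0][0] == 1    # ...the original is untouched
```

With that in place, call sites read as a single obvious verb instead of nested loops over bounds.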


Can someone have a complete grasp over the 15M lines of code in the Linux kernel?


I think his observation is relevant at any level of abstraction.

Gates had a complete sense in his head of how his program worked at the level of machine language. But not at a lower level of CPU microcode, or transistors. So even he, even then, was not claiming to have "total" understanding in the sense I think you mean.

Likewise if the Linux kernel (or your distributed web app) is organized into a reasonable set of "chunks", then it is possible to have usefully complete sense of how it works. The implication being that if it's not organized -- or organizable -- like that, you have a problem.


> Programs today get very fat; the enhancements tend to slow the program down because people put in special checks. When they want to add some feature, they’ll just stick in these checks without thinking about how they might slow the thing down. You have to have a programmer who knows the program inside out to guard against that.

This one struck me. Today the public methods of a class would check to make sure the inputs are valid. Back then the programmer would just know the location of every single caller and make sure the calls are always valid in the first place.

It's clear that the modern approach is a winner, especially with enterprise-size codebases, and really especially when developers churn every few years and nobody is around from when the code was originally written.

And of course Amazon is the ultimate success story of taking this idea to the extreme with a completely service-oriented architecture. Coding super defensively to minimal interfaces has been proven to work.


Keep in mind that Bill was talking about programming in assembly language for extremely resource-constrained systems there. Even today, if you're trying to write an extremely small/fast program/inner loop in assembly, you had better be relying on "dangerous assumptions" like that, or else the result is going to be worse than what a compiler could do.


Ideally you want your type system to enforce as many of these assumptions as possible, since when it can, you get both speed and safety.


> This one struck me. Today the public methods of a class would check to make sure the inputs are valid. Back then the programmer would just know the location of every single caller and make sure the calls are always valid in the first place.

Personally, I would use the second option even today, with the caveat that there should be few, if any, invalid inputs, what with type systems and OOP. If your method takes a string as a parameter, and can't handle every string that ever existed, then you really should be deserializing it into a data object of some kind, so that the compiler does the validation for you. If I'm developing some kind of API for public consumption, then I'll do input validation at the service boundary so that the consumer knows why it didn't work, but otherwise I'm a big proponent of the "just don't pass null" approach. Input validation on every function call bloats the code base, and is annoying to read through.
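A minimal sketch of that "deserialize into a data object" idea, in Python with a made-up `Email` type (the names are illustrative, not from any particular codebase):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    """A string that has been checked once, at the boundary."""
    value: str

    def __post_init__(self):
        if "@" not in self.value:
            raise ValueError(f"not an email address: {self.value!r}")

def send_welcome(address: Email) -> str:
    # No validation here: holding an Email value already proves it parsed.
    return f"Sending welcome mail to {address.value}"

addr = Email("ada@example.com")   # validated exactly once
print(send_welcome(addr))
```

Every function downstream of the boundary can then take `Email` instead of `str` and skip its own checks.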


For them special cases, you've got CTRL+ALT+DELETE. :-D


"INTERVIEWER:: Where do you see Microsoft in ten years?

GATES: Our goals are very simple. We’re going to create the software that puts a computer on every desk and in every home. I don’t know if that’ll take ten years–that’s not my expertise, guessing those exact time frames. Microsoft also wants to participate in helping to make sure those machines are good machines, building the system software into them, and then doing a lot of the important applications they’ll use."


This leads into a section that I find interesting, where he envisions every useful application on computers having to do with CD-ROMs and their vast storage space.

He doesn't mention anything at all about networking, which is a very true to the story of how the last 10 years of his tenure as a chief at Microsoft turned out.

Microsoft succeeded in what he stated, creating software for computers that almost every home used, indeed with software on CD-ROM. However, the applications he envisioned - catalogs, phone books, maps - actually turned out to run on the Internet, not local storage. The attempts to distribute things like catalogs on CD-ROM turned out to be cumbersome and expensive, and too difficult to keep updated. He mentions the vast storage space of CDs, but the effective storage space of a computer connected to the Internet dwarfs the capabilities of any local storage.


> He doesn't mention anything at all about networking ...

It was a blind spot for Gates and Microsoft in the 1980s. He wasn't alone, of course. People like Marshall McLuhan had laid out a vision for it years before, but Gates was really focused on software for devices and not the network. It wasn't until the late 1980s that Microsoft turned its attention to Novell, which was making a killing in PC networking. (Gates recruited Jim Allchin from network company Banyan in 1990.) By the mid-1990s there wasn't much left of Novell, but Microsoft's interest and strategy there was purely driven by business and not vision. That was clear when they were clobbered again by the Internet.


"There’s a lot of talk about how large software companies find it difficult to attract talented people who can produce great software, because these mavericks are so independent that they want to work on their own. "

Still appropriate today!


"INTERVIEWER: Does accumulating experience through the years necessarily make programming easier?

GATES: No. I think after the first three or four years, it’s pretty cast in concrete whether you’re a good programmer or not. After a few years, you may know more about managing large projects and personalities, but after three or four years, it’s clear what you’re going to be."

Out of curiosity, is there any experienced programmer that would dispute or agree with this statement?


He appears to answer a slightly different question to the one that was asked, but I agree with what he said in his answer. To be a really good programmer, you need to have a particular kind of brain, and if you do, you will pick everything up very quickly. If you don't, you will always have a weak grasp of some concepts no matter how much you are taught.

There are university lecturers who agree that it's clear within the first couple of weeks whether someone has the aptitude to study CS. I remember a paper that was talked about a few years ago that described a "double hump" in aptitude, clearly separating the two groups. They wrote a very short test that was a good predictor of success. There's an article on Coding Horror about it: http://www.codinghorror.com/blog/2006/07/separating-programm...


That is a fascinating paper referred to in the blog post.

I teach introductory chemistry, and for a long time, I've felt that there are students who are just incapable of making relatively simple mental models of the concepts that we discuss. To give an example, there are students who don't have a mental model of what density is.

I imagine that most people here have a good grasp of density and each of us has some mental model where the 'matter' in dense materials is "tightly packed" when compared to less dense materials. This isn't a perfect mental model, but it's reasonably accurate.

The students who don't/can't make these models view the entire course as a collection of arbitrary formulas that make absolutely no sense and require painstaking memorization. As a rule, they do poorly, unless they are exceptionally good at memorizing.


I've felt that there are students who are just incapable of making relatively simple mental models of the concepts that we discuss.

Just because this seems to be the case, I would only ask that you don't let this become a limiting belief that you project on your "incapable" students.

I only say so because I know I've taken various courses over the years (HS or college) where that sort of analogous big-picture communication you're talking about didn't really click for a few months or even years. When one is overwhelmed - or at least feeling overwhelmed - with the volume of new fundamental concepts to be grasped, it's akin to having 1,000 jigsaw pieces dumped on your coffee table. Oh and by the way, we're going to apply time pressure and grade you on your ability to figure out the big picture.

In that case, learning can turn into a fear-based "tread water because I'm drowning" activity where rote memorization is the fallback tool to get the learner to shore alive. The outcome is no longer the joy of understanding and meditating on those analogies/models to get the fundamentals to click, but trying to avoid death (err, a failing grade).

It wasn't until I revisited those certain topics later without the time-sensitive pressure of exams/grades that the broader models for understanding would "click" and all those rotely memorized fundamentals would fall into place - which is an indescribable joy.

So while your incapable students may not seem to grasp the big picture now, it's likely you are still making a difference by implanting those fundamentals which will snap together later on in life.


You make a fair point and my generalization was certainly too sweeping.

One of the things that I tell my students is that sometimes, if they stick around long enough, things will make sense.

That being said, I'm still at a loss on how to help the students who aren't making mental models that (to me) seem relatively fundamental. I try to use visuals and videos when applicable and when possible, but I would use other approaches if I could think of something that would be helpful.

Finally, I think/speculate there is a continuum of skill with mental model building. When I was a postdoc, I worked with people who had deep mental models of statistical mechanics that I just didn't have. So...I know what it feels like, but I don't know how to fix it.


The Camel Has Two Humps was not successfully replicated. The authors presented http://crpit.com/confpapers/CRPITV78Bornat.pdf in 2008:

"Two years ago we appeared to have discovered an exciting and enigmatic new predictor of success in a first programming course. We now report that after six experiments, involving more than 500 students at six institutions in three countries, the predictive effect of our test has failed to live up to that early promise."


The predictive test wasn't replicated - that doesn't say anything about the original observation.


Right, programming is just very, very hard to teach. From the paper:

> And even in the most encouraging of our results, we find a 50% success rate in those who don’t score C0 or CM2.


Do you have any idea if there's any way for someone to find and take that test?


If you look at the bottom of Dehnadi's page (http://www.eis.mdx.ac.uk/research/PhDArea/saeed/), there are links to the test script, answer sheet, marking guide etc.


Is it just me or are the answers to the first five questions wrong on the answer key?


That sentence stuck out for me too. In my 15 years of solo programming experience it's not true: some things have gotten easier as I've gotten better and tackled more challenging problems. I don't think I stopped improving as a programmer when I turned 19.

But in my 5 years of working in an office with other coders and maintaining other people's code, it seems true enough. Although it can vary by project, most salaried programmers seem to be either good, because they are engaged and have made a habit of improvement, or poor-to-adequate, because they are disengaged and habitually disinterested to the point where experience doesn't even sink in.


> I don't think I stopped improving as a programmer when I turned 19.

But that's not what he said. He said that after 3 years it's going to be clear whether you're going to be good or not. That doesn't exclude learning.


In retrospect it looks that way. I have better tools at my disposal now, but I don't feel like I'm any better a programmer than I was five years ago - and in some ways I feel like a worse programmer than I was ten years ago.

But I don't know how fair those memories are. Presumably companies have a reason for paying more for experience...


"We’re no longer in the days where every program is super well crafted. But at the heart of the programs that make it to the top, you’ll find that the key internal code was done by a few people who really knew what they were doing."


"You’re seeing a lot more cases where people can afford to use C, instead of using assembly language"

God bless Intel Core i7 CPUs that allow me to use JIT'd, GC'd languages without a second thought.


My favorite quote: "the best way to prepare is to write programs, and to study great programs that other people have written. In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system."


> The old rule used to be that a manager of a programmer was always a better programmer, and there were no what we called “technical inversions,” where a programmer works for somebody who doesn’t know how to program. We still follow that philosophy: At certain levels we’ve got business managers, but we don’t have non-programmers really managing programming projects.

Holy shit this is amazing


"In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system."


Really wonder what others think about this line: "INTERVIEWER: What do you consider the most difficult part of computer programming? GATES: The hardest part is deciding what the algorithms are, and then simplifying them as much as you can."


Everyone knows the two hardest things are cache invalidation, naming things and off-by-one errors.


Does anyone know of any other Bill Gates interviews of this quality from, say, 1996? I suspect that one doesn't exist, but I'd love to hear what sort of lessons he learned from ten more years of running Microsoft as the company got larger and he moved "up the stack".

It's clear, though, he still paid attention to details:

1991: Product review for Excel Visual Basic - http://www.joelonsoftware.com/items/2006/06/16.html

2003: Critique of Windows usability - http://blog.seattlepi.com/microsoft/2008/06/24/full-text-an-... (the article emphasises the drama, calling it an "epic rant" and shows how Gates "isn’t immune to the frustrations of everyday computer users", but that's just some journalism bullshit. What's more interesting is that he still has a wicked attention to detail, and you still see that he's the same person that gave the interview in the original post.)


> What's more interesting is that he still has a wicked attention to detail, and you still see that he's the same person that gave the interview in the original post.)

I remember seeing this memo back when it originally came out. If I recall correctly, the problem was not actually fixed after Bill Gates complained about it. The replies to his email just pointed fingers at each other.

Thus, all the talk of Microsoft needing a Steve Jobs are off the mark. Microsoft already had a Steve Jobs -- he was named Bill Gates. For whatever reason, even he couldn't polish out the flaws. There's something in the culture of the organization that prevents attention to fit-and-finish.

For example: Bill Gates complains about the C:\Documents and Settings\billg\My Documents\My Pictures folder. This was in 2003. OK, so it got fixed in Vista. But why couldn't he get this fixed for XP? I doubt he was happy with it when he ran the XP betas.

My pet peeve: Explorer gives an uninformative "Try Again" dialog when I try to delete an open file. Why doesn't it tell me which program is holding the lock? Microsoft bought Sysinternals eight years ago! They just need to integrate the code from the handles utility, and add a "Switch to" or "End task" button to make it easier to close the program.


Agreed, but to become as good a consumer company as Apple, Microsoft would have needed a Steve Jobs (someone more dedicated to focus and aesthetics of form and function). It's not clear Gates wanted to be the same sort of consumer company. I'm sure he'd have taken it if he could've gotten it, but probably not at the expense of sales of site licenses to businesses.

Thanks for the pointer to handle! I've got the sysinternals suite but never bothered to explore much beyond Process Monitor.


> Thanks for the pointer to handle!

You're welcome, it's a useful tool when you just can't seem to delete a file. Although you're right -- it's called handle (singular), not handles (plural).

Actually, I think I do this all the time. I type handles, get an error message, and then think, "Maybe it's just handle." After all, multiple programs can have a file open at the same time, and the handle utility can actually list more than one program. So why isn't the name of the utility plural?!

Yet another fit-and-finish issue in a Microsoft product ...


Oh, for what it's worth, I italicised the name because it's a work, and I guess I do that for programs in addition to books, not to point out a correct spelling. Pointing out spelling mistakes and grammar errors on the internet seems needlessly rude. :)


"one of those people has to have the proven ability to really absorb a program. And when that lead person is uncertain about something, he or she should be able discuss it with even more experienced programmers."

I've always thought as the lead, you're supposed to be the most experienced programmer.


If experience was correlated with skills, maybe.


Interesting.

"...our present thinking is that we won’t have to increase the size of our development groups, because we’ll simply be making programs that sell in larger quantities. We can get a very large amount of software revenue and still keep the company not dramatically larger than what we have today."


And to think what Microsoft does today, with only 160 programmers and 100,722 salesmen.

Of course, 640k... I kid BillG.


This crowd either lacks a sense of humor or any tolerance of bad jokes. Or both.


Very interesting historical anecdotes.

The following shows his move to management: No, I don’t. I still help design algorithms and basic approaches, and sometimes I look at code. But since I worked on the IBM PC BASIC and the Model 100, I haven’t had a chance to actually create a program myself.

The following shows pre-agile thinking on development: I like to think the whole program through at a design level before I sit down and write any of the code. And once I write the code, I like to go back and rewrite it entirely one time. The most important part of writing a program is designing the data structures. The second most important part is breaking the various code pieces down. Until you really get in there and write it out, you don’t have the keenest sense of what the common subroutines should be. The really great programs I’ve written have all been ones that I have thought about for a huge amount of time before I ever wrote them. I wrote a BASIC interpreter for a minicomputer in high school. I made massive mistakes in that program, and then I got to look at some other BASIC interpreters. So by the time I sat down to do Microsoft BASIC in 1975, it wasn’t a question of whether I could write the program, but rather a question of whether I could squeeze it into 4K and make it super fast. I was on edge the whole time thinking, “Will this thing be fast enough? Will somebody come along and do it faster?”

It was a very different time!


> The following shows pre-agile thinking on development

Oh come on! When he says "until you really get in there and write it out...", that shows a lot of agile thinking. Agile does not in any way exclude design before writing code! He also talks about rewriting (refactoring).

It wasn't that different of a time!


What struck me was the way in which Microsoft's coding hammer was so sophisticated that it viewed the design issue of accessing vast amounts of information by ordinary people as primarily a software problem (the interactive CD Rom) rather than a bandwidth problem. Though in fairness nobody was betting on the creation of the commercial internet at the time of the interview.

Gates and Microsoft saw the future and placed an early bet, much as they did with embedded and mobile Windows in the 1990s. In both cases, their timely early solution was made obsolescent when communication infrastructure scaled and bandwidth was further commodified, first over wire, then over wireless.

In the first case, Microsoft underestimated their success. Computers in every home made providing cheap bandwidth profitable. With wireless, they simply had more than a decade of designs intended to minimize bandwidth consumption rather than actively encourage it. The idea of selling information about their customers [and Microsoft has always viewed users as customers rather than consumers, though that could change] just wasn't consistent with their pre-googlified internet-age ethics.


Since we are all adding quotes I'd like to add this one...

"No great programmer is sitting there saying, “I’m going to make a bunch of money,” or, “I’m going to sell a hundred thousand copies.” Because that kind of thought gives you no guidance about the problems."

Amen brother.

Too many times, especially here on HN, developers create something to make the big bucks. Usually they find little shortcuts for the things they want. The final product ends up sacrificing the mission as well as the quality.

I like the adage wise people give to enthusiastic people who want to "start something".

When starting something, be it a business, learning a skill, etc, make sure you're doing it for you.

Bill, I feel, hit that on the head with BASIC. He saw a problem, and worked on it for himself. Others liked it and started using it. Reminds me of David Heinemeier Hansson's talk on why he built Ruby on Rails. It was a means to an end that ended up helping him and others in the process. http://bigthink.com/users/davidheinemeierhansson


I've been playing with CP/M recently (on an emulator). It strikes me that Microsoft had a very clear vision at this early stage: it was to port all of the popular (to businesses) large computer languages to micros. Hence we have not only BASIC, but very complete implementations of Fortran and COBOL for 8080/Z-80. I wonder if there is a quote from Bill Gates about this?


This is a great, great article. It gives you insight and makes you realize how intelligent he really is. Compared to Steve Jobs, I would bet Gates' IQ is a full 10-15 points higher. However, what Jobs lacked in intelligence he more than made up for with charisma and ability to relate to people. Highly enjoyable read.


Nice picture of a young and sprightly looking Bill at the top of the page... looks like it was created on MacPaint.


Nice, you just dissed the father of someone in this thread!

https://news.ycombinator.com/item?id=7187951


Wow, much coincidence...

I'll go and hide in the corner while all onlooking HN readers facepalm.


I thought the same thing - I think it's a limitation of the digitization of the original image.


It was en vogue at the time. I used to read MacWorld back then, and saw lots of Mac-like representations of real-world things on its pages.

Kind of like today where many images try to gain cachet by referencing iPhone/smartphone (look at our web-page-as-an-App).


This once again makes it clear that Bill Gates, although incredibly smart and successful, was never a visionary. Notice how he expects an immediate trend towards learning computers, that adapt to the user by learning from their behavior and a set of rules. Although an enticing idea this resulted in Microsoft Bob and probably not much else.

Also remarkable is the absence of any mention of the internet, although it was quite developed and capable already in 1986 - Microsoft famously missed the boat on this until the 'Cornell is wired' memo.

Finally, in every question about how Microsoft decides what to build, he is very vague and hardly ever talks about actual users. We now know that the main methodology employed by Microsoft was to simply rip off existing successful products and bundle them with Windows.


> This once again makes it clear that Bill Gates, although incredibly smart and successful, was never a visionary.

He became successful because he was a visionary.

He saw in 1978 that the future was in software, not in hardware (Apple went the other way and ended up in a hole for two decades because of that mistake). Then he saw the future of licensing software and bootstrapped Microsoft based on it.

Then he saw the importance of operating systems and built the company around it.

Then he saw the importance of personal computing with Word and Excel.

You can say a lot of things about Bill Gates but saying he was not a visionary is absurd.


Another great interview, from the same time by the guy who created Pac Man: http://programmersatwork.wordpress.com/toru-iwatani-1986-pac...


INTERVIEWER: What do you consider your greatest achievement ever in programming?

I thought for sure it was going to be DONKEY.BAS:

http://www.youtube.com/watch?v=kymzTlqi1SY


None of the web developers I've interviewed in the States was able to implement a sorting algorithm of their choice. The one developer who got close to accomplishing this used min() on array_slice() in PHP.
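For reference, an exercise of the kind described takes only a few lines; here is one arbitrary choice, an insertion sort sketched in Python:

```python
def insertion_sort(items):
    """Sort a list in place using insertion sort; also return it."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements right to open a slot for key.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # → [1, 2, 3, 4, 5, 6]
```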


That's not to say that all web developers can't do that. I can. It's not that tough.

I do honestly wonder what it is they teach in college. I took two years (on and off) at a community college and was able to do very well for myself. But I have seen many "developers" who had impressive four-year degrees on their resumes and couldn't create an algorithm that removed duplicate records from a record set. Lol, some couldn't even tell me what their favorite text editor was ("All of them" ~Palin).
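The duplicate-removal exercise mentioned above is similarly small; a Python sketch (assuming records are hashable, e.g. tuples):

```python
def dedupe(records):
    """Return records with duplicates dropped, keeping the first
    occurrence of each and preserving the original order."""
    seen = set()
    unique = []
    for rec in records:
        if rec not in seen:
            seen.add(rec)
            unique.append(rec)
    return unique

rows = [("ada", 1), ("bob", 2), ("ada", 1)]
print(dedupe(rows))  # → [('ada', 1), ('bob', 2)]
```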

I'm not sure what to tell you, kolev.

For me, it has come to the point where asking trivial questions during interviews just doesn't work. You truly know what a dev can do when a dev develops.

In my shop, we work on classic ASP and ASP.NET web applications. I'm currently creating a package that will address the common building and troubleshooting scenarios pertaining to those frameworks. If an applicant cannot complete the task satisfactorily, then they don't belong here.

Good developers are out there. But you have to make a decision: "Am I willing to pay a premium for them, or see if I can just get lucky?"


When did I say "all"? I shared my personal experience interviewing web developers in Southern California. And, trust me, it's a good sample size as I've interviewed tens over the years! Also, the HN crowd consists of probably the best software engineers in the industry and that's why HN replaced a lot of aggregated blog feeds and the comments are especially valuable! Maybe I should've clarified that my observation certainly does not apply to the members of this community! :)


Around here, web development appears to be the new window-cleaning.

It is a shame.


A lot of the PHP coders (I purposely am using "coder") are actually designers who learned a bit of PHP here and there and just want to make more money.


Great idea! I would struggle moving from coding to design. I wonder if it is as difficult a step going from designer to coder?


"We’re no longer in the days where every program is super well crafted. "

Wow. This statement is epic in the context of Microsoft today. Seems like everything they put out is feature rich but overly complicated.


I didn't realize that he had stopped personally programming so early.


I think that is part of the price you pay when company you launched turns so successful. I doubt Zuckerberg codes either.


Yes, imagine if you wrote part of a product, your company maintains three branches, and in the middle of the deadline to release a new version they call you to fix some bugs and merge the code into all the branches.


I think he does sometimes. He was involved in the Poke app I think.

It sounds like a big tradeoff to me. I'd really miss coding.


Zuck says he hasn't coded since 2006


He didn't stop personally, just professionally. In the 1990s Gates would write articles about various personal programming projects -- the last one I can recall was in Java. (This was back when Java was the next big thing for MS, so certainly there was a marketing aspect to this too.) IIRC it was about home automation.


I don't consider that very early, but this is purely my opinion obviously. He was around 29 years old when he stopped, and had been programming for 15 or 16 years (he programmed for the first eight +/- years of Microsoft's existence, a long time for the head of a successful company to be writing code).


I have no perspective on what it's like to run a company, and while it shouldn't really come as that big of a surprise to me, it still does. This and Zuckerberg stopping around 2006 (according to the other comment) just come as a bit of a surprise, I guess.


INTERVIEWER: Where do the ideas for programs come from?

GATES: Well, there’s no formal process, that’s for sure. At Microsoft, there’s usually a brainstorming session at night or on the weekends. Everybody has a general idea, like, we want to do the world’s best word processor. ....

=> Well now, we have found a better way and companies are internally finding great ideas by doing hackathons!


For my money the more interesting 1986 PAW interview is linked on the sidebar, the one with Jaron Lanier, who I remember interviewing around then. Not a lot about the real work of programming, which Gates talks about like the automaton we know he was at that time, but many many insightful ideas about what programming means.


"I thought we could save a few bytes and tighten things up. We just tuned the program very, very carefully, and ended up with a 4K BASIC interpreter. When you know a program that well, you feel that nobody can look at the code and say, “There’s a better way to do this.” That feeling’s really nice"

so sad: from that to... Windows


Somewhat related - I highly recommend checking out the books Coders at Work and Founders at Work.


GATES: "... We knew what programmers wanted, because we’re programmers. So we wrote BASIC."

Nuff said.


"One way is to have small project teams, typically four or five people, and one of those people has to have the proven ability to really absorb a program.."

Genesis of the Program Manager @MS? I didn't think it was that old!


So quaint, talking about "programs" instead of "applications".


Wonder if anyone has access to those texts from John Norton he refers to?


I love this quote: "You’re seeing a lot more cases where people can afford to use C, instead of using assembly language.". Yes. Definitely 1986.


I stopped reading when I got to this:

"If you’re a great programmer, you make all the routines depend on each other, so little mistakes can really hurt you."

I know things were different back then, and Bill did not have the luxury of building on the wisdom learned over decades of experience, but 'separation of concerns' and 'loose coupling' have been central to every design decision I've made. It's hard to conceive of how one would believe that a tightly coupled, brittle design was a good one.


In assembly language, a sub-procedure can save the state of all the registers it will use before using them and then set the state back when it is done (push register values on the stack, do work using those registers, and then pop the state back). This is the safe thing to do, but it wasn't the best way to write code back then. People would use unsafe procedures instead to reduce code size. With that said, I think Bill was talking about the fact that if the program doesn't repeat itself (by using procedures to abstract common idioms), then a mistake would be amplified since it would cause problems for all call sites.
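For readers who haven't written assembly, here's a purely illustrative sketch (hypothetical x86-style syntax, not from the interview) contrasting the "safe" register-preserving convention with the space-saving shortcut described above:

```asm
; Safe: the subroutine preserves every register it touches,
; so callers never need to know which ones it uses.
safe_routine:
        push    bx              ; save caller's BX on the stack
        push    cx              ; save caller's CX
        ; ... do the work, freely using BX and CX ...
        pop     cx              ; restore CX
        pop     bx              ; restore BX
        ret                     ; caller's registers are intact

; Unsafe but smaller: clobber registers freely and rely on every
; caller knowing (by convention) that BX and CX are trashed.
tight_routine:
        ; ... same work, trashing BX and CX ...
        ret                     ; the push/pop bytes are saved
```

In a 4K interpreter, shaving those push/pop bytes from dozens of routines added up, which is why the "unsafe" style was routine at the time, even though every caller now depends on intimate knowledge of every callee.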


Yeah, I could see interpreting this quote that way. However, if taken literally it implies a very intertwined world. It's probably not fair to dissect this so much, though.


It was a different time. Intertwining your world made sense if it could squeeze a BASIC interpreter into 4k. Now my laptop has 8000000k of memory.


> but 'separation of concerns' and 'loose coupling' have been central to every design decision I've made.

I'm guessing no design decision you've made has ever involved assembly language. Bill Gates obviously knows a little bit more than you on this topic.


I'm not claiming to know more than Bill. To backtrack a bit, there are other concerns more primary than these in certain situations (possibly a lot of situations). However, Bill's comment was not about assembly, it was about what 'great' programmers do. Even still, as skittles pointed out, I might have misinterpreted Bill's statement anyway.


"You’re seeing a lot more cases where people can afford to use C, instead of using assembly language."


"The PDP operating system."

Unix?


I'm pretty sure he's talking RSX11, VMS, and VAXELN. I believe when Gates says "Everybody who wrote the PDP operating system influenced me" the everybody he's talking about is Dave Cutler and friends.

You know, the people he hired to write Windows NT.


Thanks Bill Gates for saying what I always thought...

"choose features to put into programs. To do that, you have to have a reasonable understanding of what’s easy and what’s not easy to do."

For those Product/UX/Tone Goons who can't actually build....

Also this gem

"and there were no what we called “technical inversions,” where a programmer works for somebody who doesn’t know how to program. "

"We still follow that philosophy: At certain levels we’ve got business managers, but we don’t have non-programmers really managing programming projects."

Bill knew it back in 86... we still do shit like that.


Today at Microsoft, not many managers actively write code. They did "back in the day," but not any more. Many don't even have Visual Studio installed on their primary work machine. I've seen tons working off a Surface with their only actively used apps being the browser and Outlook. In fact, the higher up you go, the less likely you are to find anyone who has written any code in any reasonable period, like the past 2 years. It is not uncommon for a developer who is starting to feel it's hard to keep up with coding to aggressively ask for a management position. I would wager that more than 70% of the managers are ex-developers who no longer want to deal with the complexity or responsibility of writing code.


I don't think by "non-programmers" he meant "people who don't actively program".

I think he meant "people that don't even know how to program".

In this article Bill Gates says in the first line that he no longer writes code. But clearly he would not classify as a "non-programmer".

Today's managers can still provide valuable input (e.g. algorithm design - like Gates did when this article was written) even if they don't code day to day.

I'm a program manager at Microsoft, so I don't actually write code. That said, I majored in CS and know my stuff pretty well. I don't consider myself a "non-programmer". Still, I try to increase my credibility among "my" devs by doing projects outside of work. This also helps me keep up with the times.


That's a good idea. I used to be under a project manager who had never written code and was not technical in the slightest, at a software company. It was a frustrating exercise, as he glazed over when you were explaining things to him, and would leave design meetings because he didn't understand what was going on. The questions were mainly "when will that be done by?" rather than trying to understand why there were difficulties.

Having said that, the company seemed to do alright, albeit with massive slippage on deadlines.


I agree, but you have to remember MS was an engineering-focused company. At Apple it was often the other way around.


Are they still engineering focused, in the same vein as Google, for example? The stories I read point to them being top-heavy with middle-management bureaucracy.


Reading this article made me think that Bill Gates might be The Archibald Putt.


Great post! It's lengthy but I had a great time reading! Thanks for sharing!


This interview is part of a really great book. The site probably has all the interviews.

https://www.goodreads.com/book/show/2092682.Programmers_at_W...


Would you know/recommend any books from the early Microsoft era? I enjoyed Masters of Doom and would love to know if there is anything which goes into that much detail of the early days of Microsoft, actually any company would be interesting.


Three I'm familiar with, all recommended:

'Gates', Stephen Manes [1]

'Show Stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft', G.P. Zachary [2]

'Microsoft in the Mirror', Karin Carter [3]

But I agree with the parent that 'Programmers At Work' is a great book.

[1] http://www.amazon.com/Gates-Microsofts-Reinvented-Industry-H...

[2] http://www.amazon.com/Show-Stopper-Breakneck-Generation-Micr...

[3] http://www.amazon.com/Microsoft-Mirror-Nineteen-Insiders-Exp...


Thank you very much, I'll have a look at those 3. Completely agree about the parent book, it does seem very interesting. Although potentially only available in the UK as an import.


I couldn't recommend "Show Stopper!" enough. Wonderful book!



