Hacker News
Torvalds: Standards are paper. I use paper to wipe my butt every day. (redhat.com)
330 points by statictype on March 26, 2011 | 175 comments


That's a deceptive title. The real point is more important (but less sensational):

Reality is what matters. When glibc changed memcpy, it created problems. Saying "not my problem" is irresponsible when it hurts users. 


That is something I don't like about Linus. He's so aggressive when he speaks; it's insulting. Explaining to the guy the right attitude to have toward backward compatibility and the "not my problem" mindset is totally great. However, doing it in such a harsh tone is just plainly aggressive and useless. In fact, I'd argue it's even less than useless: if someone talked to me like that, I'd be even less inclined to try to help. Add to this the fact that Linus, being well known and respected, simply demolished him. It's as if I kicked my little 12-year-old cousin's ass in front of everyone to make him understand that what he did wasn't right. Good job, Linus. I feel like posting on this forum now.


Growing a thick skin and dealing with a curt reply allows information to flow faster. All Linus is doing here is setting the bar. Doing it in flowery everyone-wins language is more time-consuming and the longer you communicate on the web, the more you realise that frequently it's just not worth the time drain. The attitude Linus is commenting on is something he's known for railing against for a very long time.

If you're trying to make a world-class product, you can't afford the time to wrap all comms with your developers in niceties - you need to define where you stand and what's expected in solid terms.

Also: given his thorough explanations that go along with the chiding, I'd much rather be on a forum that has that kind of poster.


No one's saying Linus should be -passive- about his point, just not hostile. Here is a shortened, edited version of his post without the off-putting aggression:

"The user doesn't care.

Pushing the blame around doesn't help anybody. The only thing that helps is Fedora being helpful, not being obstinate.

Also, the fact is, that from a Q&A standpoint, a memcpy() that "just does the right thing" is simply _better_. Quoting standards is just stupid, when there's two simple choices: "it works" or "it doesn't work because bugs happen".

Reality is what matters. When glibc changed memcpy, it created problems. Saying "not my problem" is irresponsible when it hurts users.

And pointing fingers at Adobe and blaming them for creating bad software is _doubly_ irresponsible if you are then not willing to set a higher standard for your own project. And "not my problem" is not a higher standard.

So please just fix it.

The easy and technically nice solution is to just say "we'll alias memcpy to memmove - good software should never notice, and it helps bad software and a known problem"."

Not being an ass isn't difficult with practice, and it can even save time when communicating.


That's just it. He's not being particularly hostile. He's being blunt.

Go back and look at the thread. Linus isn't wading in with abuse. He's a solid part of the discussion from the start (I stopped counting his responses when I got to 7 in the first 50), and at least for those first 50 he's as polite as everyone here seems to demand, explaining both his point about the function and the user-doesn't-care view, repeatedly and politely - just as all the naysayers in this thread are demanding of him.

Heaven forfend Linus actually show frustration at having to repeat himself again and again, like an ordinary human would!

Molehill, meet mountain.


I don't think that's a good enough reason for interacting with the world as though you have Asperger's. The technical excellence of Linux may offset the problems caused by such an abrasive approach to interpersonal relationships, but that doesn't make it OK.


Ah, so it's not okay to act like Linus did, but it is okay to throw out Asperger's like it's an insult? Your morals around civility are curiously labile.

For the record, Linus's behaviour is not typical of Asperger's Syndrome.


You've got to understand that it is very hard for highly technically proficient people to be nice to everyone. For someone at Linus's level, a LOT of time is spent answering what seems like the worst kind of stupid questions.

This causes massive compassion fatigue. Take that into account, and it becomes clear that Linus is not as big an asshole as you think.


You've got to understand that it is very hard for highly technically proficient people to be nice to everyone.

Of all the arguments for being brusque on the web, this is the one I find most unappealing... as well as the least applicable for the matter at hand.

As far as I can tell, there's nothing about Linus' reply that depended on his admittedly high level of technical competence. In fact, it seemed he was saying "don't rely on being technically correct, consider the consequences"... in a way that got people's attention.

"I'm smart, so it's OK to be an asshole" is about as shitty an attitude as "not my problem, I'm just going by specs" - in fact, it's kind of similar. ... I bet all the up-voters thought they were the "highly technically proficient people"!


I doubt that it has anything to do with technical proficiency. People like Steve Wozniak, Alan Kay, and Tim Berners-Lee don't have reputations for being assholes.

In general, sites like Hacker News or Stack Overflow wouldn't work at all if smart people couldn't handle answering basic questions without being jerks.


For every non-asshole, you can find an asshole with technical proficiency.

For every Steve Wozniak there is a Steve Jobs.


You're incorrectly implying that Steve Jobs is technically proficient. He's an electronics technician and a cult leader, not a hacker.


I've known people to compensate for their technical shortcomings by playing nice to everyone. Usually engineers who are engineers only by virtue of a degree, don't have the passion for it, and are just in it for the money. The asshole ones are the type who don't compromise just to be agreeable.


You may have posted this in the wrong subthread; we were talking about Steve Wozniak, who's toward the top of the 99th percentile both for niceness and engineering passion, and Steve Jobs, who's such an asshole he cheated Woz on contracts they were doing together and whose engineering skills are limited to stuffing circuit boards.


"it is very hard for highly technically proficient people to be nice to everyone"

Being so damn smart does not give them the right to be condescending and to insult people who can't keep up. So I'm not as smart as he is; does that make me worthless? Does that give him the right to call me stupid? What if I am doing my best? If my best isn't really helping, then they should simply ignore me, not attack me. Nothing gives anybody the right to insult people.

It's the guiding principle here on HN: be civil. Just as they polish their technical abilities, being nice is something they should work on.


I remember this bug report. Since October 2010, Linus has posted a "me too" reply, then helped pinpoint the issue, then explained why shifting the blame to Adobe is wrong, then provided a workaround for everyone, then ran a benchmark proving the new memcpy code is crap, then helped people having issues with his workaround, and then finally got aggressive with Andre Robatino, whose previous comment was simply daft.


I read Linus' response as directed at people in the thread in general in addition to the previous poster. The "not my problem" attitude is pretty common in the world. In OSS, it seems even more important to be flexible in order to achieve something good for an end user.

How should Linus have addressed the other guy?


He wasn't insulting. Absence of ass kissing isn't an insult.

>In fact, I'd argue it's even less than useless: if someone talked to me like that, I'd be even less inclined to try to help.

you sound like you're an ass with an overgrown ego.

In general, one chooses whether to get things done or to waste time gently handling the egos of people who can't control them. It is a great boon for humanity that Linus has been choosing the former.


You can make your point without ass kissing or insulting/being overly aggressive.

See, in your post, the "you're an ass with an overgrown ego" doesn't help your point at all; it's just provocative and comes across as aggressive.

Try removing that phrase from your post and you'll see that there's no difference at all. In fact, you would have wasted a little bit less of everyone's time. (See how easy it is to be aggressive and provocative, even though it wins nothing in the end.)

Anyway, if I understand your post, you are saying that either you get things done or you kiss people's egos. First, that is plainly wrong. You can be productive without being an ass. Secondly, not being aggressive is not the same thing as ass kissing.

It's like fighting to win a battle versus a war. By being aggressive, you fight to win that small battle... but if the contributor gets pissed enough and decides not to work with some asses, everybody loses.


>You can make your point without ass kissing or insulting/being overly aggressive.

That's exactly what Linus did.

>you are saying that either you get things done or you kiss people's egos. First, that is plainly wrong. You can be productive without being an ass.

if you analyze the logic of your statement, you'd find that it puts "not kissing people's egos" as equivalent to "being an ass". We're back to square one.


This is the most relevant quote, and it was unnecessary for getting his point across:

Quite frankly, I find your attitude to be annoying and downright stupid.

He was being insulting and unnecessarily aggressive.

Post #128 was stating logic and facts. Linus responded with logic and facts also, but before he did so, he stated the insult. Unnecessary, and it added nothing to the debate except emotional provocation.

If someone else doesn't get it, he can say, "This is my point, I hope it makes sense, if it doesn't, I don't know what else I can do to make you understand me." Something like that. And then just stop replying. Instead, he turns it into a pissing match.


This statement:

> you sound like you're an ass with overgrown ego.

stands in direct contradiction to this expressed desire:

> get things done [and not] waste ... time

because what you think the other person sounds like is not relevant.

This:

> Absence of ass kissing isn't an insult.

is useless for a different reason. If you are a manager, which Torvalds is, you need to be able to keep the people you're managing in a usable state. This is only very rarely done with brusqueness.


That is something I don't like about Linus. He's so aggressive when he speaks; it's insulting.

It might also be a European thing. We are more direct. Less talking around the issue at hand.


Perhaps not British, though? A friend of mine (Australian) works in a company with offices in the UK and US selling education/health website software. In Aus and the US, business works as expected: you go in, make your case, get a yes/no answer, and you're golden. In the UK they were finding that a great many places would agree to buy and then default at the next step.

They eventually learned to do the initial sell in a much more sideways manner, as when confronted with 'do you want this or not?', British clients tended to just say yes to get them out of the office - even if they had zero intention of taking the product. It caused a lot of confusion until that was identified.


It depends. For example, Swedes are very talkative, but Finns not in the least. Finns are very direct and generally do not engage in small talk. Linus comes from the Swedish-speaking minority of Finland, so I guess he is more talkative than the average Finn. An extensive stay abroad helps too. Still, I wouldn't be surprised if he slightly hated twaddling :)


I absolutely agree that Linus is really aggressive in this example. But in this case, I think it's warranted because the general point he's making is so important: if Linux is to move forward as an OS, this attitude of "It's technically wrong, we don't care who we break" needs to die - it's helping nobody.


Actually, that's what I like about Linus. I find it humorous, but I also appreciate a "no bullshit", direct approach. He has it in spades, and he backed up his assertions with hard data.


It's true that I don't find myself particularly inclined to post on lkml — even though people were polite to me there when I was being kind of an ignorant asshole, I feel a bit intimidated by the brusqueness of the inner circle.

But I don't think I'm really in a good position to advise Linus on how to manage an open-source development community. Linux seems to be running at a high level of average competency despite the occasional hostility. Who am I to say that the hostility is dispensable? I'd like to believe it is, but I want to see an existence proof.


Both are bad, but I prefer Linus's up-front aggression to the passive aggressive tone of the person he is replying to.


A spec is a contract between programmers, and in the long run it's better (for users and programmers) to follow specs and expect others to follow them, rather than to let others break them willy-nilly and just bend over backwards to accommodate.

Oh, but I guess since this point requires actual thinking to understand, it's not in the realm of reality...


I understand your point, and the purist programmer in me agrees, but maintaining compatibility in the face of horribly broken software is how Microsoft got on top with Windows. If you've ever spent time reading The Old New Thing ( http://blogs.msdn.com/b/oldnewthing/ ) you'd realize the lengths MS went to to make sure that even when they improved things, they didn't break bad software, and how much that meant for adoption of the platform.

Seems to me kind of an important thing to recognize in this kind of discussion.


Thanks for the thoughtful response.

I sort of agree with you but I think what you're saying applies to a different context.

As the person who was rudely lambasted by Linus pointed out, Fedora is not there to "get on top." It shouldn't be trying to go by the same metrics that Windows goes by. And this isn't an issue of supporting legacy software; if Fedora makes a particular decision, Adobe can just update their software to follow suit.


> Fedora is not there to "get on top."

What's the goal of Fedora? To satisfy intellectual purity, or to make sure software runs for its end users?


Honestly, isn't it more for the former? I mean, nobody pays for Fedora. AFAIK people work on it for fun and/or because they personally like to use the distro and see it improved.

I know in some ways Fedora is a testing ground for Red Hat, and if that's the "purpose" of Fedora, then the reason for doing anything shouldn't be "user experience" but "being a testing ground for Red Hat." Which may lead to the same thing, but is not the same thing.

I mean, there really is no 11th Commandment that says "Software is all about the user." No, it's about whatever the programmer(s) making the software want it to be about.


And I agree with you, but Linus is still right that the user is more important. When a Fedora user sees that Fedora is broken, all the user knows is that Fedora is inferior to other things.

So in a way, it is about supporting legacy software.


But there is no moral commandment that "the user is more important." The goal of software is the goal of its developers; it's up to them. Someone very well could say, "this software is an exercise in intellectual purity." Or, "this software is to scratch the developers' particular itches, which may or may not be the same thing as satisfying most users."

Maybe Fedora in particular has a mission statement that says, "The user is most important," but I'm not aware of it. I'm certain that's not the case for the distro I use (Arch Linux) :-)


They have two quotes on their front page:

"Since its first version, in 2003, Red Hat's Fedora Linux has been the best place to track what's on the leading edge of Linux and open source software." — Jason Brooks, eweek.com

"Fedora has [...] released an amazingly rock-solid operating system." — Jack Wallen, TechRepublic.com

Is either of those quotes really true if the developers are okay with breaking existing functionality?


I don't see it as "breaking existing functionality." I see it as "improving functionality," albeit at the expense of Flash temporarily not working. Is it worthwhile to pay that price? For some software yes, for some software no. The point is that there is no solid rule here that is true for every software project (in this case, every Linux distribution).

"Since its first version, in 2003, Red Hat's Fedora Linux has been the best place to track what's on the leading edge of Linux and open source software." — Jason Brooks, eweek.com

I personally would consider something like Arch Linux to be much more on the leading edge, and they would definitely not hesitate to improve their software at the expense of something possibly not working on somebody's system until they get the new update.

Case in point: Arch Linux was the first to switch to Python 3 from Python 2. A long time after that probably should have happened everywhere. And everybody whined a lot about how it was "too soon" and lots of software would break. Personally, I haven't had any problems with that transition on Arch Linux. Did it break things for some people, temporarily? Yes. Did it move forward the state of the Arch Linux operating system and, honestly, Python 3 adoption in general? Certainly yes.

If Linus had been on their forum yelling IT'S ALL ABOUT THE USER and telling people their attitude is stupid, maybe it wouldn't have happened. (Actually, he would have just gotten ostracized from the community.) In fact, they did get a huge amount of flak, but they stood up to it anyway.


So what, if the spec says "jump off a cliff", you'd do it?

Linus explains in-depth why the distinction between memcpy and memmove is arcane and useless nowadays, shows how glibc's over-engineering crap actually hurts performance instead of helping, and that being a tight-ass (ie. doing /more/ than the spec) just makes the situation worse for everyone.

Are you all seriously thinking that Linus doesn't know about performance? He's not Miguel, dammit...


Why not change the spec to work right?


That takes time. What do you do in the meantime?


So what, if the spec says "jump off a cliff", you'd do it?

This isn't relevant to the technical issues at hand; it's just an insult. Exactly what I was trying to criticize Linus for doing. (That was actually the point of my comment.)

Are you all seriously thinking that Linus doesn't know about performance?

No, that's completely unrelated to anything I said.

I'm not familiar with the history of glibc here. I didn't read anything except the single comment from Linus that was linked to. And I'm only addressing the comments he made in that post.

You have not addressed what I actually said, at all.

I'm thoroughly disgusted with the way I've been treated here, especially the number of downvotes I've gotten. What happened to the Hacker News ethic of voting up things you like, but only voting down when someone is rude or malicious? Unbelievably, I'm now actually at negative votes. I guess I ought to take this as a signal that my comment was a disservice to this community, and consider that maybe I'm not wanted here.

By the way, I'm not interested in continuing this thread.


> I'm thoroughly disgusted with the way I've been treated here, especially the number of downvotes I've gotten.

The first sentence of your post that is being downvoted is okay; it contributes something to the conversation:

> A spec is a contract between programmers, and in the long run it's better (for users and programmers) to follow specs and expect others to follow them, rather than to let others break them willy-nilly and just bend over backwards to accommodate.

I believe your second sentence is the reason you're being downvoted:

> Oh, but I guess since this point requires actual thinking to understand, it's not in the realm of reality...

That sentence is sarcastic, insulting to people's intelligence, and contributes nothing to the conversation. If you lose that attitude, I think your contributions would be better received.


Wait a minute, I think you're misunderstanding the intent of the sentence you're criticizing.

I was parodying Linus.

I was making the point that that's exactly what you should not do.


Not really. Linus's comment was direct and to the point: your attitude does not make sense and I don't like it. And here's why my way is better:...

Your comment was a passive-aggressive declaration of stupidity in the other party.

The two are not the same.


Your comment was a passive-aggressive declaration of stupidity in the other party.

Well, that is exactly how I perceived Linus' comment, which is why my comment came across that way.

Clearly I was wrong, but I thought everyone would find it obvious that Linus's reply was pretty much unacceptable (and thus understand the point I was trying to make... which was not a passive-aggressive declaration of stupidity in the other party, despite how it came across).

At least in the comment that was linked to, all he does is "yell" (all caps) at the person he's attacking, and although he presents an argument, it doesn't address the things the other person actually said; I (personally) found it thoroughly unsatisfying in an intellectual/technical sense.


Uhm, have you actually read the whole thread? This Andre guy was being an obnoxious, rigid, pedantic idiot, and Linus was trying to help all the other people just get the damn thing to work. I think behavior like this Andre guy's is not criticized enough in the industry - it's toxic attitudes like that which poison the whole environment, and I applaud Linus for not tolerating bullshit, in no uncertain terms - even in the 'position of authority' he's in.


Uhm, have you actually read the whole thread?

No, in fact I have not; I just looked at Andre's last comment and Linus's flaming response, which did not address his points and was very rude.

You may be right that Andre was being a pedantic, obnoxious, rigid idiot. I have no idea; I didn't read the thread. My only point is that "THE USER IS ALL THAT MATTERS" is not a valid software principle, and that rather than communicate anything useful, Linus just yelled at people (in the particular comment that was linked to from HN).


I was baffled as to how you could make a dozen comments in this thread and be so consistently dead wrong, but with this confession it makes sense — you simply have no idea what you're talking about.

Even worse, you aren't interested in getting a clue by reading the damn material.


I never wanted to spend the time reading the 100+ comments in the original thread; while it would be interesting and enjoyable to do so, I have other things that are higher priority right now.

However, I was pissed off by one of Linus' comments (which, taken by itself was pretty ridiculous), and made a comment on HN about it.

My comment was sarcastic in a way that I think was misunderstood and made many people angry. Many insults were hurled my way, and I tried to defend myself, resulting in an ever-worsening spiral. It's pretty much just become a massive shitfest.

I wish people hadn't been so quick to attack me and my character (yourself included). This kind of thing is definitely contributing to the decline of HN, which is something being discussed lately (as always, I guess). And yes, I'm contributing to it too by even trying to defend myself, or maybe by defending myself too aggressively.

I can understand if you read the whole 100+ comment thread and then point out that what I'm talking about is completely unrelated. That's true, but I wasn't talking about the whole discussion; I was just picking at a bit of unreasonable rudeness on Linus's part that failed to make any intellectually worthwhile point. And I stand by that: the individual comment I was picking at (i.e., the one linked from this HN post) was pretty ridiculous.


You didn't communicate that very well then. Remember that no one can see your expression when you're typing on the internet :)


Yeah, that's good advice. I mean, I thought what I was saying was obvious in context, but in essence, not everybody had the same context as I did.


>What happened to the hacker news ethic of voting up things you like, but only voting down when someone is rude or malicious?

That has never been the ethic here: http://news.ycombinator.net/item?id=117171 http://news.ycombinator.com/item?id=392347


Ah... thanks for correcting me, and I must say, I'm very surprised to find that I was wrong about this.

EDIT: Well, on second thought, notice that in both cases, although pg stated that he thought downvotes should be used to express disagreement, more upvotes were given to other commenters who thought downvotes should only be used to boo rudeness (which is my personal opinion as well). So I think maybe there is a division in the community on how this is supposed to be done.

Actually, the experience I was just whining about is case in point for why some people think downvotes should only be used to boo rudeness. I think a lot of people downvoted me just because there was a response to me that was strongly worded, even though it was (in my opinion) not really even relevant.


What happened to the hacker news ethic of voting up things you like, but only voting down when someone is rude or malicious

If you intend this as defense, you'd best go back and look at your posts. You've been plenty rude, and arguably at least a little malicious.


I intentionally parodied Linus in my comment about "ignoring reality" to demonstrate how one should not act, but I think some people missed that it was a parody. If you're talking about something else, please let me know, because I have not been "arguably malicious," and I'd be surprised if you can find an instance where I've been rude (except perhaps in defense against someone else's rudeness, if you're talking about stuff in my past history).


Seeing this purely as a spec v pragmatism issue isn't going to get us anywhere. There is just no way around looking at each case individually. There are specs that are pie in the sky, outdated, flawed compromises, or reflections of vested interests. But there is also a huge amount of lock-in and lost productivity resulting from the lack or disregard of specs (e.g. IE6). I see no way to be principled on that one.


> Seeing this purely as a spec v pragmatism issue isn't going to get us anywhere.

That's because it's not.

The spec ( http://pubs.opengroup.org/onlinepubs/009695399/functions/mem... ) explicitly states, "If copying takes place between objects that overlap, the behavior is undefined."

Which means the real debate is between two technically valid interpretations of the spec: one that arbitrarily breaks existing software for no discernible benefit✻, and one that does not.

(✻ Unless, for ideological reasons, one believes that breaking said software is the benefit, in which case this is still a sneaky and passive-aggressive way to go about it.)

It's even fair to cast this particular debate as nonsense vs. pragmatism, because that's what it is.
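To make the overlap point concrete, here's a small hypothetical example (the buffer and helper name are mine, not from the thread): memmove() is defined for overlapping regions, while memcpy() on the same call is undefined behavior per the spec.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: shift the first n bytes of buf right by k,
   in place. Source and destination overlap, so memmove() is the
   correct call; memcpy() here would be undefined behavior. */
static void shift_right(char *buf, size_t n, size_t k) {
    memmove(buf + k, buf, n);
}
```

Shifting "abcdefgh" right by two in place yields "ababcdefgh" with memmove; a naive forward, byte-at-a-time memcpy would smear the leading "ab" across the buffer instead.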


"Passive aggressive" hits the nail on the head.

If someone truly believes you should break software that made bad assumptions about memcpy, as a matter of engineering principle, then just stick this at the top of memcpy and be done with it:

  if((src <= dest && src+len >= dest) || (dest <= src && dest+len >= src))
    abort(); /* valid - spec says behaviour is undefined */


DAMN LIBC DEVS BROKE MY ABUTTING memcpyS!

;)
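For what it's worth, the joke lands because of the `>=` comparisons in the guard above: when `src + len == dest` the two regions merely abut, yet the check still fires. Here is a standalone restatement of that guard (the function name and char-pointer types are mine, chosen to keep the pointer arithmetic valid C):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Standalone restatement of the guard posted above. The >=
   comparisons also flag regions that merely abut
   (src + len == dest), which do not overlap and are perfectly
   fine to memcpy - hence the "abutting memcpys" complaint. */
static bool guard_fires(const char *dest, const char *src, size_t len) {
    return (src <= dest && src + len >= dest) ||
           (dest <= src && dest + len >= src);
}
```

Using strict `>` comparisons instead would fire only on genuine overlap.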


Thanks. This is an insightful comment and, yes, I see your point and agree with it.


This ideal is completely irrelevant here - the change Linus and many people are asking for (aliasing memcpy to memmove) explicitly does *not* violate the spec; all the spec says is that memcpy is not *guaranteed* to work when the memory segments overlap.

The conflict here is between going above and beyond the call of the standards, thereby encouraging expectations of that extra functionality in all software, and implementing only the bare minimum specified by the standards, thereby breaking software as actually written. Given that this change involves nothing more than checking the relation between the source and destination addresses, and copying upward in one case and downward in the other (thereby implementing a function which is *also* required by the standard), adding this functionality to the de facto standard does not significantly raise the barrier to entry for new developers.

I think the ideal solution would be a note in the standard hinting that memcpy can be implemented with memmove, or even better, a deprecation of memcpy in favor of memmove, with memcpy aliased to memmove in the interim - but standards changes are always a pain.
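A rough sketch of the direction check being described (illustrative only - this is not glibc's actual code, and `safe_memcpy` is a made-up name):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of a memcpy that tolerates overlap by picking the copy
   direction - i.e., it behaves like memmove. Note that comparing
   pointers into different objects is itself undefined in ISO C;
   real implementations compare integer-converted addresses. */
static void *safe_memcpy(void *dest, const void *src, size_t n) {
    unsigned char *d = dest;
    const unsigned char *s = src;
    if (d == s || n == 0)
        return dest;
    if (d < s) {
        while (n--)        /* destination below source: copy upward */
            *d++ = *s++;
    } else {
        d += n;
        s += n;
        while (n--)        /* destination above source: copy downward */
            *--d = *--s;
    }
    return dest;
}
```

Either overlap direction then gives the result a correct memmove would.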


Contracts are written and signed. Specs should be wikied.


These fights between "specs lawyers" and "pragmatic" programmers are painfully frequent. Reminds me of the ext4 controversy on the out-of-order metadata synchronization.

Linus himself has already expressed his point of view many times, for example here (http://www.h-online.com/open/features/GCC-We-make-free-softw...):

> Torvalds has his own take on this, which is that "For some reason, compiler developers seem to be far enough removed from 'real life' that they have a tendency to talk in terms of 'this is what the spec says' rather than 'this is a problem'."

I couldn't agree more.


Agreed. We have to produce a lot of paper documentation in my domain, and in our bug database there's a section dedicated to spec bugs. Just because it's in the spec doesn't make it right.


And the spec may be right, but it's still the wrong thing to do.

Applications are complex, fragile things. If you care about keeping them going over time, you wind up accommodating a lot of workarounds. Welcome to the software industry.

The attitude of "I can change library Z, be spec compliant and still sleep at night even though the change took half an industry down" might be /correct/ but it is not going to retain customers. Do it enough, and you'll see users flee.


"Do it enough, and you'll see users flee."

Unless there's nowhere else for them to go. :)


IBM was a master of this: Get a customer cornered and then bleed them. There are modern versions of this, too (Oracle springs to mind).

"Nice business you have here. It 'ud be a shame if something were to . . . /happen/ to it? Right?"

"Right, boss."

"Shaddap, I ain't talkin' to youse."

It's a successful, ancient business model.


You can be a hacker or a lawyer, not both.


"Bullshit. You clearly don't know what you're talking about."

Tells it like he sees it. I'm not going to argue with the man. He makes some good points. The user really doesn't care what the underlying changes are as long as everything continues to work - and in this case it does not. Reading back through the comments, it's clear Linus makes great suggestions which go unheard.

I like comment #222 https://bugzilla.redhat.com/show_bug.cgi?id=638477#c222

"Breaking existing binaries with a shared library update IS A BAD THING. How hard can that be to understand? If you break binary compatibility, you need to update the library major version number."


This hit the product I work on and I agree 100%. We had to scramble for days to fix this. (Find a test case, diagnose it, make the fix, release a patch.) It was a pain in the ass. At least the solution was a 1 liner.


Explain how you fix any bug or change any implementation, while still respecting your side of a contractual interface, without breaking said "binary compatibility" when using this broken definition of "binary compatibility".

Exposing a random bug in proprietary software written by morons does not imply breakage of "binary compatibility". Some free software developers have no time to test their standards-compliant library against all the crappy proprietary software ever written under the sun. If that makes you sad, at least you are free to modify the fucking library for your own little use case, or even to fork the library and maintain your fork with an explicit policy of never ever changing externally observable behaviour (probably meaning you won't make any change to said library (other than in comments) if you REALLY apply this rule...), which happily glibc never had and hopefully never will have.


Upstream glibc bug here:

http://sources.redhat.com/bugzilla/show_bug.cgi?id=12518

Assigned to the inimitable Ulrich Drepper, who comments:

"The existence of code written by people who should never have been allowed to touch a keyboard cannot be allowed to prevent a correct implementation."


I wanted to note that Ulrich Drepper has proposed a way around the issue, and clearly said at the end "I'm happy to entertain a patch to this effect."

When I first read the parent comment it sounded as though Ulrich immediately shot it down, which he did not. (Note I'm not blaming the OP of the parent comment for this; I am just trying to keep future readers from making the same mistake I did.)

While he is not immediately agreeing with what Linus proposed, at least it is progress.


"The existence of code written by people who should never have been allowed to touch a keyboard cannot be allowed to prevent a correct implementation."

Sounds like Drepper needs to read the standard C library spec (or else there's a human-language barrier at work here). It is not incorrect to map memcpy() to memmove(). ("Be conservative in what you send, and liberal in what you accept." -- Postel)

Can anyone imagine this debate happening inside Microsoft? This is why they win.


sounds like you misunderstood what Ulrich wrote


Could be. What's your interpretation?

Bear in mind that I'm an experienced developer, and consequently am reluctant to call someone a "moron" who should "not be allowed to touch a keyboard" on the basis of the occasional bug in their code.

Yes, confusing memcpy() and memmove() is a novice mistake, but I've done dumber things, and it'd be nice if my users didn't have to pay the price. If the OS or CRTL can save them from my error by rendering the error harmless without any real downside, then it should do so.


I think it's awesome that someone who works at the very lowest level of the OS remains so focused on users and their experience.


Seeing the forest instead of the trees is a common trait of successful people.


You might also enjoy this "USE KDE!" rant from a bunch of years back.

http://tech.slashdot.org/story/05/12/13/1340215/Torvalds-Say...


I wonder what he thinks 5 years later? It seems like Gnome is the default on most distros. Are the attitudes he describes still influencing Gnome?



Linus is a patch shepherd. It's his job to see the bigger picture.


Going down the thread I found this little gem in one of Linus' replies:

Why? Because _users_ are the only thing that makes software useful. Software isn't useful on its own. You cannot say "this is the right thing to do" unless you take users into account.


I find it interesting that Linus seems to have originally come across this particular bug report as a user; he, too, was having a problem with Flash player:

https://bugzilla.redhat.com/show_bug.cgi?id=638477#c8

"I see this as well. Sounds like clipping or some really bad sample rate frequency conversion."

(It's only later that it becomes clear that this is due to a glibc "regression".)


For the long history of how some glibc developers enjoy making more problems for users, see also:

http://www.tuxradar.com/content/debian-ditches-glibc-eglibc

I consider all this a real attitude problem.


The first bit of evidence linked to by that article is glibc's maintainer complaining that he's being asked to improve the strfry() function, after a detailed technical analysis. If you don't know what strfry() is, that response does look unreasonable. But he's right! strfry() is, literally, a joke function.


Yet, it's there. I'd understand the reaction completely if it was an "It's broken. Fix it." bug report.

But this one is "It's broken, this is how, this is data confirming it, this is a patch and that's how it's fixing the issue." The work is done - review, apply, release. Or if it's considered such a joke that it's not worth fixing, just remove it completely.


It's only broken if you don't get the joke. Asking someone to review and merge a patch for strfry() is, if not unreasonable, then at least something that's reasonable to say "no" to.


He didn't just say no though.

When presented with a model bug report (good description, test cases, and even a patch) he felt the need to change the function, but instead of using the supplied patch he rewrote it himself and committed it without testing. It was still broken. When this was pointed out he got angry and complained about people wasting his time, when it was he who decided to waste his own time rewriting it his own way, and he was the one who still got it wrong.

Everyone would have been better off if he had taken the sensible path of "read bug report", "confirm bug", "test patch", "commit patch" instead of arguing and rewriting things out of spite.


http://www.google.com/codesearch?hl=en&start=30&sa=N...

Seems to say it isn't used. I also contest that it _is_ broken. I cannot find a man page that promises that it uses a uniform distribution; one could even argue that such a function should not use a uniform distribution. For example, randomizing the non-uniformity of the distribution depending on the phase of the moon would, IMO, be a good idea for this function.
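For what "uniform" would even require here: an unbiased in-place shuffle is the Fisher-Yates algorithm. A minimal sketch (this is an illustration, not glibc's actual strfry() implementation):

```c
#include <stdlib.h>
#include <string.h>

/* Unbiased in-place string shuffle (Fisher-Yates sketch).
 * Note: rand() % i itself has slight modulo bias; truly uniform
 * output would need rejection sampling on top of this. */
char *shuffle_str(char *s)
{
    size_t len = strlen(s);
    for (size_t i = len; i > 1; i--) {
        size_t j = (size_t)rand() % i;  /* pick an index in [0, i) */
        char tmp = s[i - 1];
        s[i - 1] = s[j];
        s[j] = tmp;
    }
    return s;
}
```

Whatever the distribution, the result is always an anagram of the input, which matches the man page's promise.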


strfry's man page gives no indication that it's a joke function. Contrast with memfrob(3).


From the man page:

The function below addresses the perennial programming quandary: “How do I take good data in string form and painlessly turn it into garbage?”.

To make the argument that strfry isn't a joke, you might find a real program that uses it productively. I'd be interested in seeing if one exists.


Hm. My man page doesn't have that passage:

NAME

strfry - randomize a string

[snip]

DESCRIPTION

The strfry() function randomizes the contents of string by using rand(3) to randomly swap characters in the string. The result is an anagram of string.

[snip]

COLOPHON

This page is part of release 3.32 of the Linux man-pages project. [snip]

What version of the man-pages package do you have installed?


Sorry, mine came from gnu.org and probably isn't really the man page (I misread a Google result).

In any case, I'm not wrong about strfry being a gag function, and it looks (if you read upthread) that my hunch was right that nobody uses it in real code.


It is a joke function. I really wish that the supplied documentation let potential users in on the joke. :)

(Oh, pardon. I didn't catch that you might have thought that I thought that strfry was anything but a joke. My take on the situation is that the documentation needs to be patched, rather than the strfry code. :) )


Kernighan & Pike:

    The ANSI C standard defines two functions: memcpy, which
    is fast but might overwrite memory if source and destination
    overlap; and memmove, which might be slower but will always
    be correct. The burden of choosing correctness over speed
    should not be placed upon the programmer; there should be
    only one function. Pretend there is, and always use memmove.
Shucks.

Also, static linking.


Linus makes fine points, but it's asinine to say that standards don't matter. His entire contribution to the world could not exist without them.


I couldn't agree more with this.

I may be overdoing it a bit here, but personally I don't agree with Torvalds's p.o.v. at all. While user experience is obviously important, allowing mistakes like this to go unchallenged by simply providing a work-around just encourages people to write broken software. Specs exist for a reason. Ignoring them is not it.

I do this in my own software as well. If a user files a bug because input X fails to process as expected -- and it turns out his/her input does not meet the spec -- I do not supply a fix. Even if his/her supplied input is widely used in 'the wild'. While this alienates a considerable user base, I refuse to be part of the problem.


Myself, I would go for a compromise. Flash's behaviour (using memcpy when they mean memmove) is definitely faulty, and it's not an obscure C undefined behavior either; I'm sure any C tutorial worth mentioning has "if you want to copy overlapping regions of memory, use memmove" somewhere in the middle. I'd just file a bug on Adobe's bug tracker and tell them "we'll keep the current behaviour for X months, but then we'll switch to a new implementation, so if it still breaks it's up to you", with X being a reasonable enough lapse of time.

By the way, if Flash were open source it would probably be a one-line patch to fix the issue, but it's not, and I wouldn't blame the Fedora/glibc devs for that. Breaking things because "you can" is no good policy, but it doesn't mean you have to be backward compatible forever; that's what being realistic is about.


I can see the moral hazard side of things; but Linus does have a point.

There has to be great benefit from changing the implementation of memcpy for it to be worth it. And broken Flash is an obvious negative. And across the thread, there is an argument that the new implementation might not be the best either. So, with two negatives and no obvious positives, why do it at all?


If early web browsers had stuck to the spec and rejected all "bad" inputs, then the web probably would have been a better place today. Sometimes the user's interests are in the future, not in the now.


Or maybe it would be as popular as gopher, and we'd all still be ftp'ing shit around.


um, good luck with that.


Do you at least attempt to show the customer the error of their ways?


Of course. If the supplier of their input is a third party, I will even go so far as to file the appropriate bug reports with said third party for them.


Scoped properly, he's entirely correct. The user doesn't care about standards. The user cares that STUFF WORKS. Standards serve a purpose - but standards are guidelines. Adobe & Redhat had no contractual obligation, I assume, so really it's up to either of them to fix the problem - and fighting about who did what wrong is pointless if a couple lines of code or a few hours of work are all it will take to make things work.

In this particular situation, the standards were not helping get things up and running, right?


Are you referring to POSIX? Because if so, you should know that it is a shining example of a worthless standard and that most unix programs rely on unspecified behavior far more than this easy-to-make flash bug.


A spec is a contract between programmers, and in the long run, following specs is good for users. Letting Adobe ignore specs and then bending over backwards to accommodate them doesn't seem like a good policy.

And even if following the spec weren't better for users in the long term, it is technologically superior, and Fedora is free to choose the technologically superior alternative over pleasing the masses if they want. Despite what Linus suggests, there is no Ten Commandments of software that dictate the rules here.

I guess Linus could have used this opportunity to convince a naysayer like me why I'm wrong, but instead he was just insulting and didn't address the real points at all.

(And FYI, I'm a diehard Linux user)


So, glibc changes, breaks adobe, and it is adobe's fault? Even if adobe was relying on bad behavior, it was glibc's bad behavior it was depending on.

These are the kinds of problems that finally convinced me to move from Linux to OS X. I get the Unix without the egos (just the fanbois, but I can usually ignore them ;).


So, glibc changes, breaks adobe, and it is adobe's fault?

This reminds me of Joel Spolsky's article, "How Microsoft Won the API War:"

"There are two opposing forces inside Microsoft, which I will refer to, somewhat tongue-in-cheek, as The Raymond Chen Camp and The MSDN Magazine Camp.

Raymond Chen is a developer on the Windows team at Microsoft. He's been there since 1992, and his weblog The Old New Thing is chock-full of detailed technical stories about why certain things are the way they are in Windows, even silly things, which turn out to have very good reasons. . .

The other camp is what I'm going to call the MSDN Magazine camp, which I will name after the developer's magazine full of exciting articles about all the different ways you can shoot yourself in the foot by using esoteric combinations of Microsoft products in your own software."

He thinks the MSDN camp won, and that's bad, and he contrasts with, say, Apple in historical times:

"A lot of developers and engineers don't agree with this way of working. If the application did something bad, or relied on some undocumented behavior, they think, it should just break when the OS gets upgraded. The developers of the Macintosh OS at Apple have always been in this camp. It's why so few applications from the early days of the Macintosh still work. For example, a lot of developers used to try to make their Macintosh applications run faster by copying pointers out of the jump table and calling them directly instead of using the interrupt feature of the processor like they were supposed to. Even though somewhere in Inside Macintosh, Apple's official Bible of Macintosh programming, there was a tech note saying "you can't do this," they did it, and it worked, and their programs ran faster... until the next version of the OS came out and they didn't run at all. If the company that made the application went out of business (and most of them did), well, tough luck, bubby.

To contrast, I've got DOS applications that I wrote in 1983 for the very original IBM PC that still run flawlessly, thanks to the Raymond Chen Camp at Microsoft."

The thing is, Apple is still mostly like that. Want the new hotness? Upgrade. I've been using OS X since 2004 and have trouble remembering all the stuff that broke because of OS updates (NetNewsWire and printing were particularly common). I don't know if it's because of the MSDN camp in Apple, but I do find it ironic that you cite things breaking as a reason to move to Apple. There are plenty of them, but I'm not sure that's one.


I've got Mac apps from the late '80s that ran fine on the last non-Unix Mac OS, and ran fine in the Classic environment on OS X up until that was finally dropped.

Apple did in fact do a lot of bending over backwards for compatibility, at least when it was a major developer breaking the rules. System 7 had special code in the memory manager for Microsoft applications to make them work with the 32-bit memory manager and virtual memory.

That said, Microsoft does do an outstanding job in this area. I remember when Win98 was coming out, we were not in the beta program at work. We got a call from Microsoft telling us that a VxD of ours was not working on Win98, telling us what assumption it was making that was no longer valid, and inviting us into the beta program. We were not a large, well-known company. That was pretty cool.


Oddly enough, MacOS had better back-compatibility with the old applications than the newer ones. It seemed like pretty much every app broke at least once between Systems 7 and 9, even while the 1985 apps ran fine. (Perhaps the developers were being trickier in how they abused the OS.)

Incidentally, I think this figured into Apple's thinking regarding limited back-compat. They had already unintentionally forced a number of application upgrades, why not do something positive like move everyone to a new OS/CPU/API in the process?


>System 7 had special code in the memory manager for Microsoft applications to make them work with the 32-bit memory manager and virtual memory.

Yea, MultiFinder had special code to load old MS applications like Excel 1.5 below the 1MB line because they were only 20-bit clean.


"I do find it ironic that you cite things breaking as a reason to move to Apple."

To be clear, things breaking wasn't what I was referring to. Generally, things worked very well on Linux (except [at the time] suspending and wifi on my laptop) and I know they've gotten better.

On the other hand, there was far too much focus on software for the developer's sake and not software for the user's sake for my tastes (even as a developer). I'm fine with the people who are developing the (almost exclusively) open source software developing it for themselves, but it was too much headache for me, so I did the proverbial "voted with my wallet".

I still use Linux a lot, it is the platform my startup is deploying on, and it amazes me every time I ponder the changes to the entire community since kernel version 0.9'ish and slackware when I first started using Linux. I actually trust the Linux community far more than Apple to keep things working over a longer-term timeline (which is great for servers, but I don't care much about on my desktop).


Adobe had a bug which worked because of a bug in memcpy(); the bug was fixed and Adobe's code broke.

Win95 had a similar one with the game Civilization: they actually put code into Win95 to detect the game and change the way the OS worked - doesn't sound like a good solution.


Read the spec for memcpy: memcpy's behavior on overlapping memory regions is undefined - not "required to corrupt memory", but undefined. Changing memcpy from not breaking on overlapping memory regions to breaking does not fix any bugs.

Adobe should not rely on non-spec-defined behavior, but there's no reason why glibc /should/ be making this change without making a major version number change.


What? The whole raison d'être of Windows 95 was backwards-compatibility with primordial PC junk. The entire thing was a hack from top-to-bottom, far beyond a workaround for a particular game.

Also, to quote Linus:

"And what was the point of making [an OS] again? Was it to teach everybody a lesson, or was it to give the user a nice experience?"


I guess you are referring to this:

> I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it [...] the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.

(from http://www.joelonsoftware.com/articles/APIWar.html )


it was glibc's bad behavior it was depending on

Hmm. Well, I was assuming that glibc was following the spec all along but just changed some implementation detail that mattered because Adobe wasn't following the spec all along.

I think there's an argument to be made that the way an API is implemented is an implicit contract that ought to be upheld. But that's not the argument I see being made.

Anyway, this level of detail is below the scope of the "specs vs pragmatism" debate that's going on.


I think there's an argument to be made that the way an API is implemented is an implicit contract that ought to be upheld.

That sounds wrong to me. If your consumers have to rely on assumptions beyond what's provided in the contract of the API, your API is leaky and/or broken.

[edit] I understand the pragmatism necessary in the case of the glibc issue, but to clarify my point I disagree with the general assertion I'm quoting.


Right, the memcpy() API is leaky because it leaves some things unspecified. You can deduce the implementation by providing various inputs to it that have "undefined behavior". This is a common problem in C, and there are usually functions that avoid it (Linus recommends memmove).


The worst part is that the change is only a win on Core i* processors. It makes things slower on Atom.


> Even if adobe was relying on bad behavior, it was glibc's bad behavior it was depending on.

Wasn't it implementation details it was relying on? Isn't NOT relying on implementation details pretty much the basis of good software engineering?


Quite frankly, I find your attitude to be annoying and downright stupid.

How hard can it be to understand the following simple sentence:

   THE USER DOESN'T CARE.


The trouble with this whole argument is that while the user may not care today, things like standards and defined interfaces are all about keeping things working tomorrow. Your user will surely be just as upset at something breaking tomorrow as they are today, and it's increasingly likely that such breakages will (a) occur and (b) cost more to fix, the longer you implicitly support deviations from the standards.


In this particular case, however, simply making memcpy() handle overlapping moves correctly would not break anything. Well, I suppose there's a theoretical possibility that someone is counting on the old behavior in the backwards-overlapping case, but that would be bizarre; surely this is the kind of code that should get broken, if any of it even exists.

If memcpy() had been fixed 30 years ago to do overlapping moves correctly, as it could and should have been, that would have been the end of it; we would not be having this conversation.


How hard can it be for Adobe to write standards conforming code?

The user doesn't care, but professional software developers, which I assume Adobe's developers are, should make it a priority to follow the relevant standards.


Pretty hard. C is a minefield of undefined behavior. Integer overflow is considered undefined behavior, which means it's perfectly valid to call abort(), wipe your hard drive, then light your system on fire. Or, have the number quietly wrap around.

Oh, and it's perfectly fine for me to change the behavior from one to the other, or even have a lookup table of random responses to undefined behavior. Because, like, you're not following the standard. And it's so easy.


> C is a minefield of undefined behavior. Integer overflow is considered undefined behavior.

Only for signed integers. Unsigned integers are defined to wrap around.


Yea, I remember the joke about gcc #pragma once causing something like rogue to run.


For what it's worth, I agree. Every time I hear about problems that arise from programming in C, with its undefined behaviour etc., I think to myself, there must be a better way to do it. But I don't know of any way that wouldn't involve scrapping 90%+ of software we use every day.


Precisely.

It also indicates that Adobe don't run their code through valgrind, which would have picked this problem up.

Considering that flash is (a) security critical and (b) often full of security bugs, you'd think they might run valgrind over it once in a while.

Entirely Adobe's fault this one.

BTW with Firefox 4 the need for flash has virtually gone. All the popular video sites can play most of their videos using the native video support in the browser.


I was under the impression that Valgrind didn't play nice with virtual machines.


Generally no, out of the box it doesn't; things like JITs and GCs can confuse it. However, it has a bunch of flags and config options and whatnot to let you use it anyway.


It's always a trade-off, but in this case the cost is minimal. If your program is limited by the speed of memmove/memcpy(), and you absolutely must copy (rather than alias, or whatever), you probably want to use a 128-bit aligned, widely unrolled SSE copy or something like that. That is, take advantage of the constraints of your precise situation. You can't do that with an API as generic as memcpy().


In newer versions of glibc, memcpy() does take advantage of SIMD instructions if they are available. It works something like this:

The memcpy() function is marked specially in the ELF file. On the first invocation, the dynamic linker actually calls the function and then takes the return value and treats it as a function pointer. This function pointer is then used for linking, so that subsequent invocations will call that pointer instead.

The memcpy() function in glibc then simply checks which SIMD extensions are available and returns a pointer to the appropriate real memcpy to use.


Yep, except it still needs to check the alignment of the pointers passed into the function to see if it can use the aligned mov instructions. If you're that stuck for speed, you'll want to make sure all your buffers are already aligned, and then call a function that doesn't do any checking.


You gotta _love_ Linus' writings.



I actually find it to be really bad, and I'm not the only one - http://www.informit.com/guides/content.aspx?g=cplusplus&...


While I agree with his focus on users, he has passed up a valuable coaching opportunity and probably antagonized a whole bunch of developers. He turned a potential coaching opportunity into a pissing match. Ridiculing volunteers on an open source project is inappropriate, irrespective of whether the opinion of that person was right or wrong.


If you look at the history, the comments at the beginning were much less abrasive. It's only after the discussion continued for a long time that he started being more aggressive in his comments.


But I think it's wise, in all debates, to maintain that civility. At the end of the day, which comments are getting the attention?

I think he's right, I think it's important to defend what he thinks is right, and I understand how it can be frustrating, but we all just have to have the fortitude to keep the flames in check.


At the end of the day, which comments are getting the attention?

The ones with content. In the sample quotes in the article, Linus is attacking ideas and attitudes, not people.


he has passed up a valuable coaching opportunity

???

In the comment I read, he goes into detail about why it's wrong.

His next comment that starts "Bullshit." goes into 500 words of coaching.

He is backing up his arguments with developer-level people here, not newbie coders who are still figuring out how an array works.


I do wonder on what basis I must _love_ his writings.

I'm not a Linux guy, so I am out of the loop. What makes these writings so lovable?


I would also include that he's furiously pragmatic. Which stands in contrast to other Free software personalities, who are often furiously idealistic.


The nature of the Father of Linux being, in essence, an enraged (often insightful) crusader is just a point of pride for a lot of people who know about him and like Linux.


He's straightforward.


And smart. This is an important ingredient.


As the originator of the OS and still working at a low level, it's nice to see him (A) care about the user experience all these years later and (B) cut through other people's bullshit the way a lot of us wish we could.


Linus: “I (obviously) always compile my own kernels”



Good, although that's also an argument for ditching C and using a decent high level language instead.


Although I like the attitude on some level, and in this case he is obviously right, the problem with it is that our programmers (and others I know) feel like cowboys after reading HN posts like this. 'Ruby master bla says I don't need comments and documentation, man!' 'Testing is sooo 00s; we use our brains!' 'Specs and standards are paper, read Torvalds; he uses that in his water closet! Yeah, he isn't talking about specs, but those are paper too!'


Specs are great: Java and Scala both have specs. Not having a spec can be OK if the language designer doesn't change things too often, and announces it well in advance when they do, e.g. Python.

What's worst is, say, when the language "Product Manager" leads a standardization committee that hasn't done anything for 7 years, yet often changes the language implementation overnight so code samples in other people's books and blogs no longer work.


What language is that?


Is this submission a deliberate attempt to make Linus look bad? The title is not at all what his comment amounts to; it could be removed from the comment without changing the meaning. I don't doubt you could find a multitude of quotes from Linus in support of various standards, official or pragmatic. He was just being polemical.


It wasn't an attempt to make him look bad. I think many/most people here would agree with the sentiment about standards vs actual usage. And that line was delivered with classic Linus style - blunt and with the subtlety of a chainsaw. That's why I chose it for the title.


Keep in mind the specs were written to standardize the behaviour that existed at the time: there were implementations of memcpy that worked a bit faster because they didn't have to test for overlapping regions. That's why there's a separate memmove function in the first place.

So people saying that memcpy should work like memmove are really the ones advocating for changing a spec that is currently quite explicit.

Enabling this type of invalid behavior in app code is a classic example of introducing dependencies on undocumented behavior. Over time these dependencies accumulate in complex systems, with the resulting effect of increasing software incompatibilities, not reducing them.


The upstream glibc bug is worth reading too. My favorite bit: 'Everyone is interested in using the code idiots write'.


So, did anyone try out the workaround in comment 55? I tried it just for fun, but Flash already doesn't crash for me, so I didn't see any difference.


I don't use Flash on Linux.

But! what's interesting is how much I dislike the first sentence of comment 133.


Linus himself has said that he has an ego the size of a small planet.


Ego, properly applied, changes the world.


Sometimes even for the better.


Interesting; I wonder why more people don't get into Linux hacking.


Am I the only one that finds it amazing that in 2011 a mainstream Linux distribution like Fedora is still having issues playing YouTube videos? I mean... Linux on the desktop? wow...


Yes, you're probably the only one thinking that, because the idea that there is a problem with Youtube and Fedora is based in an inaccurate reading of this article.

He said Youtube worked fine. It was compared to as a control. The problem was with some other site using Flash.


Popular == Standard


An analogy to the web: quirks mode is a good thing.


"Standards are [just] paper."

He misunderstands how the world works. Standard is not just paper; standard is the thing. From this site: http://science1.wordpress.com/2007/04/26/matter-does-not-exi...

    For physicists Newton’s atomic materialism is blinding. 
    It will take many generations of humans to perceive that 
    standard is the thing. There is no “physical” world  
    other than standards and density differentials.


Technical Aggression.

I prefer more people standing against intransigent word & law. (Much is called Standards, or Rights.)

Codes hold you.

Our support systems could allow critical viewpoints better.


Now we know Torvalds is regular.



