
Making something closed source does not make your product more secure, it only makes it harder to look at. Determined people will still try to understand how your software works in order to accomplish their goals.


Security through obscurity is a valid and effective tactic -- it's simply ineffective on its own.


To reinforce your point, see all pre-modern crypto techniques. It cannot be argued that they worked, and they were all certainly security through obscurity.


Aren't most examples things where it didn't work? The most famous case is the German "Enigma" device from WWII (hardware- and 'software'-based, but cracked and readable for years before the Germans knew, because they believed it was both obscure and effective), but it's entirely possible that most schemes were broken eventually. Keeping an obscure system secret is really hard, especially against a motivated attacker.


Enigma wasn't hard through obscurity. The Allies had the Enigma machine long before they were able to crack it. It was hard because, with the equipment of the day, it was pretty much unbreakable in the same way that prime-number-based cryptography is today. It was only A. Turing developing a completely novel kind of machine (https://en.wikipedia.org/wiki/Bombe) that enabled the decryption, in the same way that quantum computers could break current cryptography easily. It's not obscurity, it's assuming that some (mathematical) task is hard.


Don't forget the Poles. They broke the encryption first, but then they were invaded, and no precision machinery was available to increase the number of rotors to 10. https://en.m.wikipedia.org/wiki/Cryptanalysis_of_the_Enigma Turing did it too, independently.


Didn't know about that! But it seems they were able to break the system only while the Germans were sending the settings of the plugboard in the header of each message. Once that was changed in early 1940, their decryption techniques wouldn't work anymore.

Btw, from the wikipedia article: "lazy cipher clerks often chose starting positions such as "AAA", "BBB", or "CCC"" Weak passwords were an issue already back then.


I went to Bletchley Park a couple of years ago. It's a very fascinating place. I remember hearing stories of code breakers who could infer that a piece of plaintext was all JJJJJJJJJJJ simply because, upon looking at the ciphertext, it contained no J (relying on the fact that no letter would ever encrypt to itself in Enigma, because of the reflector). Indeed the Poles don't get enough credit for their contributions. And yeah, virtually all encryption was similar to Enigma back then: the Allies too had a similar machine. I believe traitors sold secrets or Enigmas were captured on U-boats and so on, so security through obscurity wasn't really a thing back then either.


From what I know, Turing didn't do it independently: the Poles sent their work to England about two months before being invaded. What Turing did was improve on their work so it could scale (the Germans added more rotors, so the Polish decrypting machine wasn't helpful anymore).



I would consider the Enigma to be a very good counterexample to security by obscurity. Even after capturing a few of the apparatuses, it took a lot of mathematicians and engineers a lot of time and effort to build something that could decipher messages before the key became obsolete.


Enigma security didn't rely on security through obscurity. Having the machine didn't enable the allies to decrypt the messages. It relied on the secret of the... secret keys and the monthly key books.

It's also quite interesting to see that the Polish cryptanalysts were able to reproduce the Enigma machine used by the German army without even having seen one. They were able to deduce the number of rotors, the wiring, etc.

What in the end doomed the Enigma was the fact that it was more a kitchen recipe than cryptography based on solid principles. It was a smart recipe for the time, but it had flaws (like the fact that a letter could never encrypt to itself). In some regards, most of our symmetric encryption algorithms today feel a bit that way (with a lot more external scrutiny from experts, however).

Even in WWII, I don't think that security through obscurity was considered an absolute barrier. It's more in line with a "defense in depth" pattern. It gives your adversary a little more work, since they now have to figure out how your encryption works before breaking it, but it's not expected to last for long.


The Enigma was sort of on the cusp of a modern crypto technique IMO, not to say I know that much about it. I was more referring to other techniques like wrapping a message around a dowel or the Code Talkers from WW2.


Yes, the Zimmermann Telegram is a perfect example of security through obscurity.


This is not really a useful response.

The trivial counterexample is that all modern crypto techniques rely on keeping a key, or part of a key, secret. That's security through obscurity, and you've just stated bluntly that obscurity never works under any circumstances, right?

What you want to do instead is talk about tradeoffs. Talk about how much information you need to keep secret in exchange for a given window of effectiveness, and state a preference for systems which provide longer windows of effectiveness while requiring less information (such as only a key, or part of a key, instead of a key and an algorithm) to be kept secret.
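That tradeoff can be made concrete with a keyed MAC: the algorithm (here HMAC-SHA256) is completely public, the key is the only secret, and if the key leaks you rotate it rather than redesign anything. A minimal Python sketch using only the standard library (the variable names are illustrative):

```python
import hashlib
import hmac
import secrets

ALGORITHM = hashlib.sha256      # public: anyone can read the spec
key = secrets.token_bytes(32)   # secret: the only thing we must protect

def tag(message: bytes, key: bytes) -> bytes:
    # Authenticates a message; forging a valid tag requires the key,
    # even for an attacker who knows the algorithm completely.
    return hmac.new(key, message, ALGORITHM).digest()

t = tag(b"attack at dawn", key)
assert hmac.compare_digest(t, tag(b"attack at dawn", key))

# If the key is compromised, recovery is cheap: generate a new one.
key = secrets.token_bytes(32)
```

The "window of effectiveness" here is exactly the lifetime of one 32-byte key, versus a secret-algorithm scheme where a single leak forces a full redesign.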

Also, take care with your argument about "pre-modern crypto techniques". Some of them remained effective for centuries after being invented, which is a far cry from your "cannot be argued that they worked", and not necessarily a favorable comparison with many modern techniques, which are lucky if they make it a couple decades before being broken.

(also, of course, all cryptographic systems eventually get broken, which is why every so often we switch to new algorithms, longer keys, etc., and you seem to be arguing that any system which eventually gets broken is a system which never worked, and that's also wrong)


We don't allow you to change the definition of "security through obscurity" just like that!

Using a public algorithm with secret key is BY DEFINITION _not_ security through obscurity. On the contrary.


In context it was fair because I was responding to a situation that was already playing with the definition, and once you allow that you have to allow taking it all the way.

Unfortunately, I started my reply to the wrong comment and didn't notice until after I'd posted it and it was too late to edit/delete.

tl;dr too many people have a knee-jerk "security through obscurity!" reflex action to things they don't like, and I have a reflex action of yelling at them about it, which sometimes misfires when I don't take care to reply at the right point in the thread.


Agreed. Kerckhoffs's principle isn't really up for debate.


Which reminds me of how this site was hacked:

https://news.ycombinator.com/item?id=639976


Hashing passwords is security through obscurity by that reasoning. That does not make them less of a security function.

Just something to keep in mind.


"Security by obscurity" tries to keep the way that your encryption method works obscure, it does not try to keep a specific key obscure.

For example, if your way to encrypt works like this:

1) Shift all letters along by 5.

2) Cut out every second word and put them behind the message in order.

3) Whenever there's an f, s or y in a word, double up that word and shift the second word's letters by 7.

Then if your enemy figures out how your method works, you have to come up with a completely different method.

The opposite of security by obscurity would instead be to come up with a method that depends entirely on a key. You can then publicize that method (or not), and if your enemy finds out your key, you just choose a new key and you're fine again.
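The three steps above can be sketched in Python to make the contrast concrete. (The description leaves step 3 slightly ambiguous; this sketch applies the f/s/y test after the shift, one plausible reading.)

```python
def shift(word: str, n: int) -> str:
    # Shift letters by n, wrapping within a-z / A-Z; leave other chars alone.
    out = []
    for c in word:
        if c.isalpha():
            base = ord("a") if c.islower() else ord("A")
            out.append(chr((ord(c) - base + n) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

def obscure_encrypt(message: str) -> str:
    # The three steps ARE the secret; there is no key.
    # Step 1: shift all letters along by 5.
    words = [shift(w, 5) for w in message.split()]
    # Step 2: cut out every second word and put them behind the message in order.
    words = words[0::2] + words[1::2]
    # Step 3: double any word containing f, s or y, shifting the copy by 7.
    out = []
    for w in words:
        out.append(w)
        if any(ch in w.lower() for ch in "fsy"):
            out.append(shift(w, 7))
    return " ".join(out)

def keyed_encrypt(message: str, key: int) -> str:
    # By contrast: the method (a plain shift) is public; only the key is secret.
    return " ".join(shift(w, key) for w in message.split())

print(obscure_encrypt("attack at dawn"))
print(keyed_encrypt("attack at dawn", 11))
```

If the first scheme leaks, you need a whole new recipe; if the second one's key leaks, you just pick a new number. (Both are trivially breakable, of course; the point is only where the secret lives.)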



