This critique doesn't really convince me. It mostly asserts that the original article is wrong, but is largely silent on why.
But the claim that forward secrecy is unnecessary because STARTTLS protects mail in transit is downright odd. First, it seems self-defeating: if STARTTLS protects me from adversaries, why use PGP? But of course, it doesn't. I can't know in advance whether my mail is protected by STARTTLS, and I can't know whether one of the mail providers involved is cooperating with the government.
Someone who fears being targeted by a state-level actor should, in particular, assume that every mail provider is compromised. Really, that is the reason to encrypt mail in the first place, so when someone criticizes PGP's effectiveness for that purpose you can't fall back on trusting mail providers.
STARTTLS is of little help. It's client-to-server encryption. If, say, alice@ispa.com emails bob@ispb.ru, then the links between Alice's computer and IspA, between IspA and IspB, and between IspB and Bob are protected. But only during transit. Both IspA and IspB get a completely cleartext email on their servers, where there could well be monitoring equipment or software installed.
If you're concerned about a family member watching the traffic going through your router, this fixes your problem. But if you're concerned about governments, it absolutely does not, because they can easily convince an ISP to install interception infrastructure inside their datacenter, where the mail is in clear text.
PGP, on the other hand, is end-to-end, Alice to Bob, and the data doesn't exist as cleartext anywhere else.
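The hop-by-hop vs. end-to-end distinction can be sketched with a toy model (the XOR keystream stands in for real ciphers, and all keys and names are made up; this is illustrative only, not real cryptography):

```python
# Toy model (NOT real crypto): contrast STARTTLS-style hop-by-hop
# encryption with PGP-style end-to-end encryption.
import hashlib

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

msg = b"meet me at noon"

# STARTTLS model: each hop has its own link key; every relay must
# decrypt to route the message, so every relay sees the plaintext.
link_keys = [b"alice<->ispa", b"ispa<->ispb", b"ispb<->bob"]
seen_by_relays = []
wire = msg
for key in link_keys:
    ciphertext = xor_keystream(key, wire)   # encrypted on the wire...
    wire = xor_keystream(key, ciphertext)   # ...but decrypted at each relay
    seen_by_relays.append(wire)
assert all(s == msg for s in seen_by_relays)  # both ISPs held the cleartext

# PGP model: encrypted once to Bob's key; relays forward ciphertext only.
bob_key = b"bob-public-key"
wire = xor_keystream(bob_key, msg)
relay_view = wire                             # what IspA/IspB see
assert relay_view != msg                      # relays never see cleartext
assert xor_keystream(bob_key, wire) == msg    # only Bob recovers it
```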
The first, and more minor, problem is that effective deployment of keys for encryption/signing tends to require the cooperation of the email provider anyway, giving them the ability to report trusted keys for your identity that they can use to MITM you. Indeed, a few of the suggestions for key distribution contained in threads on this post have exactly this flaw (I leave it as an exercise to the reader to find which ones).
More importantly, even PGP doesn't fully protect information: the headers are still sent in full plaintext. This means that your email provider still knows whom you talked to, when you did so, and the subject of your message. I suspect that most of the government's interest in your personal email isn't in the contents of the email, but precisely this metadata that PGP makes no effort to hide whatsoever. If you're a dissident in Venezuela who's worried about the government finding out you're talking to foreign reporters... well, PGP won't help you.
This ends up being the conundrum of "secure" email: it's sufficient for the job (assuming the UX is better thought out than it often is) only if you want to hide what you're saying but not whom you're saying it to. And that is not sufficient for what most people who want secure messaging desire.
Do other messaging protocols / apps solve this metadata problem as well?
As I understand it, a mobile-to-mobile protocol still needs to send a message from the app on one phone to a cell tower and through an ISP, all of which presumably are open to similar government persuasion.
I know it's not as egregious as email headers, and email is distributed, but the basics seem similar.
Yes, this is a problem Signal was designed to address.
It's interesting to note that many other secure messengers don't handle this much better than email does; they store server-side contact lists, so when you sign into your account your buddy list is immediately populated. Which means they're storing a graph of who talks to whom, in a centralized location.
>But the claim that forward secrecy is unnecessary because STARTTLS protects mail in transit is downright odd.
I don't see that connection in the article. There are two independent claims made:
1. That forward secrecy is not very likely to be valuable because people tend to keep their old messages around.
2. That SMTP STARTTLS protects things on the wire making it so that world governments no longer get to record from the network. The linked page also suggests that this would prevent the selective recording of encrypted messages.
It is a huge problem that people keep old messages around. It's a strong argument against using any form of encrypted email, for example; email's UX practically demands you keep searchable archives.
Far more people have been compromised by stolen message archives than by messages intercepted in flight; like, I think it's not even close, and that stolen archives are the expected M.O. of state actors.
That PGP (and its community) lack cryptographic features to support disappearing messages because they've essentially given up on it as a feature is damning.
I think this critique is pretty funny, in that it tends to confirm my complaints about PGP while pretending it's doing something else. A particular favorite part for me is when they point out that their local PGP installation doesn't include CAST5 --- but does include 3DES, the canonical insecure 64-bit-block cipher.
I would generally caution people against taking cryptographic advice from people who write things like "the author does not understand why authenticated encryption is not used or even required for the things that PGP is normally used for." As a general (and often surprising) rule, it tends to turn out that you need authenticity in order to even obtain confidentiality. There's a whole "doom principle" about this.
There are some good points in there, and, while I don't want to provide a definitive rebuttal, I think it is worth mentioning how close PGP is to something that can be recommended. Let me respond to the issues one by one:
> If messages can be sent in plaintext, they will be sent in plaintext.
This is actually perhaps the strongest argument. If I'm steelmanning it correctly, it says that any system which tries to be backwards compatible with an insecure protocol is likely to result in some messages being unencrypted when they're not supposed to, either by accident, or due to an attack.
It's true that by introducing an entirely new, backwards incompatible protocol, this entire class of problems can be almost eliminated. There is still, of course, the problem that someone copy-pastes text from a secure context to an insecure context, but I agree that there is a meaningful reduction of risk being offered here.
However, it is worth comparing this situation to the web ecosystem. The fact is, browsers still support both HTTP and HTTPS, and a lot of work has been done on protocols, and implementations, and UX to stop downgrade attacks from happening. To avoid this problem, we could have defined HTTP to be the legacy web, and come up with an entirely new protocol (perhaps one where a benevolent dictator provides the only accepted client and server implementations), but I'm glad we didn't go down that route.
> Metadata is as important as content, and email leaks it.
While the most popular email encryption tool doesn't encrypt subject lines, Delta Chat, for example[0], does. That's therefore not so much an argument against encrypted email as it is against using clients that aren't as good as Delta Chat. However, your point about backwards compatibility is still relevant here.
I suppose the analogous scenario is that with HTTPS we know whether the connection to the server is encrypted by looking at the URL (or the padlock, or other clues that users don't really follow). However, until quite recently there was still the issue that a request over HTTPS could pull in sub-resources that weren't encrypted, which feels somewhat equivalent to someone accessing their Delta Chat inbox in a separate client that doesn't use PGP.
As for other metadata, it would be possible for email providers to implement a type of anonymous remailer, where the sending server hides the sender address, and the recipient server decrypts an outer layer of encryption to find the intended recipient, but I admit that no providers currently offer such a service. If it were implemented, though, I think it could offer better security than centralised services, because the metadata could be split across different jurisdictions, removing the single point of failure.
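A minimal sketch of that layered-remailer idea, under hypothetical keys (the toy XOR cipher is illustrative only; a real design would use proper public-key encryption):

```python
# Toy sketch (NOT real crypto): the sending server strips the sender,
# and only the recipient's server can peel the outer layer to learn
# the final mailbox; the body stays end-to-end encrypted to Bob.
import hashlib
import json

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # an XOR stream cipher is its own inverse

ispb_key = b"ispb-server-key"   # hypothetical recipient-server key
bob_key = b"bob-personal-key"   # hypothetical end-to-end key

# Alice builds: inner layer (for Bob) wrapped in outer layer (for IspB).
inner = toy_encrypt(bob_key, b"the actual message")
outer = toy_encrypt(ispb_key, json.dumps(
    {"to": "bob", "payload": inner.hex()}).encode())

# IspA forwards the blob to ispb.ru with no sender header; it cannot
# tell which mailbox at IspB the mail is for.
envelope = {"dest_domain": "ispb.ru", "blob": outer}

# IspB peels the outer layer to route internally...
routing = json.loads(toy_decrypt(ispb_key, envelope["blob"]))
assert routing["to"] == "bob"
# ...but the message body is still encrypted to Bob alone.
assert toy_decrypt(bob_key, bytes.fromhex(routing["payload"])) == b"the actual message"
```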
> Every archived message will eventually leak.
This seems to be an argument that if you store sensitive documents on your computer, then the security of those documents is only as good as your at-rest encryption (and your ability to stop someone separating you from your computer when it is unlocked). You accept that recipients can configure their email clients to delete old messages stored on their computer, but then you seem to require that senders can delete their old messages on other people's computers, which doesn't seem like something any protocol can enforce.
(You also make a point here about how JavaScript sent from a server can't be made secure, because a server can send a different script next time you visit. That's true, unless the initial page of the web app is stored as a bookmarklet[1], or secured by a hashlink[2], which browsers currently don't support).
> Every long term secret will eventually leak.
Yes, it would be nice to have Perfect Forward Secrecy supported by PGP encrypted email. There has at least been some effort to advance that at the IETF in the past[3], and it's also being considered as part of the future development of Autocrypt.[4]
For more than a decade, the web security field was plagued by vulnerable UX that would, in a bunch of different ways, shift users off HTTPS sites and on to HTTP sites. It was a real problem, only recently and gradually mitigated by HSTS.
Two distinctions to draw between the HSTS situation and the encrypted email situation:
1. There is no proposed "HSTS" for email, let alone any coherent plan by which it could be rolled out; HSTS worked because a very small number of browsers needed only collaborate with a relatively small number of server operators to make it work; the dynamics of email are totally different.
2. In the web threat model, the consequences of shifting a user from HTTPS to HTTP are not automatically fatal, and affect only that user. In the Inevitable Encrypted Reply, whole threads involving multiple counterparties are disclosed. And in the PGP threat model, you have to assume the adversary is recording everything, all the time. Once the I.E.R. happens, you and all your friends are dead.
Thank you for fleshing out the comparison between HSTS and encrypted email. I just have a couple of points to add.
1. For the avoidance of doubt, there is a proposed "HSTS for email", but it doesn't provide end-to-end encryption (so it's not particularly relevant here). The technology I'm talking about is MTA-STS,[0] as described in RFC 8461, which you probably already know about, but it might be new to other people.
I'm not sure how different the dynamics are between the web and email, though. Surely there are more operators of web servers than email servers? Also, over 90% of the email client market is attributable to Google, Microsoft, Apple, and Yahoo! collectively.[1] That doesn't seem very different to the situation with browsers.
2. I think it's worth highlighting that if a user is shifted from HTTPS to HTTP, that may mean that their password (or session cookie) is being sent unencrypted to/through an attacker. Once the attacker has this, they can access all the details of a user's account, which, in the case of webmail or a forum, includes all previous messages and grants them the ability to send new messages as that user.
That seems like a worse failure mode than just losing the privacy of a single message, and I think people are just as likely to die if you leak information about them over insecure email as over insecure HTTP. In both cases the threat model would assume that the adversary is recording everything, all the time.
Right, so, in the world of secure messaging, there's nothing in the world less trustworthy than a public Internet email server, and MTA-STS doesn't change that; it makes dragnet surveillance of email harder (and does something else you & I will disagree about) and is a good thing, but doesn't change the comparison here at all. What is needed is a mechanism that ensures that a secure exchange between two people never suddenly becomes plaintext email exposed to any server.
With respect to the comparative failure modes: a single accidental plaintext reply can expose dozens of participants. If we're not LARPing, if we're seriously talking about protecting people organizing against state-level security services, we have to confront the sheer blast radius of that mistake, which happens all the time even to people who routinely use PGP'd email.
Further: if the premise is that our adversaries have state-level resources, we have to assume they're recording all the time. You can't accidentally send plaintext ever. There's no takebacks if you do. It's irrevocable. If your email group sees a plaintext quote-reply, you have to grab your go-bag, thermite your hard drive, and make your way to the border, right now.
Once again: this is a mistake that people who well understand encrypted email make all the time.
It is irresponsible to encourage at-risk people to use PGP-encrypted email. It's malpractice.
I think I'd agree with most of what you said there. I would only reiterate that the expected blast radius of leaking a single plaintext email is likely to be less than that of leaking a password/session over HTTP (assuming the user is in possession of comparably sensitive information in the two scenarios).
Of course, on the modern web, the latter failure is much less likely to happen, and I accept your point that PGP encrypted email is uniquely burdened by the risks of backwards incompatibility, at least in the world of secure messaging services. It may not be a perfect comparison for me to bring up HTTPS, but I think that the incremental nature of that technological upgrade provides some helpful perspective.
Also, I appreciate your restraint about what I assume is the other subject that would be a source of disagreement. There's no need to complicate matters by introducing that extra topic, and I apologise if mentioning MTA-STS was unhelpful for that reason.
>A particular favorite part for me is when they point out that their local PGP installation doesn't include CAST5 --- but does include 3DES, the canonical insecure 64-bit-block cipher.
All OpenPGP installations require 3DES as the ultimate fallback. It is hard-coded into the standard (RFC 4880).
This comment seems to come from a fundamental misunderstanding of how OpenPGP works. The cryptographic preferences are signed by the certification key along with everything else in the OpenPGP identity (AKA public key). There is no possibility of any sort of downgrade attack. You would need to break the signature. This is one of the ways that offline encryption like OpenPGP is superior to online encryption like TLS.
So there is no downside to supporting old algorithms. Not that 3DES is inherently broken anyway if you are willing to keep your messages below a GB or so.
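The "below a GB or so" figure can be sanity-checked with a back-of-the-envelope birthday-bound calculation for a 64-bit block cipher (the standard analysis for CBC-mode collision attacks like Sweet32):

```python
# Birthday-bound arithmetic for a 64-bit block cipher such as 3DES.
import math

block_bits = 64
block_bytes = block_bits // 8

# Collisions become likely around 2^(n/2) blocks.
birthday_blocks = 2 ** (block_bits // 2)        # 2^32 blocks
birthday_bytes = birthday_blocks * block_bytes  # = 32 GiB of data
assert birthday_bytes == 32 * 2**30

# Probability of at least one block collision after m blocks:
# p ~= 1 - exp(-m(m-1) / 2^(n+1))
def collision_prob(m_blocks: int, n_bits: int = 64) -> float:
    return 1 - math.exp(-m_blocks * (m_blocks - 1) / 2 / 2 ** n_bits)

one_gib_blocks = 2**30 // block_bytes           # 2^27 blocks in 1 GiB
print(collision_prob(one_gib_blocks))           # still small at 1 GiB
print(collision_prob(birthday_blocks))          # roughly 0.39 at 32 GiB
```

So a single message under a GiB is comfortably below the danger zone, though the margin erodes quickly as data volume grows.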
Except to say, I just think it's funny that the defense to "PGP relies in part on CAST5" was to say "no, it relies on a somewhat less secure cipher".
We tolerate design details like this in PGP because we're LARPing, and care more about the software community of PGP enthusiasts than we care sincerely about at-risk people who might make the mistake of trusting it. I'm sure it would be hard to modernize PGP and force everyone onto modern AEADs and key agreement schemes and packet formats. I also think that demanding everyone's phone numbers made life a lot harder for Signal than it could have been. But Signal cared more about security than about nerd adoption, and PGP makes the opposite trade.
Over at payments in Google, we still use PGP for communicating with the financial world (JWE supported too)[0]. The one advantage of PGP is that it's been around so long, many companies already have it set up in their systems when communicating with them. Be it files or application level communications, support is already there.
PGP definitely has its rough edges. Tools with simplified APIs like Tink[1] seem to be the way security researchers want to go, as PGP definitely makes it easy to do things wrong (just due to how much legacy support it has out there).
I'm curious about the communication with the financial world you mention. Could you elaborate what kind of interface or API you're talking about, or at least which area of finance is involved?
I really should say payments world. Banks use PGP regularly for file transfer (bank statements and the like). Payment processors use PGP for settlement statement transferring.
The page I linked uses PGP for application-layer encryption. If you poke around the reference page[0], you see a bunch of different specs that 3rd-party platforms can build against to be hooked up into Google as a form of payment (ie: used by people to pay for Ads, Play, Cloud, etc...). We encrypt each message passed between Google and a payment processor with PGP. These are just normal server-to-server calls over HTTPS with JSON payloads (the JSON payload being PGP encrypted).
The ergonomics of PGP are awful. The author's appreciation of Signal is valid, except for one caveat: what if your phone has malware? That's all your fancy Signal crypto out the window. It's the usual argument of: `A chain is only as strong as its weakest link`.
My point is: there are other factors to consider besides advice like: `Just download Signal and you're golden`.
Edit: I'm referring to people casually saying: `Download Signal and you'll be fine` and not pointing out other OPSEC practices we all need to follow, regardless of threat model. (Not clicking on malicious links sent via SMS etc).
Can you describe a system that is effective when your phone has malware?
I have a little experience with this, designing systems that use TPMs and remote attestation to shut down communication rather than allow forgery or leaks. It’s very delicate to begin with, even if one entity owns all the devices, and unsolved for the general case.
> Can you describe a system that is effective when your phone has malware?
I'm referring to people casually saying: `Download Signal and you'll be fine` and not pointing out other OPSEC practices we all need to follow, regardless of threat model. (Not clicking on malicious links sent via SMS etc).
> other OPSEC practices we all need to follow, regardless of threat model.
That doesn't really add up - a goal of things like more secure phones and secure general-purpose messengers is explicitly to provide a high level of security without following a bunch of OPSEC practices. You aren't going to get mass adoption of more secure systems by insisting everyone take up OPSEC practices 'regardless of threat model'. The aim is to reach that adoption through systems that offer a lot of security without a lot of security-related ceremony. It's sort of inherent in the notion of mass adoption.
> you could still have a key logger or malicious kernel modules
I'm referring to people casually saying: `Download Signal and you'll be fine` and not pointing out other OPSEC practices we all need to follow, regardless of threat model. (Not clicking on malicious links sent via SMS etc).
I had to deep-dive into the details of PGP in order to set up a trusted chain of artifacts in our internal CI system, all the way to published artifacts on Maven Central (the de facto main Java package repository)... it was a tough ride, as PGP is not very simple for end users, even with all the tooling they've created.
The GnuPG utility[1] is pretty easy to use on the terminal, and the UI version[2] actually makes it even easier to sign/encrypt/verify stuff and manage your local keys (also to integrate with email clients, which seems to be the most common use of PGP)...
The problem is just that understanding exactly what a key is for, why I need a sub-key, how to move keys between CI agents securely, how to rotate keys, when to use a keyserver, where to store my master key, how to trust someone else's keys (or sub-keys?!) and so on... is really difficult and there's no one simple answer.
However, I'm afraid these problems are independent of PGP itself: you'll have these problems whatever system you use, as the "key" (pun intended) issue is not the technology you use, but the public-key infrastructure problem[3].
In summary: how do you trust someone else's public key without physically having access to it directly from them? If you get the key online, how can you know that the source is definitely reliable and the channel of communication has not been compromised? On the Web, Certificate Authorities (CAs) play the role of vouching for someone's keys through certificate signing. But this is not an appropriate model for things like personal keys, and it introduces its own issues as well[4].
In PGP, the solution seems to be Key Signing Parties[5]! Sounds fun, but also completely unfeasible on the scale of the web.
I don't know how Signal and other new solutions fare compared to PGP on these problems, but I'd be happy to know if anyone has any advice.
i don't think the web of trust as a "first principle" is necessarily a good thing. it made sense a while ago perhaps, but nowadays it's just overkill. there are several use cases where you want to keep a closed loop on everything (e.g. you intend keys to circulate only within your company), and very different cases where there is value in keys that are valid only for the sake of a single chat: it doesn't need to be anything illegal, but if two parties negotiate a message exchange and trust that the very first key exchange is secure, you only need the subsequent messages to match that first key pair. after that, the keys can safely be destroyed, not added to your respective keyrings and signed on the public internet.
I'm not following. How does one build the trust in the first key exchange in a way that isn't building a web of trust?
And your first example is just a small web of trust for the closed loop.
That is, you are still leaning heavily on it as a first class thing. If the argument is that we don't need a global web of trust that everyone is always a party of, I can agree with that.
i think once most PGP implementations rely on public key servers + key signing a "global web of trust" is a first principle, while the "small web of trust" really isn't
Apologies on missing this. I agree that "global web of trust" is a bit tough. And I agree that reliance on far reaching trust is essentially that. The idea, though, seems to still hinge on the same concepts, whether writ large or small.
I don't have a general solution, but the following works well for signed commits at my workplace (disclaimer - all comments and thoughts my own):
* IT, authority of all secrets, owns the primary keys for all employees.
* Each employee has a code signing subkey generated which is signed as trusted by an org-wide key.
* The employee is given the private part of their code signing subkey.
* The public parts of the org-wide key and employee code signing keys are made available to everyone via a keyserver.
Now all commits can be forced to be signed, can be validated by checking that the org-wide key trusts the key, and be checked over time by using the keyserver. IT, being responsible for corporate security, manages the secret bits without involving engineering. Keys can be rotated on demand, when laptops are lost, and revoked when employees leave. It also resolves all the issues with each person managing their own primary key and dealing with the web of trust problems associated with that.
> In summary: how do you trust someone else's public key without physically having access to it directly from them?
You obtain the key's fingerprint in a secure manner from them. They tell you directly, as in face-to-face, give you a business card where they printed the fingerprint, etc.
This is not a problem that's solvable automatically. Depending on your scenario, you may be more or less paranoid. Eg, you might sign your friend's key with minimum fuss, because you've known him since you were 12, and that's that. If it's a facility dealing with nuclear weapons then you probably ask for multiple forms of ID, a background check, and interview other people to be really sure this person is the right one to trust.
Once that is done, you sign their key with yours. This can be uploaded to a keyserver. After that you don't need physical presence, because you trust your key, and that certifies what you trust unambiguously.
> If you get the key online, how can you know that the source is definitely reliable and the channel of communication has not been compromised?
You sign their key yourself, or rely on somebody else's signature.
In a company, you'd have say, the sysadmin in charge of that.
* Alice is the sysadmin at Acme.
* Bob gets hired
* Bob is given a list of procedures to follow at the company, which include generating a key on his laptop, noting down the fingerprint somewhere (by hand, or printing it and verifying that just in case), and bringing it to Alice.
* Alice is trusting that the company's HR department really hired Bob Smith and not some impostor, and that somebody checked his ID. Alice then verifies the fingerprint to make sure she really has Bob's correct key.
* Alice then signs Bob's key with Acme's key. Bob in turn signs Acme's company key with the same process (obtaining the fingerprint from Alice).
* At that point, Carol can trust Bob's key because she went through the same process, trusts Acme's key, so can establish the chain of trust of Carol -> Acme -> Bob.
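The Carol -> Acme -> Bob chain above can be sketched in code. This is a hypothetical model: HMAC stands in for real certification signatures (actual OpenPGP uses asymmetric signatures, so verifiers only need Acme's public key, not a secret), and all names and key material are made up:

```python
# Hypothetical model of a chain of trust: Acme certifies employee keys,
# and anyone who trusts Acme can then trust those keys transitively.
import hashlib
import hmac

def fingerprint(pubkey: bytes) -> str:
    """Simplified fingerprint: hash of the public key material."""
    return hashlib.sha256(pubkey).hexdigest()[:40]

def certify(signing_secret: bytes, subject_fpr: str) -> bytes:
    """Acme 'signs' a statement that subject_fpr is a trusted key.
    (HMAC stands in for a real asymmetric signature here.)"""
    return hmac.new(signing_secret, subject_fpr.encode(), hashlib.sha256).digest()

def cert_valid(signing_secret: bytes, subject_fpr: str, cert: bytes) -> bool:
    # compare_digest avoids timing leaks when checking the certification.
    return hmac.compare_digest(cert, certify(signing_secret, subject_fpr))

acme_secret = b"acme-certification-secret"   # made-up org key
bob_pub = b"bob-public-key-material"          # made-up employee key

# Alice verifies Bob's fingerprint in person, then certifies his key.
bob_fpr = fingerprint(bob_pub)
bob_cert = certify(acme_secret, bob_fpr)

# Carol already trusts Acme's key, so she can trust Bob without ever
# meeting him: Carol -> Acme -> Bob.
assert cert_valid(acme_secret, fingerprint(bob_pub), bob_cert)

# An impostor's key fails the chain.
assert not cert_valid(acme_secret, fingerprint(b"mallory-key"), bob_cert)
```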
Zoom bought Keybase in what appeared to be an acquihire, so a lot of people assume the Keybase product itself is on ice (if not outright dead), and a lot of distrust from early Zoom mistakes (their macOS uninstaller leaving behind an exploitable root daemon; the geopolitical distrust of their engineering teams in China and significant VC funding from Chinese sources; etc.) leaves a lot of people concerned about the security of using Keybase under Zoom ownership.
My personal worries about keybase are more about the coin-bs and unwanted features creeping in. There's no way keybase can be a contender to CAs if it has chat and file sync, it just needed to be a barebones toolkit that can be integrated and that can eventually get specified. But that wouldn't get you millions.
My personal worries were always more about Keybase's business model. I respected the feature-creep hustle. It's why the comparisons in this article and the article it critiques use OpenPGP versus Signal as the comparatives. A "barebones toolkit" doesn't work, as we've never seen high general user adoption of OpenPGP (or GPG or whatever), but Signal as a "messaging app first" has refined the PKI on-boarding enough that even some of my least technical friends can and do have Signal accounts. The "messaging app" is the front door that made sense to them. A regular user isn't going to think "I need to encrypt some stuff, so I'll pick up a toolkit like OpenPGP, figure out how to use it, and then eventually figure out how to integrate it into my email or SMS client"; a user is going to first want to "send a message to someone I know", and if it's secure that's a beautiful bonus, but convenience is generally the bottom line. (Most users are not looking for Ikea/Lego solutions; they just want to go to the big box store and grab something off the shelf.)
From that perspective, all of Keybase's features made a certain amount of sense. Maybe they had a few too many "front doors" (file sharing, [group] chat, git repo sharing, etc), but that's potentially more ways to get new users, more ways to explain to users why they might want good PKI in their lives without first explaining what PKI is or how it works.
(Even the cryptocurrency BS makes a kind of sense from that perspective. Cryptocurrency wallets have been a lot of users' first introduction to PKI and how easy it is to get wrong [all the variations on "I lost X Bitcoin because I didn't know the silly numbers mattered"]. With so many untrained users [badly] experiencing PKI for the first time with cryptocurrency, it does make a kind of sense for a PKI tool to try to show how to do it right/safely/user-friendly and integrate it with other PKI functions. [Regardless of whether or not you think encouraging general use of cryptocurrencies is a good idea in the first place.])
I just wish Keybase had gone into it with a business model that was tightly user-focused, strongly revenue generating, and less likely to get them acquihired into an entirely unrelated parent company.
We have a lot of different repositories and internal as well as open source libraries we maintain. We publish things to our internal Nexus when it's internal, and Maven Central when it's public.
Because our product is used in high-security scenarios, we needed a way to verify that every artifact we download (from the repos mentioned above) during the CI build is signed, to guarantee that if our repositories are compromised, we won't end up with compromised artifacts being brought into our product anyway (because the signing keys are kept securely in the CI agents, which we trust much more than the repositories as, for example, they are not exposed on the Internet).
So we needed signing keys in the CI agents, and we needed verification public keys (that we trust) in the releaser "machine" (which downloads everything and puts it together for the final build). We also needed the releaser machine to re-sign everything for publication on Maven Central and our distribution website, which our customers use to download it from and can also verify our signature on every artifact.
You're right that a sort of CA could help, but as I understand it, with PGP you don't need that in our case because all keys are controlled by us, so it's easy for us to trust keys and generate new keys as we spawn new agents.
This means our CI master has to have the master key, otherwise it couldn't sign other agents' keys... which is a bit of a problem, but we think it's still fine because our master CI is well protected (and if that gets compromised, everything else falls anyway).
It's been working well, but it was complicated to setup and it's a bit inconvenient to use when we do a full product release (it involves updating all keys locally as they're generated for CIs as I mentioned)...
i think the main problem with PGP has to do with its general usability, and often wrappers, GUIs etc. are mere workarounds to its UX flaws. like the article says, it's not much more than a swiss army knife and for many of the uses that most people mention here there are often better tools – end-to-end secure messaging, file encryption and so on.
i think pgp is a reasonable tool in absolute zero-trust circumstances, where one has to send and receive messages and files through channels that are at constant risk of tapping or phishing. even there, i'd say it's "very annoying to use" but it's probably better than death, or jail.
I tried to set up PGP on a YubiKey and use that for SSH authentication, email signing, and other things. In the end PGP made it so uncomfortable that I just stopped. PGP may be a secure piece of software (I don't know enough about this to have an opinion), but it's not a usable piece of software.
You are conflating PGP with PGP/MIME. The latter uses PGP to encrypt mail. You can encrypt other things with PGP as well.
But I kind of agree with your point: a lot of the "original rant" the article answers seems to be about its author preferring different tools over PGP, one of the relevant use-case comparisons being communication, where they pit Signal against PGP/MIME. Such a rant would not exist if PGP/MIME were not a thing. There are other use cases mentioned in the rant, but this one seems to be the primary one.
However, it's not that "pgp rides on email"; it's that "email encryption rides on pgp". Yes, for the use case of communication, email encryption has pitfalls that encrypted instant messengers like Signal do not have, and at the moment the latter is the better choice. But Signal has its own pitfalls, like piggybacking on phone numbers or its SaaS architecture with a single commercially controlled point of failure. Email is so damn robust, we could not kill it even if we wanted to.
I honestly do not get this whole "don't sign or encrypt email, use another app" idea. We live in a world where banks send five-digit one-time PINs over unencrypted SMS and where Microsoft signs their security newsletter with PGP. The "if I don't see the need, it should not exist" approach is egocentric, arrogant and naive. Email exists, and it isn't going away anytime soon, so a cryptosystem for email is a necessity.
Some of the "answers" are good rhetoric but bad technique. If you want to be reassured that the original rant is wrong, they're successful, and this is a prize-worthy debating tactic, but it's not useful for getting to the facts.
The "answers" around MDC in particular suffer from this, they harp about how it is mentioned several times (it's a fatal problem, I think I'd mention it more than once) and you might get the impression that this means it's an invalid criticism, but notice that they carefully don't ever say so, because of course it is a fatal problem in the real world.
If you dig further into the site, their apology for MDC basically goes like this: if PGP had AE instead, we'd have the exact same UX, so it doesn't fix anything; thus the problem is bad client software (weird that all the PGP software is bad, that doesn't seem like a coincidence). In fact, the whole point of AEAD is that you can't build that UX.
The problem is that the existing MDC handling gives back bogus plaintext and then just casually says "Also, the MDC was invalid" as if that wasn't the headline news. It behaves this way partly because MDC is optional, but also because PGP's developers don't understand that wrong is worse than failed. AE APIs just will not give you any plaintext at all when authentication fails, there is no authenticated plaintext.
The result is that in PGP you can alter Bob's message "Police on high alert, act casual, wait for my next message" to read "Bring the whole shipment to the park at midnight tonight" and yes, the MDC fails and maybe Alice's client tells her there was a problem, but the message "Bring the whole shipment to the park at midnight" is still right there for Alice to act on anyway, and so guess what Alice is going to do...
Cryptography has known how to reliably authenticate ciphertext for decades, back to and before Bellare and Namprempre. PGP builds on none of that knowledge; instead, they came up with their own goofy, broken construction.
The theme of my engineering complaints about PGP is: we would never tolerate these design shortcomings from a new cryptographic tool; any of them would simply be disqualifying. We tolerate them in PGP because we've bucketed it as a legacy system with a huge installed base of users. But in realistic terms, PGP has only a tiny fraction of the user base of any reasonably well known secure messenger. There's no reason to put up with any of this.
Your example seems overly dramatic to the point of being completely unbelievable. Can you provide some reference links or other further reading? How does an attacker change the complete text in a meaningful way without knowing Bob's private key?
Essentially, all modern symmetric encryption works by XORing something an attacker can't know with the plaintext to form a ciphertext. The recipient does the reverse operation to turn ciphertext back into plaintext.
So it's actually very simple to change what it says, without knowing the key, if you know what it says now. For example, XOR the old known message with your replacement to get a stream A, then you XOR the current ciphertext with A to get a new ciphertext, which will be decrypted to give your replacement message.
[Edited: I re-read this and I've oversold it substantially. In practice you won't have this easy a time with a non-toy system. But some of the cipher modes PGP still routinely uses will make it almost this easy if the attacker knows what they're doing and has the entire message text]
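To make the bit-flipping concrete, here's a toy sketch of the attack on a bare stream cipher. The random keystream just stands in for the output of a CTR/CFB-style cipher; the messages are the ones from the example above, and the attacker never touches the keystream:

```python
import os

def xor_bytes(a, b):
    # XOR two byte strings, truncating to the shorter one
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stream cipher: ciphertext = plaintext XOR keystream.
keystream = os.urandom(64)  # stands in for a real cipher's keystream

original = b"Police on high alert, act casual, wait for my next message"
# Pad the replacement to the same length as the original
replacement = b"Bring the whole shipment to the park at midnight".ljust(len(original))

ciphertext = xor_bytes(original, keystream)

# The attacker knows `original` but NOT the keystream:
delta = xor_bytes(original, replacement)   # original XOR replacement
tampered = xor_bytes(ciphertext, delta)    # flip the corresponding bits in transit

# The recipient decrypts as usual and gets the attacker's text.
decrypted = xor_bytes(tampered, keystream)
print(decrypted.decode())  # the replacement message, not Bob's
```

This is exactly the "XOR the old known message with your replacement" recipe from above, worked through end to end.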
What a modern system does here is it uses an "Authenticated Encryption" mode where the ciphertext is slightly larger such that decrypting also ends up calculating a "tag" value that should match one carried in the message. If the message was altered that tag won't match, and crucially there just is no plaintext in this case. If you have a halfway modern web browser, the secured connection to Hacker News does exactly this, so an on-path attacker can't change anything going back and forth, the most they can do is blow up your connection.
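The "no plaintext on failure" behaviour can be sketched with a minimal encrypt-then-MAC construction using only the Python standard library. The XOR "cipher" and the key handling here are illustrative stand-ins, not a real design; the point is only that the tag is checked before any plaintext is released:

```python
import hashlib
import hmac
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_keystream(enc_key):
    # Illustrative keystream derivation -- NOT a secure cipher
    return hashlib.sha256(enc_key).digest() * 4

def encrypt(enc_key, mac_key, plaintext):
    ct = xor_bytes(plaintext, toy_keystream(enc_key))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag  # ciphertext with authentication tag appended

def decrypt(enc_key, mac_key, blob):
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # Authentication failed: refuse to produce ANY plaintext
        raise ValueError("authentication failed")
    return xor_bytes(ct, toy_keystream(enc_key))

enc_key, mac_key = os.urandom(32), os.urandom(32)
blob = encrypt(enc_key, mac_key, b"act casual")
tampered = bytes([blob[0] ^ 1]) + blob[1:]  # flip one bit of the ciphertext

print(decrypt(enc_key, mac_key, blob))  # round-trips fine
try:
    decrypt(enc_key, mac_key, tampered)
except ValueError as e:
    print(e)  # tampering -> an error, never bogus plaintext
```

Real AEAD modes like AES-GCM or ChaCha20-Poly1305 integrate this into the cipher itself, but the API contract is the same: failed authentication yields an error, not plaintext plus a warning.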
But nobody was doing that when PGP was originally invented. So at first PGP relied on digital signatures over the entire message: after processing the entire message, apparently from Bob, PGP sees that it has a signature, and this signature fails. This isn't the message Bob signed.
Bad guys (who know the message but not the keys) can't prevent that. But it turns out that in practice they needn't care. The (bogus) decrypted plaintext is shown to Alice first, and Alice will have likely seen messages with "failed" signatures before, and she knows it's probably some stupid mail server configuration issue or something else unimportant. This message from Bob decrypted as expected, so what's the problem?
Now you know what the problem is, I explained it above, but Alice almost certainly doesn't, as you didn't when you replied. It seems crazy that an "encrypted" message could be altered so easily, who would design it that way?
More modern versions of (Open)PGP introduced this thing called the MDC, which makes it at least possible to detect whether this tampering has happened to the raw encryption, without signatures. However, for ages the default behaviour was still to show the user the bogus plaintext and only then warn them that there's a problem. That's no longer what happens with raw GnuPG itself out of the box, but there are still lots of caveats (e.g. Unix pipes defeat this), and so in practical use you often end up back with the user being shown the bogus plaintext anyway.
> However its not that "pgp rides on email" it is that "email encryption rides on pgp"
Doesn't it go both ways though? The web-of-trust seems email focused, keyservers seem email focused. 'gpg --gen-key' straight up asks you to encode your email into the metadata. I guess bootstrapping on an existing network is always a pain point.
What I was trying to highlight were the issues faced by 'partial adoption' of encryption, namely:
A) It being cumbersome in the MUA and therefore likely to be ignored/disabled/complained about
B) The myriad of ways for an OPSEC failure, or in common parlance, clicking the wrong button.
If PGPMAIL or somesuch had eclipsed plain-old email, as Signal/WhatsApp have eclipsed SMS, then these issues could be resolved at the platform level.
I use PGP outside of email all the time (you should too). You can PGP encrypt sensitive files (passwords, finances, etc.) to your PGP key and the PGP keys of your family members and upload them to shared Google and MS O365 folders.
For important, sensitive data that you need to access and keep confidential, PGP is great. Nothing else really comes close (despite all the criticisms).