>They propose a modification to the Bitcoin protocol such that it can tolerate colluding groups that control up to 1/4 of the mining power – less than the previously assumed bound of 1/2 of Byzantine mining power which requires an honest mining majority that follows the prescribed protocol.
Selfish mining is tolerable but not ideal. It is a bit misleading to claim that 25% block-withholding attacks are as critical as 51% attacks.
>Users keep an account in the system, where the user’s account is identified by the user’s public key or address.
Promoting address reuse isn't a good idea.
>When a validator signs duplicitously, a short evidence transaction can be generated by anyone with the two conflicting commit-vote signatures
The assumption that an attacker will tell you he is attacking isn't a very strong one.
>A user can avoid long-range attacks by syncing their blockchain periodically within the bounds of the unbonding period.
This security assumption means that no one can safely join the network and perform an initial sync. It isn't explained how they prevent long-range attacks, but it can be assumed that they do it by disallowing reorgs.
Disallowing reorgs allows another consensus breaking attack. The entire purpose of reorgs is to maintain consensus, so if an attacker can create a blockchain of length bond-period, they can trick you so you don't achieve consensus with the "main" blockchain.
Another thing that is worrying about this whole idea is that 2/3 of validators must come to consensus on what block to use. If you choose a block and broadcast your signature, you cannot sign again. So how do the validators solve the Byzantine Generals Problem? How do they determine which block they will sign? They could go with the first block they see, but that leaves you open to sybil attackers running many nodes and sending validators different blocks in order to trick them into splitting their vote. These are the fundamental problems that consensus systems are meant to solve, consensus systems shouldn't have to have a 2nd consensus system to support itself.
>There are three types of votes: a prevote, a precommit and a commit
It seems the paper is attempting to solve the previously mentioned problem, however the most fundamental explanation for the problem with this is explained by Byzantine Generals Problem. The problem essentially involves you sending a message of "I'm ready to attack" back and forth, but because you cannot know they received the message, you cannot attack. If they reply with a "okay, we got the message" message, they don't know that you received the message, so they don't know whether to attack. Whether there is 1 back and forth, 3 as in a "prevote to attack", a "precommit to attack" and a "commit to attack", or 100 back and forths, you aren't achieving consensus, you're skirting the issue.
The next few sections explain the mechanism, but not how it actually solves the Byzantine Generals Problem, relying on additional assumptions based on things that themselves require consensus, including: "If there are less than 1/3 in Byzantine voting power and at least one good validator decides on a block B, then no good validator will decide on any block other than B"
>Each round has a designated proposer chosen in round-robin fashion such that validators are chosen with frequency in proportion to their voting power
If the pseudo-random seed for the next round validators is information that can be decided by "previous" (or not so previous) validators, a validator can choose themselves. With a high number of validators, it may take a few thousand blocks for, say, a 3% stakeholder to randomly be selected for a sufficient number of votes to choose the next seed, but once they have ground[1] their first block and made themselves a majority of validators, they can repeat over and over and control the blockchain.
> Say that some good validators locked on block B and round R, and some good validators locked on block B0 at round R0, where R < R0. In this case the proof-of-lock from R0 included in a proposal by a good validator will eventually unlock those validators locked on R allowing the consensus process to continue.
This "solution" helps prevent 33% deadlocks, I agree, however now you can seed your stakegrinding attack with an incredibly small validator/voter %. Depending on the parameters of the system, you may only need to become a validator once to control the entire blockchain forever.
The system of voters has somewhat obscured it, but stake grinding[1] is still a major problem, in fact, it is probably the simplest and cheapest attack vector on the system defined in this paper.
In conclusion, this system has severe sybil problems, consensus problems, centralization problems (in that one person could control the entire blockchain cheaply), structure problems (address reuse) and usability/security-assumption problems (no one can safely sync with the network after the first bonding period is up?).
[1]Stake grinding is the process of changing data in a block to create a large number of new seeds (sometimes billions or trillions) until the seed designates you as the "winner", or in this case majority of validators.
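For concreteness, the grinding loop described in [1] might look like this toy Python sketch (the seed rule and all names here are hypothetical simplifications for illustration, not anything defined in the paper):

```python
import hashlib

def seed_from_block(prev_seed: bytes, nonce: int) -> int:
    # Hash some malleable block data (here just a nonce) into a seed.
    h = hashlib.sha256(prev_seed + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big")

def grind(prev_seed: bytes, my_index: int, n_validators: int, max_tries: int = 100_000):
    # Re-roll the malleable data until the seed designates us the "winner".
    for nonce in range(max_tries):
        if seed_from_block(prev_seed, nonce) % n_validators == my_index:
            return nonce  # publish the block built with this nonce
    return None

winning_nonce = grind(b"genesis", my_index=3, n_validators=100)
```

With 100 validators, each try wins with probability 1/100, so a winning nonce is typically found within a few hundred hashes; this is the "billions or trillions" search scaled down.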
I agree with you that the random number generator in tendermint isn't good enough. I am going to add one I have put in a different cryptocurrency before. Each validator gives the hash of a secret, and later gives their secret. All the secrets are XORed to find the random seed.
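The commit-reveal scheme described here could be sketched roughly as follows (a toy Python version with hypothetical names; real validators would also sign their commitments):

```python
import hashlib
import secrets

def commit(secret: bytes) -> bytes:
    # Phase 1: each validator publishes only the hash of its secret.
    return hashlib.sha256(secret).digest()

def combined_seed(revealed: list, commitments: list) -> int:
    # Phase 2: validators reveal; each reveal is checked against its
    # earlier commitment, then all secrets are XORed into one seed.
    seed = 0
    for s, c in zip(revealed, commitments):
        if hashlib.sha256(s).digest() != c:
            raise ValueError("reveal does not match commitment")
        seed ^= int.from_bytes(s, "big")
    return seed

validator_secrets = [secrets.token_bytes(32) for _ in range(4)]
commitments = [commit(s) for s in validator_secrets]
seed = combined_seed(validator_secrets, commitments)
```

Because XOR is applied over all secrets, any single honest contributor makes the seed unpredictable to the others, provided everyone is actually bound by their commitments.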
>I agree with you that the random number generator in tendermint isn't good enough. I am going to add one I have put in a different cryptocurrency before. Each validator gives the hash of a secret, and later gives their secret. All the secrets are XORed to find the random seed.
Tendermint doesn't have a random number generator at all. You cannot have consensus on random numbers, or else they wouldn't be random. It is a fundamental problem that you are determining the next blocks winner based on numbers that can be influenced by previous winners.
2 examples of multiparty games of choosing random numbers:
1)
You and I each write either "1" or "0" onto a piece of paper, and simultaneously reveal to each other.
If the sum is even, I win. If the sum is odd, you win.
This simple 2-person game randomly generates 1 bit. It can be extended to arbitrary numbers of people by replacing addition with XOR.
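A minimal sketch of the XOR extension (assuming honest simultaneous reveals; names are illustrative):

```python
def shared_bit(picks):
    # XOR every participant's bit; no single player controls the result
    # as long as at least one other pick is unknown to them in advance.
    bit = 0
    for p in picks:
        bit ^= p
    return bit

# Two-player version: an even sum (XOR == 0) means player one wins.
```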
2)
N people each vote either "0" or "1". They reveal their votes simultaneously. The minority participants are rewarded. The median of the votes is the next random bit.
Yeah, these work great out of consensus systems. Please consider that you're working on a consensus system though.
"Simultaneous" has no meaning. There are no means of instant communication, there is latency. There also is no proof of time without a consensus system.
So what is the proof that I committed my 0 or 1 value before they revealed theirs? Well, you could trust a central authority to maintain that timestamping, or you could even use Bitcoin for your timestamping. In fact, your PoS system could probably work if it piggybacked on Bitcoin completely.
If that's too abstract for you, consider that I say I'm ready to commit, everyone sends me their commits, and then I construct blocks with all of their commits plus a ton of other possible commits of my own, until I have created a seed that allows me to win and control the network permanently.
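That collect-then-choose attack could be sketched like this (a toy model where the seed is just the XOR of hashes of the included commits; all names are hypothetical):

```python
import hashlib
from itertools import combinations

def seed_of(secrets_list):
    # Toy seed: XOR of the hashes of all included commits.
    s = 0
    for sec in secrets_list:
        s ^= int.from_bytes(hashlib.sha256(sec).digest(), "big")
    return s

def grind_inclusion(others, my_commits, n_validators, my_slot):
    # The block producer decides which commits make it into the block,
    # so it can search over subsets of its own (sybil) commits until the
    # resulting seed designates it the winner.
    for k in range(len(my_commits) + 1):
        for subset in combinations(my_commits, k):
            if seed_of(others + list(subset)) % n_validators == my_slot:
                return list(subset)
    return None
```

With 16 of its own candidate commits the producer has 2^16 seeds to choose from, which is far more than enough to hit any one of 50 slots.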
After reflecting more on the difficulties of a random number generator in a consensus system, we decided to use a round-robin method to choose the next leader from the validators.
It depends on the application and the purpose of the blockchain, but the paper isn't explicitly promoting address reuse.
> The assumption that an attacker will tell you he is attacking isn't a very strong one.
Users don't consider blocks committed until they've seen +2/3 of commit signatures, so a fork implies that the victim has evidence of double-signing.
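The structural check on such evidence might look like this (a toy sketch with hypothetical field names; signature verification is omitted):

```python
def is_slashable(vote_a: dict, vote_b: dict) -> bool:
    # Two commit-votes are evidence of double-signing when the same
    # validator signed two *different* blocks at the same height and round.
    return (vote_a["validator"] == vote_b["validator"]
            and vote_a["height"] == vote_b["height"]
            and vote_a["round"] == vote_b["round"]
            and vote_a["block_hash"] != vote_b["block_hash"])
```

Anyone holding two such conflicting signed votes can package them into the evidence transaction described in the paper.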
> This security assumption means means that no one can safely join the network and perform an initial sync.
Not sure what you mean -- Maybe an analogy will clarify. Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?
We don't prevent long-range attacks. We assume that they will happen, and the clients should be coded accordingly. For example, if my phone wallet hasn't synced with the network in a while (longer than the unbonding period) then it should warn me the next time I connect, and perhaps ask me to validate a fingerprint of the blockchain via outside channels.
I'm aware of how checkpoints prevent reorgs in other protocols like PeerCoin. That's not how Tendermint works.
> So how do the validators solve the Byzantine Generals Problem?
Perhaps a prerequisite to understanding the Tendermint paper is to first understand the DLS algorithm. It works in a similar but different way. In short it's a round-based 2-phase locking system (prevotes and precommits) with a 3rd phase of signing on top to commit. Without the round-based 2-phase locking system, the protocol would have the problem that you mentioned. But assuming that there are less than 1/3 of byzantine voting power, the validators will come to consensus before they start committing.
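The +2/3 prevote threshold that gates precommits could be sketched like this (a simplified toy, not the actual Tendermint implementation):

```python
from collections import Counter

def precommit_target(prevotes, total_power):
    # prevotes: list of (validator, power, block) seen this round.
    # A good validator precommits (and locks on) a block only after
    # seeing prevotes from more than 2/3 of total voting power for it;
    # otherwise it stays unlocked and moves to the next round.
    tally = Counter()
    for _validator, power, block in prevotes:
        tally[block] += power
    for block, power in tally.items():
        if 3 * power > 2 * total_power:
            return block
    return None
```

The point of the extra phase is exactly this gate: no one commits to a block that fewer than +2/3 have already signalled for, so two conflicting blocks cannot both clear the threshold with less than 1/3 byzantine power.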
> stakegrinding attack with an incredibly small validator/voter %
Stakegrinding is less of an issue in Tendermint because of the deterministic round-robin algo for choosing the proposer, and the fact that the remaining validators need to commit. There are subtle attacks involving the timing-out of validators, but there are also workarounds. These attacks are probably not what you have in mind when you mention stakegrinding.
TLDR: Check out the DLS algorithm first, "Consensus in the Presence of Partial Synchrony (1988)", and then let's keep the discussion going. Also check out forum.tendermint.com for some other Q&As.
>Depends on the application and the purpose of the blockchain, but the paper isn't explicitly premoting address reuse.
It defines a system in which addresses are reused.
>Users don't consider blocks committed until they've seen +2/3 of commit signatures, so a fork implies that the victim has evidence of double-signing.
No it doesn't, unless you have a severe misunderstanding of information theory and think everyone knows all information everywhere (including attacking forks that haven't been published). Your security assumption shouldn't be that the attacker is dumb enough to announce that he's attacking and give up his money by broadcasting forks.
>Not sure what you mean -- Maybe an analogy will clarify. Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?
It worries me that you can't answer this and you're working on a cryptocurrency. They can achieve consensus because any of their peers can supply them with a proof of the transaction-output database. Not allowing reorgs allows a peer to give you a proof that prevents you from achieving consensus.
>For example, if my phone wallet hasn't synced with the network in a while (longer than the unbonding period) then it should warn me the next time I connect, and perhaps ask me to validate a fingerprint of the blockchain via outside channels.
The purpose of a blockchain is to achieve consensus without trusting anyone. If you trust that group, you probably would get along just as well with them managing a centralized database of transactions for you.
>I'm aware of how checkpoints prevent reorgs in other protocols like PeerCoin. That's not how Tendermint works.
I never mentioned checkpoints. The paper basically said it wouldn't allow large reorgs when it said it could prevent long range attacks and ignore the fork.
>Stakegrinding is less of an issue in Tendermint because of the deterministic round-robin algo for choosing the proposer, and the fact that the remaining validators need to commit.
The entire reason stake grinding is possible is because the selection of the next block's winner is deterministic. The previous winners can change the seed over and over again until they win.
>and the fact that the remaining validators need to commit.
This is not relevant; you can wait until the end or ignore their validations completely.
>Check out the DLS algorithm first, "Consensus in the Presence of Partial Synchrony (1988)"
This paper is fine, but it assumes fewer than 1/3 of the generals are traitors. Your system makes it trivial for a traitor to pretend to be all of the generals.
> The purpose of a blockchain is to achieve consensus without trusting anyone. If you trust that group, you probably would get along just as well with them managing a centralized database of transactions for you.
> This security assumption, the idea of “getting a block hash from a friend”, may seem unrigorous to many; Bitcoin developers often make the point that if the solution to long-range attacks is some alternative deciding mechanism X, then the security of the blockchain ultimately depends on X, and so the algorithm is in reality no more secure than using X directly – implying that most X, including our social-consensus-driven approach, are insecure.
> However, this logic ignores why consensus algorithms exist in the first place. Consensus is a social process, and human beings are fairly good at engaging in consensus on our own without any help from algorithms; perhaps the best example is the Rai stones, where a tribe in Yap essentially maintained a blockchain recording changes to the ownership of stones (used as a Bitcoin-like zero-intrinsic-value asset) as part of its collective memory. The reason why consensus algorithms are needed is, quite simply, because humans do not have infinite computational power, and prefer to rely on software agents to maintain consensus for us. Software agents are very smart, in the sense that they can maintain consensus on extremely large states with extremely complex rulesets with perfect precision, but they are also very ignorant, in the sense that they have very little social information, and the challenge of consensus algorithms is that of creating an algorithm that requires as little input of social information as possible.
Sure, the addresses for the validators should be reused. The more it's used the better.
>> Is this a problem that isn't present in Bitcoin? How can a brand new Bitcoin user securely join the network for the first time?
> It worries me that you can't answer this and you're working on a cryptocurrency.
I'm not asking because I don't know the answer. (I know the answer.) I'm asking because your answer will help me clarify the answer for you using the framework and terminology that you're comfortable with. And in this space, each term means different things in different contexts. :)
It's not trivial for a traitor to pretend to be all of the generals. Say there are validators A, B, C, D, and they have equal voting power. Given this configuration, it's impossible for A to be the proposer for 2 blocks in a row unless some of the other validators were absent. It's not just deterministic -- it's round-robin.
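One way such a power-weighted round-robin can be made deterministic is an accumulator scheme along these lines (a sketch; not necessarily Tendermint's exact algorithm):

```python
def next_proposer(powers, accum):
    # powers: {validator: voting power}; accum persists across heights.
    # Every validator's accumulator grows by its power each height; the
    # highest accumulator proposes and is charged the total power, so
    # each validator proposes with frequency proportional to its stake.
    total = sum(powers.values())
    for v, p in powers.items():
        accum[v] = accum.get(v, 0) + p
    proposer = max(accum, key=lambda v: (accum[v], v))
    accum[proposer] -= total
    return proposer
```

With equal powers this degenerates to a plain rotation (A, B, C, D each propose once per cycle), which is the point being made: there is no random sampling step to grind.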
>Sure, the addresses for the validators should be reused. The more it's used the better.
That's not at all what I said, but this is beside the point.
>It's not trivial for a traitor to pretend to be all of the generals. Say there are validators A, B, C, D, and they have equal voting power. Given this configuration, it's impossible for A to be the proposer for 2 blocks in a row unless some of the other validators were absent. It's not just deterministic -- it's round-robin.
And your whitepaper allows block creation with absent validators, which makes it trivial, because you then have total control over the seed determining the next set of validators. Even if you changed that, it would still be trivial, because you would only need to have the last input into the seed one time to grind out a signature such that you are the exclusive validator.
You continue to assert that the attack isn't feasible, but you aren't actually addressing the reason I have given you twice now.
"It's not [feasible] for a traitor to pretend to be all the generals <no explanation>. Say there are [arbitrary example]. Given this configuration it's infeasible for the attack to be performed <no explanation>. <Repetition of facts that are part of the reason stake grinding works, including its determinism>."
If you aren't aware of what stake grinding allows: it is an attack where you calculate the deterministic results of many, many signatures (seeds) and find signatures that result in you winning, allowing you to perform the attack once again.
> you have total control over the seed determining the next set of valdators
There are many possible algorithms for determining this seed, some of which do solve the stake grinding problem. I personally like the NXT algo; its basic approach in simplified form is:
seed(block) = hash(seed(block.parent) + block.proposer.address)

No grind capabilities at all. No matter how many times you try, you will always generate the exact same result. The randomness basically comes from absentee proposers, so the only way to influence the outcome (not proposing) is costly, as you sacrifice your reward and all other influence over the block beyond signing / not signing it.
>seed(block) = hash(seed(block.parent) + block.proposer.address)
That is trivially grindable. If the seed is your address then you create an additional address such that the next winning address is yours.
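A toy sketch of that address-grinding attack against the quoted seed rule (a hypothetical simplification in which the seed directly picks a winning slot):

```python
import hashlib
import secrets

def next_seed(parent_seed: bytes, proposer_address: bytes) -> bytes:
    # The formula under discussion: hash(seed(parent) + proposer.address).
    return hashlib.sha256(parent_seed + proposer_address).digest()

def grind_address(parent_seed: bytes, my_slot: int, n_validators: int,
                  tries: int = 100_000):
    # The attack sketched above: keep minting fresh addresses until one
    # of them makes the next seed land on the attacker's slot.
    for _ in range(tries):
        addr = secrets.token_bytes(20)
        if int.from_bytes(next_seed(parent_seed, addr), "big") % n_validators == my_slot:
            return addr
    return None
```

Whether this is cheap in practice depends on whether a freshly minted address can actually become the proposer, which is the crux of the disagreement here.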
> control over the seed determining the next set of validators
That notion makes sense in something like PeerCoin or other preexisting PoS designs, but I don't see how that phrase applies to Tendermint. There is no "sampling" -- all validators must sign every block.
I think you underemphasize this point. My Slasher algo ( https://blog.ethereum.org/2014/01/15/slasher-a-punitive-proo... ) from way back then does use sampling; Tendermint does not, and thus gains a whole bunch of robustness properties at some cost to efficiency.
Well now you're defining a new system. You can't expect me to explain the security of an undefined system, so I am sticking to the system defined in your paper which states that not all validators need to sign every block.
The whitepaper says that +2/3 of the entire validator set (in voting power) must sign every block. In the ideal scenario (in other words, in the stable state) every validator does sign every block.
Yes, it is pointless for me to argue about an undefined system. If you mean to say that 67% of the validator set must sign every block, then you shouldn't argue from the premise that 100% must sign every block.