
Notes on Scalable Blockchain Protocols (version

Vitalik Buterin (Ethereum)

Reviewers and collaborators: Gavin Wood, Vlad Zamfir (Ethereum), Jeff Coleman (Kryptokit)

March 14-31, 2015

Chapter 1


Currently, all blockchain consensus protocols that are actively in use have a critical limitation: every node in the network must process every transaction. Although this requirement ensures a very strong guarantee of security, making it all but impossible for an attacker even with control over every node participating in the consensus process to convince the network of the validity of an invalid transaction, it comes at a heavy cost: every blockchain protocol that works in this way is forced to choose between limiting itself to a low maximum transaction throughput, with a resulting high per-transaction cost, and allowing a high degree of centralization.

What might perhaps seem like a natural solution, splitting the relevant data over a thousand blockchains, has the two fundamental problems that it (i) decreases security, to the point where the security of each individual blockchain on average does not grow at all as total usage of the ecosystem increases, and (ii) massively reduces the potential for interoperability. Other approaches, like merge-mining, seem to have stronger security, though in practice such approaches are either very weak in more subtle ways [1] - and indeed, merge-mined blockchains have successfully been attacked [2] - or simply fail to solve scalability, requiring every consensus participant to process every blockchain and thus every transaction anyway.

There are several reasons why scalable blockchain protocols have proven to be such a difficult problem. First, because we lose the guarantee that a node can personally validate every transaction in a block, nodes now necessarily need to employ statistical and economic means in order to make sure that other blocks are secure; in essence, every node can only fully keep track of at most a few parts of the entire state, and must therefore be a light client, downloading only a small amount of header data and not performing full verification checks, everywhere else. Second, we need to be concerned not only with validity but also with data availability; it is entirely possible for a block to appear that looks valid, and is valid, but for which the auxiliary data is unavailable, leading to a situation where no other validator can effectively validate transactions or produce new blocks since the necessary data is unavailable to them. Finally, transactions must necessarily be processed by different nodes in parallel, and given that it is not possible to parallelize arbitrary computation [3], we must determine the set of restrictions to a state transition function that will best balance parallelizability and utility.

This paper proposes another set of strategies for achieving scalability, as well as some formalisms that can be used when discussing and comparing scalable blockchain protocols. Generally, the strategies revolve around a sample-and-fallback game: in order for any particular block to be valid, require it to be approved by a randomly selected sample of validators, and in case a bad block does come through, employ a mechanism in which a node can challenge invalid blocks, reverting the harmful transactions and transferring the malfeasant validators' security deposit to the challengers as a reward unless the challenge is answered by a confirmation of validity from a much greater number of validators. Theoretically, under a Byzantine-fault-tolerant model of security, fallback is unnecessary; sampling only is sufficient. However, if we are operating in an anonymous-validator environment, then it is also considered desirable to have a hard economic guarantee of the minimum quantity of funds that needs to be destroyed in order to cause a certain level of damage to the network; in that case, in order to make the economic argument work we use fallback as a sort of incentivizer-of-last-resort, making cooperation the only subgame-perfect equilibrium.
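The sample-and-fallback game described above can be sketched in a few lines of Python. All names, sample sizes, and the 2/3 supermajority threshold below are illustrative assumptions, not part of the paper's specification:

```python
import random

def sample_validators(pool, k, seed):
    # Pseudorandomly choose k validators; in a real protocol the seed would be
    # derived from in-chain randomness, not supplied by the caller.
    return random.Random(seed).sample(pool, k)

def provisionally_accept(votes, pool, k, seed):
    # A block is provisionally valid if every validator in a small random
    # sample approves it.
    return all(votes[v] for v in sample_validators(pool, k, seed))

def resolve_challenge(votes, pool, big_k, seed):
    # Fallback: a challenged block stands only if a much greater number of
    # validators confirms validity (a 2/3 supermajority here, as an assumption).
    sample = sample_validators(pool, big_k, seed)
    return sum(votes[v] for v in sample) * 3 >= 2 * len(sample)
```

In the full design, a failed challenge transfers the malfeasant validators' deposits to the challengers, which is what makes cooperation the equilibrium strategy.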

The particular advantage that our designs provide is a very high degree of generalization and abstraction; although other schemes using technologies like auditable computation [5] and micropayment channels [6] exist for increasing efficiency in specific use cases like processing complex verifications and implementing currencies, our designs require no specific properties of the underlying state transition function except for the demand that most state changes are in some sense localized.

Our basic designs allow for a network consisting solely of nodes bounded by O(N) computational power to process a transaction load of O(N²) (we generally use O(N) to elide polylogarithmic factors for simplicity), under both Byzantine-fault-tolerant and economic security models; we then also propose an experimental stacking strategy for achieving arbitrary scalability guarantees up to a maximum of O(exp(N/k)) transactional load, although we recommend that for simplicity implementers attempt to perfect the O(N²) designs first.
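To make the O(N) versus O(N²) claim concrete, here is the underlying arithmetic with illustrative numbers (not figures from the paper): if each of N nodes can do on the order of N units of work per block, and each transaction only needs to be processed by a small constant-size sample of nodes, total throughput grows quadratically while per-node load stays linear:

```python
# Illustrative numbers only: N nodes, each bounded by O(N) work per block.
N = 1000          # number of nodes
sample_size = 10  # validators that must process each transaction (a constant)

# Total work available per block, divided by the work needed per transaction:
throughput = (N * N) // sample_size
assert throughput == 100_000  # scales as N**2, while each node still does O(N)
```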


Chapter 2


    Definition 2.1 (State). A state is a piece of data.

Definition 2.2 (Transaction). A transaction is a piece of data. A transaction typically includes a cryptographic signature from which one can verify its sender, though in this paper we use "transaction" in some contexts more generally.

Definition 2.3 (State Transition Function). A state transition function is a function APPLY(σ, τ) → σ′ ∪ {∅}, where σ and σ′ are states and τ is a transaction. If APPLY(σ, τ) = ∅, we consider τ invalid in the context of σ. A safe state transition function SAFE(APPLY) = APPLY′ is defined by APPLY′(σ, τ) = {σ if APPLY(σ, τ) = ∅ else APPLY(σ, τ)}.
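Definition 2.3 translates directly into code. In the sketch below, None stands in for the invalid result ∅, and the currency-transfer APPLY is purely a hypothetical example of a state transition function, not anything prescribed by the paper:

```python
def apply_tx(state, tx):
    # Hypothetical APPLY: move `amount` from `frm` to `to`; returns None
    # (standing in for the empty set) when the transaction is invalid.
    if state.get(tx["frm"], 0) < tx["amount"]:
        return None
    new = dict(state)
    new[tx["frm"]] -= tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def safe(apply_fn):
    # SAFE(APPLY) = APPLY': return the old state unchanged instead of failing.
    def apply_prime(state, tx):
        result = apply_fn(state, tx)
        return state if result is None else result
    return apply_prime
```

Here safe(apply_tx)(σ, τ) returns σ itself whenever apply_tx(σ, τ) would return None, matching the definition of APPLY′.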

    For simplicity, we use the following notation:

APPLY(σ, τ) = σ′ becomes σ + τ = σ′

SAFE(APPLY)(σ, τ) = σ′ becomes σ ++ τ = σ′

Given an ordered list T = [τ_1, τ_2, τ_3 ...] and a state σ, we define σ + T = σ + τ_1 + τ_2 + ...

Given an ordered list T = [τ_1, τ_2, τ_3 ...] and a state σ, we define σ_0 = σ and σ_{i+1} = σ_i ++ τ_{i+1}. We denote with T \ σ the ordered list of all τ_i ∈ T such that σ_{i−1} + τ_i ≠ ∅.

T is valid in the context of σ if T = T \ σ

[n] is the set {1, 2, 3 ... n}

Definition 2.4 (Replay-immunity). A state transition function is replay-immune if, for any σ, τ and T, σ ++ τ ++ T + τ = ∅. Generally, we assume that all state transition functions used in consensus protocols are replay-immune; otherwise, the protocol is almost certainly vulnerable to either exploits or denial-of-service attacks.


Note. Replay-immunity is not a sufficient condition for being double-spend-proof. Descriptions of blockchains as anti-replay oracles [4] emphasize the functionality of preventing the successful application of more than one member of a particular class of transactions, where the class is usually defined as the transactions that spend a particular unspent balance, which is different from the kind of replay-immunity that we describe here.
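A toy replay-immune transition function can be built by giving each account a nonce, so that re-applying a transaction after it has succeeded yields ∅ (None below); the account layout is a hypothetical illustration, not the paper's:

```python
def apply_tx(state, tx):
    # A transaction is valid only if its nonce equals the sender's current
    # nonce; a successful application increments that nonce, so replaying
    # the same transaction later yields None (the tau after T case of
    # Definition 2.4, shown here with T = []).
    acct = state.get(tx["frm"])
    if acct is None or acct["nonce"] != tx["nonce"] or acct["bal"] < tx["amount"]:
        return None
    new = {k: dict(v) for k, v in state.items()}  # deep-enough copy
    new[tx["frm"]]["bal"] -= tx["amount"]
    new[tx["frm"]]["nonce"] += 1
    dest = new.setdefault(tx["to"], {"bal": 0, "nonce": 0})
    dest["bal"] += tx["amount"]
    return new
```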

Definition 2.5 (Commutativity). Two transactions or lists of transactions T1, T2 are commutative in the context of σ if σ + T1 + T2 = σ + T2 + T1.

Definition 2.6 (Partitioning scheme). A partitioning scheme is a bijective function P which maps possible values of σ to a tuple (σ_1, σ_2 ... σ_n) for a fixed n. For simplicity, we use σ[i] = P(σ)[i], and if S is a set, σ[S] = {P(σ)[i] : i ∈ S}. An individual value σ[i] will be called a substate, and a value i will be called a substate index.

Definition 2.7 (Affected area). The affected area of a transaction (or transaction list) τ in the context of σ (denoted AFFECTED(σ, τ)) is the set of indices S such that (σ + τ)[i] ≠ σ[i] for i ∈ S and (σ + τ)[i] = σ[i] for i ∉ S.

Definition 2.8 (Observed area). The observed area of a transaction (or transaction list) τ in the context of σ (denoted OBSERVED(σ, τ)) is defined as the smallest set of indices S such that for any ψ where ψ[S] = σ[S] and ψ′ = ψ + τ, we have ψ′[S] = (σ + τ)[S] and ψ′[i] = ψ[i] for i ∉ S.

Note. The observed area of a transaction, as defined above, is a superset of the affected area of the transaction.
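For a state partitioned by dictionary keys (a hypothetical partitioning scheme where each key is a substate index), the affected area of Definition 2.7 can be computed directly by diffing substates before and after; the transfer function below is again only an illustration:

```python
def apply_tx(state, tx):
    # Illustrative transfer APPLY over a key-value state.
    if state.get(tx["frm"], 0) < tx["amount"]:
        return None
    new = dict(state)
    new[tx["frm"]] -= tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def affected_area(state, tx):
    # AFFECTED(sigma, tau): the substate indices (keys) whose value changes.
    after = apply_tx(state, tx)
    keys = set(state) | set(after)
    return {k for k in keys if state.get(k) != after.get(k)}
```

The observed area, by contrast, depends on which substates the function reads, not just which ones it writes, so it cannot be recovered by diffing states alone.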

Theorem 2.0.1. If, in the context of σ, the observed area of T1 is disjoint from the affected area of T2 and vice versa, then T1 and T2 are commutative.

Proof. Consider a partition of [n] into the subsets a = AFFECTED(σ, T1), b = AFFECTED(σ, T2) and c = [n] \ a \ b. Let us repartition σ as (σ_a, σ_b, σ_c) for σ[a], σ[b] and σ[c] respectively. Suppose that σ + T1 = (σ_a′, σ_b, σ_c), and σ + T2 = (σ_a, σ_b′, σ_c). Because a is outside the observed area of T2 (as it is the affected area of T1), we can deduce from the previous statement that (x, σ_b, σ_c) + T2 = (x, σ_b′, σ_c). Hence, σ + T1 + T2 = (σ_a′, σ_b, σ_c) + T2 = (σ_a′, σ_b′, σ_c). Similarly, because b is outside the observed area of T1, σ + T2 + T1 = (σ_a, σ_b′, σ_c) + T1 = (σ_a′, σ_b′, σ_c).

Definition 2.9 (Disjointness). Two transactions are disjoint if there exists a partition such that the observed area of each transaction is disjoint from the affected area of the other. If two transactions are disjoint then, as shown above, they are commutative.

Note. Disjointness does not preclude the observed areas of T1 and T2 from intersecting. However, in the rest of this paper, we will generally follow a stricter criterion where observed areas are not allowed to intersect; this does somewhat reduce the effectiveness of our designs, but it carries the benefit of allowing a substantially simplified analysis.
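The commutativity guaranteed by Theorem 2.0.1 is easy to check on a toy example: two transfers over disjoint account sets (a hypothetical state layout) produce the same final state in either order:

```python
def apply_tx(state, tx):
    # Illustrative transfer APPLY; the observed area of a transfer is
    # {frm, to} (it reads the sender's balance and writes both balances),
    # so transfers over disjoint account sets are disjoint transactions.
    if state.get(tx["frm"], 0) < tx["amount"]:
        return None
    new = dict(state)
    new[tx["frm"]] -= tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

s = {"a": 10, "b": 0, "c": 7, "d": 0}
t1 = {"frm": "a", "to": "b", "amount": 4}   # touches only {a, b}
t2 = {"frm": "c", "to": "d", "amount": 2}   # touches only {c, d}
assert apply_tx(apply_tx(s, t1), t2) == apply_tx(apply_tx(s, t2), t1)
```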

Definition 2.10 (Block). A block β is a piece of data, encoding a collection of transactions typically alongside verification data, a pointer to the previous block, and other miscellaneous metadata. There typically exists a block-level state transition function APPLY′, where APPLY′(σ, β) = σ + T, where T is a list of transactions. We of course denote APPLY′(σ, β) with σ + β. A block is valid if T is valid in the context of σ, and if it sati
