
Managing Secrets with Consensus Networks: Fairness, Ransomware and Access Control

Gabriel Kaptchuk
Johns Hopkins University

[email protected]

Ian Miers
Johns Hopkins University

[email protected]

Matthew Green
Johns Hopkins University

[email protected]

Abstract

In this work we investigate the problem of using public consensus networks – exemplified by systems like Ethereum and Bitcoin – to perform cryptographic functionalities that involve the manipulation of secret data, such as cryptographic access control. We consider a hybrid paradigm in which a secure client-side functionality manages cryptographic secrets, while an online consensus network performs public computation. Using this approach, we explore both the constructive and potentially destructive implications of such systems. We first show that this combination allows for the construction of stateful interactive functionalities (including general computation) from a stateless client-side functionality, which can be implemented using inexpensive trusted hardware or even purely cryptographic functionalities such as Witness Encryption. We then describe a number of practical applications that can be achieved today. These include rate-limited mandatory logging; strong encrypted backups from weak passwords; enforcing fairness in multi-party computation; and destructive applications such as autonomous ransomware, which allows for payments without an online party.

1 Introduction

Cryptography is a powerful tool for managing access to protected data. From disk encryption [Bla94, Bla93, The] to modern database encryption [PRZB11], cryptography is often used to restrict which data objects can be accessed by users. Many modern access control systems perform dynamic, or interactive, cryptographic access control: employing a trusted server or a local tamper-resistant module to store and selectively deliver cryptographic keys to authorized clients when various conditions are met [Krs16, KRS+03].

These solutions have many advantages: they can enforce centralized logging and revocation, and can provide assurances such as restricting the user to a limited number of authentication attempts [App16, Sko16, Krs16, Pro]. Unfortunately, such dynamic access control systems have corresponding disadvantages: in the online case, they require the construction of expensive centralized infrastructure that is vulnerable to remote attacks and DoS. In the purely offline case (e.g., using trusted local hardware) it is challenging to enforce guarantees such as mandatory file access logging, or to revoke stolen devices. Moreover, sophisticated attackers may be able to “rewind” the non-volatile state of some inexpensive devices [Sko16], disabling security guarantees such as passcode guessing limits.

In recent years a new class of distributed system has evolved. Loosely categorized as consensus networks and exemplified by Bitcoin and Ethereum [Nak12, etha], these systems employ highly decentralized and volunteer-run networks to evaluate public rules that manage access to various resources. The most famous of these systems, Bitcoin [Nak12], uses consensus to enforce monetary payment rules. However, more recent systems have been proposed for applications ranging from identity management [nam16, GGM13] to resource management in distributed systems [JMH15, GRFJ14, BP15]. The benefits of using consensus networks are twofold: (1) these volunteer-run networks are readily available and thus do not require the deployment of expensive new infrastructure, and (2) compromising such networks often requires the attacker to deploy significant resources, computational or otherwise.



[Figure 1 diagram: the Custodian (stateless, holds secrets) communicates with the Contract (stateful, no secrets, broadcast channel) via the User.]

Figure 1: Custodian-Contract interaction. The Custodian interacts with a Contract functionality via a (possibly adversarial) user. The Contract holds state and has access to a public input/output channel (e.g., an append-only blockchain ledger), while the Custodian is a local trusted functionality that holds secret data.

While public consensus networks have evolved to allow the execution of arbitrary user-defined programs (called “smart contracts”) [etha], these networks have a critical limitation that restricts their usefulness for certain interactive cryptographic protocols. Namely, public consensus systems do not provide the ability to securely store and operate on secret data such as cryptographic keys. While hypothetically it might be possible to upgrade these systems to incorporate secret storage, in practice this is challenging. It would require either a large amount of trust to be placed in a small number of reliable nodes, or else it would necessitate new and complex multiparty computation and proactive secret sharing protocols to distribute and verify trust across the larger (and constantly churning) volunteer network. This challenge greatly limits the usefulness of these networks in controlling access to certain types of encrypted data.

Our contributions. In this work we take a step towards addressing the management of secret data using public consensus networks. The main building block in our proposals is a new paradigm we refer to as Custodian-Contract interaction. This proposal divides each interactive secret computation into a protocol run between two parties: a stateless, client-side functionality called the Custodian that securely manages secrets, and a Contract that enforces consensus rules in a public network. The Custodian may be implemented using tamper-resistant hardware, or cryptographic obfuscation techniques. The Contract may be implemented using a public online consensus network. The communication between each functionality is facilitated by a possibly adversarial User, as shown in Figure 1.

We observe that this idea has some powerful implications. First, by instantiating the Contract on a computationally-supported consensus network (e.g., a smart contract system that uses proof-of-work blockchains to achieve consensus), we immediately obtain a means for the Custodian to cryptographically validate the authenticity of decisions made by the consensus network. In this setting, the Custodian functionality can, using only modest computational effort, verify a fragment of the blockchain to ensure (with reasonable probability) that it is a legitimate consensus output.1 Under the economic assumption that the consensus network possesses significantly more computational power than any likely adversary, this ensures that interactions between the Contract and Custodian are authenticated.2 This holds even when the adversary (i.e., the User) provides the communication channel between the two components.

Second, we observe that the combination of Contract and Custodian can achieve properties that may not be achievable using trusted hardware alone. Interaction with a Contract provides the ability to condition certain secret calculations on public actions taken on the consensus network. In particular, this allows us to condition decryption operations on the publication of certain messages, including monetary payments made to specific parties on the network. This enables practical applications such as enforced file access logging, device revocation, and even the enforcement of payments in exchange for delivery of secrets. In malicious hands, this raises the specter of autonomous ransomware that operates verifiably and without any need for a C&C or secret distribution center. Finally, we show that these techniques can be used to facilitate enforced

1 A similar idea was proposed for implementing “sidechains” in Bitcoin [BCD+14], and is related to Simplified Payment Verification in Bitcoin clients.

2 As an example of the viability of this assumption, we note that the current Bitcoin network executes approximately 2^70 SHA2 hashes every 10 minutes.
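For scale, an illustrative back-of-the-envelope conversion (our own arithmetic, not a figure from the paper):

\[ \frac{2^{70}\ \text{hashes}}{600\ \text{s}} \approx 2 \times 10^{18}\ \text{hashes per second}. \]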


completion of multiparty computation (MPC), which provides a new solution to a well-known problem related to fairness of MPC protocols [Cle86].

As a practical matter, we show that – given access to an appropriate smart Contract network – an extremely lightweight and stateless Custodian (such as the hardware proposed in [NFR+17]) can now be used to perform interactive and stateful calculations involving secret data. We show that in practice this simple enhancement enables us to construct complex access control policies that depend on state, such as limiting the number of login attempts to a password-encrypted device – even when an attacker is able to modify and rewind the contents of the Custodian’s local storage [Sko16]. This capability is particularly exciting when the Custodian is implemented using purely software techniques, such as cryptographic obfuscation [BCP14, BR14] or inexpensive stateless hardware co-processors [NFR+17, DMMQN11].

1.1 Applications

To motivate our techniques, we describe a number of useful applications that can be implemented using the Custodian-Contract paradigm, including both constructive and potentially destructive techniques.

Limiting password guessing. Many cryptographic access control systems employ passwords to control access to encrypted filesystems [App16, Pro] and cloud backup images (e.g., Apple’s iCloud Keychain [Krs16]). This creates a tension between the requirement to support easily memorable passwords (such as device PINs), while simultaneously preventing attackers from simply guessing users’ relatively weak passwords [Bon12, USB+15].3 One approach to addressing this is to incorporate tamper-resistant hardware such as onboard co-processors [App16, Pro, ARM] and Hardware Security Modules [Krs16]. However, these systems are expensive (particularly in the online backup case [Krs16]) and may fail catastrophically when an attacker can rewind state.4 We show that using our paradigm we can safely enforce passcode guessing limits even when using inexpensive hardware that cannot guarantee immutable state [Sko16].

Fairness in multi-party computation. Multi-party computation (MPC) is a powerful cryptographic primitive that allows many players to cooperatively compute an arbitrary functionality without revealing their individual inputs. A well-known limitation of MPC protocols is that of ensuring fairness. When at least half the players in a computation are malicious, an attacker may be able to abort the computation before the honest parties learn the output [Cle86]. As a novel application of our technique, we construct a new primitive called fair encryption, which ensures that a ciphertext can be decrypted if and only if it has been published on a blockchain. Using this primitive we show how to construct a new class of fair MPC protocols in which a malicious party provably learns no output unless all parties learn the output of the protocol. This advances a line of work on using blockchain-based protocols to penalize misbehaving parties in MPC protocols [GK10, ADMM14].

Autonomous ransomware. Ransomware is a class of malware that encrypts a victim’s files, then demands a monetary ransom in exchange for decryption. Modern ransomware platforms are tightly integrated with cryptocurrencies such as Bitcoin, which act as both the ransom currency and a communication channel to the attacker [Sin16]. Once a system has been infected, users must transmit an encrypted key package along with a ransom payment to the attacker, who responds with the necessary decryption keys. The need to deliver secret keys is a fortunate weakness in the current ransomware paradigm, as it is both costly for the attacker and increases the probability that she will be traced by law enforcement [Goo13]. More critically, this makes ransom payment a risky proposition for the victim, who cannot verify a priori that paying the ransom will restore access to data.5 In this work we remark on a potentially

3 This is made more challenging due to the fact that manufacturers have begun to design systems that do not include a trusted party – due to concerns that trusted escrow parties may be compelled to unlock devices [App].

4 See [Sko16] for an example of how such systems can be defeated when state is recorded in standard NAND hardware, rather than full tamper-resistant hardware.

5 Indeed, an emerging class of pseudo-ransomware has exploited this flaw to extort money without actually providing the ability to decrypt the locked files (in some cases unintentionally [Tre16b, Cim15, Tre16a]).


destructive application of the Custodian-Contract paradigm: the creation of ransomware that operates autonomously – from infection to decryption – with no need for remote parties to deliver secret keys. This ransomware employs local trusted hardware or obfuscation to store a decryption key for a user’s data, and conditions decryption of the user’s data on payments made on a public consensus network.

Mandatory logging for local file access. In some corporate and enterprise settings, clients must publish access logs for sensitive files. This requires that each file access be recorded by some online system. We propose to use a Custodian-Contract interaction to mandate logging of each file access before the necessary keys for an encrypted file can be accessed by the user. In this setting, log entries may be public (i.e., visible to all parties on the network) or they may be encrypted to a specific management authority.

1.2 Intuition

We now briefly present the intuition behind our construction. Recall that the goal is to perform interactive computation on secret data, using a stateless but trustworthy computing environment (the Custodian) in combination with a stateful smart contract. For some secret data s, we will model the desired computation as a probabilistic reactive functionality of the form:

NextStep_s(Input_i, ST_{i−1}; r) → (Output_i ‖ P_i, ST_i)

On user input Input_i, previous state ST_{i−1}, and random coins r, this function produces a user output Output_i and new state ST_i. (For some applications described later in this work, we subdivide the output into a user output for private delivery to the user, and a public output P_i that is intended to be broadcast to a network.)
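To make this interface concrete, here is a minimal Python sketch of such a reactive functionality; the names (StepResult, counter_functionality) are illustrative only and do not appear in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StepResult:
    output: bytes         # Output_i, delivered privately to the user
    public_output: bytes  # P_i, intended for broadcast to the network
    new_state: bytes      # ST_i, carried (in encrypted form) to the next round

# NextStep_s(Input_i, ST_{i-1}; r) -> (Output_i || P_i, ST_i)
NextStep = Callable[[bytes, Optional[bytes], bytes], StepResult]

def counter_functionality(inp: bytes, prev_state: Optional[bytes], coins: bytes) -> StepResult:
    """Toy functionality: count how many rounds have been executed so far."""
    count = int.from_bytes(prev_state, "big") if prev_state else 0
    count += 1
    return StepResult(output=str(count).encode(),
                      public_output=b"",
                      new_state=count.to_bytes(8, "big"))
```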

In our model a (possibly adversarial) User is responsible for invoking each round of computation. A key distinction from previous work (e.g., [NFR+17]) is that the user adaptively selects the input to each round of the computation, possibly as a function of previous outputs.6 We do not require that the User completes a given computation. Instead we seek to achieve the following intuitive properties: (1) for each round, the user learns only the output, but no intermediate function of the computation, and (2) the user cannot rewind the functionality or access the state held in between rounds. We seek to achieve these goals even in settings where the Custodian has no secure means to keep state between rounds of computation, and must outsource this function to the User. Finally, as a third goal, we wish to construct a means to condition computation on the public broadcast of certain outputs to a consensus network.

Using the contract to keep state. The first idea behind our construction is to use the Contract as a means to enforce the order of a computation on the (stateless) Custodian; this prevents the adversary from rewinding and executing earlier rounds of the functionality on alternative inputs. As an auxiliary benefit, the Contract provides a relatively censorship-resilient channel that the Custodian may use to receive or transmit data for public consumption. This allows the Custodian to condition operation on network broadcasts as well as other types of public transactions supported by the Contract, such as monetary payments.

A protocol sketch. We begin with a simplified version of our main construction. This approach divides the computation into two portions: one executed by the Custodian and the other by the Contract. Our protocol operates under the key assumption that Contract outputs can be authenticated, e.g., by verifying a proof-of-work chain produced by a consensus network. The User acts as a communication channel between the parties, and is responsible for both selecting the input to each round of computation and invoking the Custodian and Contract on specific values. We assume a trusted setup in which the Custodian is provisioned with a computation secret s as well as an additional secret key S sampled uniformly from {0, 1}^λ, while the Contract state is initialized to be empty. Let C_0 ← ε, ST^enc_0 ← ε.

For i ≥ 1, we execute the ith round of the computation as follows:

6 We remark that this is the only appropriate model for a dynamic cryptographic access system, given that the identity of each file to be accessed may depend on the result of previous file accesses.


1. The user first commits to the messages (Input_i, C_{i−1}) using a secure commitment scheme, where Input_i is the user’s chosen input to the ith round of computation and C_{i−1} is the commitment generated during the previous round of the protocol (or ε initially).

2. The user transmits the resulting commitment C_i to the smart Contract.7 The smart Contract records the commitment C_i and accesses its local state to obtain the commitment C_{i−1} that the user transmitted during the previous call to the Contract (or C_0 = ε at the first round). It then returns the pair (C_{i−1}, C_i) to the User, along with an authenticator π_i that verifies the provenance of the returned pair. In a private consensus network, this authenticator might be a signature calculated over the returned value. In a public consensus network, the authenticator may comprise a proof-of-work computed over the resulting transaction and some number of subsequent blocks.

3. To complete this round of the calculation, the user now invokes the Custodian on the tuple (C_{i−1}, C_i) and π_i, as well as Input_i and the commitment randomness used to formulate C_i. The user also provides the Custodian with an encrypted state ciphertext ST^enc_{i−1} that was produced during the previous call to the Custodian (or ε on the first round).

4. The Custodian verifies the commitment C_i using the given inputs and random coins. It then uses its internal secret S as the key for a pseudorandom function PRF, in order to derive the following three intermediate secrets:

(a) A decryption key for the previous state ciphertext ST^enc_{i−1} is derived using PRF_S on input C_{i−1}.

(b) An encryption key for the new state ciphertext ST^enc_i is derived using PRF_S on input C_i.

(c) A set of random coins r_i for the computation is derived by keying a second pseudorandom generator on S and ST^enc_i.

The Custodian decrypts the previous state ciphertext, and if this succeeds it invokes the NextStep function on the given input, coins and state. Finally, the Custodian encrypts the new state ST_i using the newly generated key. The Custodian outputs Output_i and ST^enc_i to the user, or the distinguished symbol ⊥ if any operation fails. (A toy sketch of the user- and contract-side of one such round appears below.)
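The sketch below illustrates the user/contract side of one round in Python: a hash-based commitment chains each round to the previous one, and a toy contract stores the last commitment and tags its responses. An HMAC tag stands in for the authenticator π_i; in the paper's setting this role is played by proof-of-work or a signature, and all names here are illustrative.

```python
import hashlib, hmac, os

def commit(msg: bytes, coins: bytes) -> bytes:
    # One possible commitment instantiation (hash-based); the paper only assumes
    # an abstract scheme Commit(pp, M; r).
    return hashlib.sha256(msg + coins).digest()

class ToyContract:
    """Stand-in for the consensus network: remembers the last commitment and
    authenticates the returned pair (C_{i-1}, C_i) with a MAC."""
    def __init__(self, auth_key: bytes):
        self.auth_key = auth_key
        self.prev_commitment = b""              # C_0 = epsilon

    def execute(self, c_i: bytes):
        pair = self.prev_commitment + c_i       # (C_{i-1}, C_i)
        tag = hmac.new(self.auth_key, pair, hashlib.sha256).digest()  # pi_i
        self.prev_commitment = c_i
        return pair, tag

# One round, driven by the (possibly adversarial) user:
contract = ToyContract(auth_key=os.urandom(32))
r1 = os.urandom(32)
c1 = commit(b"round-1-input" + b"", r1)         # C_1 = Commit(Input_1 || C_0; r_1)
pair, tag = contract.execute(c1)
# The user would now invoke the Custodian on (pair, tag), Input_1, r_1 and ST^enc_0.
```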

Our main observation is that if the Custodian is honest, then each pair (C_{i−1}, C_i) can be viewed as a commitment to an entire computational trace, including user inputs, random coins and state. Provided that it is infeasible to “forge” an authenticator tag π_i, the user can replay specific inputs to the Custodian (to receive the same output), but can never rewind and execute the Custodian on new inputs. Success in this would require the user to forge the authenticator tag π_i produced by the Contract, or to defeat a cryptographic primitive such as the commitment scheme or authenticated encryption.

Finally, we remark that the protocol above achieves only one of our stated goals: ensuring that the stateless Custodian can perform stateful interactive computation. However, it does not allow the Custodian to publish outputs via the Contract. Adding this capability is simple: we simply create a new field for the Custodian’s output P_i, and require that this output be presented to the Contract along with the commitment to the next round’s input. The Contract response is simply updated to include the published value. An adversarial user can still censor or even forge the outputs from this Contract; however, from this point she loses the ability to continue the computation.

How to instantiate the Custodian? Thus far we have discussed several mechanisms for instantiating the smart contract. However, the Custodian is more exotic, as it must be capable of securely performing secret computations without revealing any intermediate function of the computation to the user.

In §5 we discuss two main approaches to constructing a viable Custodian functionality. The first, and most practical, approach is to use secure hardware. We stress that, as the requirements on the Custodian

7 In this example we will assume that the User possesses a means to authenticate herself to the Contract. In practice, existing smart Contract systems such as Ethereum provide this natively, by allowing a User to instantiate contracts that embed specific public keys. The User may hold the corresponding signing key.


are relatively simple, the necessary hardware may be surprisingly inexpensive. In particular, this hardware merely needs to embed a single unchanging secret that can be installed at the factory, and requires no ability to retain immutable state between rounds. Indeed, stateless hardware designs have been widely deployed by manufacturers such as Apple for the purpose of cryptographic file encryption [App16], and these systems have also been shown to be vulnerable to attacks that roll back program state [Sko16].

A more intriguing possibility is that the Custodian might be instantiated using purely software techniques. A promising candidate technology for this approach is witness encryption (WE) [GGSW13, GKP+13a] as well as a more powerful variant known as functional witness encryption (FWE) [BCP13]. There are many caveats to using these techniques, and we offer them only as a conjecture. We explore this area more fully in §5.

1.3 Outline of this paper

The remainder of this paper proceeds as follows. In §2 we define a Custodian-Contract scheme (CCS) and present security definitions for the primitive. In §3 we present our main construction. In §4 we describe several applications of the CCS primitive, including several related to cryptographic access control and MPC fairness, and in §5 we discuss practical instantiations of the CCS components. In §6 we implement our proposals and present experimental results. Finally, §7 surveys related work.

2 Definitions

Notation: Let λ be a security parameter. Let ℓ be a non-negative integer polynomial in λ and let ν(·) indicate a negligible function. We use the notation A ≈_c B to denote that the distributions A and B are computationally indistinguishable.

Protocol Parties. A Custodian-Contract Interaction is a protocol between three parties: the custodian functionality C, the contract functionality N, and a user U. We now describe the operation of each of these parties.

The contract N. The trusted contract functionality models a smart contract system. This can be viewed as a stateful functionality that initializes with state z_0 = ε. For i ≥ 1, the ith round of computation is invoked on an input selected by the user U and the previous state z_{i−1} stored by the contract. If the computation is successful, the contract records a new internal state z_i and returns Output_i along with a publicly-verifiable authentication tag π_i. We require that N has access to a trustworthy broadcast output path, i.e., that once invoked it can broadcast data reliably to the network. We recall that N does not possess cryptographic secrets, i.e., all of its state is public.

The custodian C. The trusted custodian models a cryptographic obfuscation system or a piece of stateless secure hardware hardwired with a deterministic functionality, and is initialized with a pair of secrets (s, S). When the custodian is invoked by the user U, it computes and outputs the result (or a distinguished error symbol ⊥). We require that the description of C’s operations is public, and that the adversary views (and controls) only the inputs and outputs.

The user U. The user is a (possibly adversarial) party that invokes both the custodian and the contract functionalities. The user selects the inputs to each round of computation and receives the user outputs. It provides a communication channel between the other parties.

The functionality F. For flexibility we define a slightly more robust computing functionality than the one given in the intuition above. This definition breaks the computation into two distinct algorithms: a secret portion NextStep and a public algorithm DoContract that can be run by the contract.8 This more complete definition allows us to implement functionalities that require additional capabilities, such as the

8 In practice, functionalities may employ a “dummy” contract routine that simply publishes and outputs its inputs.


ability to collect public data (as in [ZCC+16]), or to interface with other smart contracts running on the same contract platform.

More formally, we define the computing functionality F as a pair (NextStep, DoContract) where each algorithm is deterministic and has the input/output interface described below:

NextStep_s(Input_i, ST_{i−1}; r_i) → (Output_i ‖ P_i, ST_i). On input a user input Input_i, a previous state ST_{i−1}, and random coins r_i, this algorithm produces a user output Output_i (including an optional public output P_i) and new state ST_i.

DoContract(P_{i−1}, U_i, z_{i−1}) → (D_i, z_i). On input the previous custodian’s public output P_{i−1}, a user input U_i, and previous contract state z_{i−1}, this algorithm outputs a result D_i and updated contract state z_i.

2.1 Custodian-Contract Scheme

A Custodian-Contract scheme Σ_ccs consists of a tuple of algorithms (Setup_F, ExecuteCustodian_{S,s,F}, ExecuteContract_F, ExecuteUser_F). The interface to these algorithms is given in Figure 2.

Setup_F(1^λ). On input a security parameter λ and a functionality F, outputs the secrets (S, s) for the custodian and initializes ST^enc_0 ← C_0 ← ε.

ExecuteCustodian_{S,s,F}(Input_i, ST^enc_{i−1}, r_i, O_i, π_i). This deterministic algorithm is parameterized by the secrets (S, s) and a functionality F. At the ith round, this algorithm takes a program input Input_i and an encrypted program state ST^enc_{i−1}, along with the commitment randomness r_i and an output from the contract (O_i, π_i). It outputs a new encrypted state along with user and public outputs (ST^enc_i, Output_i, P_i).

ExecuteContract_F(C_i, U_i, P_{i−1}, z_{i−1}). This algorithm is parameterized by a functionality F. At the ith round, this algorithm takes a commitment C_i, an optional user input to the contract U_i, and a previous custodian output P_{i−1} from the user, along with previous contract state z_{i−1}. It outputs a tuple (O_i, π_i).

ExecuteUser_F(Input_i, C_{i−1}, r_i). This algorithm is parameterized by a functionality F. At the ith round, this algorithm takes a value Input_i produced by the user, the previous commitment C_{i−1}, and random coins r_i, and outputs a commitment C_i.

Figure 2: Definition of a Custodian-Contract scheme (CCS).
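A typed stub of this interface, our own rendering of Figure 2 with illustrative parameter names, might look as follows:

```python
from abc import ABC, abstractmethod
from typing import Tuple

class CustodianContractScheme(ABC):
    """Stub mirroring the Figure 2 interface; not an implementation."""

    @abstractmethod
    def setup(self, security_param: int) -> Tuple[bytes, bytes]:
        """Return the custodian secrets (S, s); ST^enc_0 and C_0 start as epsilon."""

    @abstractmethod
    def execute_user(self, inp: bytes, prev_commitment: bytes, coins: bytes) -> bytes:
        """Return the round commitment C_i."""

    @abstractmethod
    def execute_contract(self, commitment: bytes, user_input: bytes,
                         prev_public_output: bytes, prev_state: bytes) -> Tuple[bytes, bytes]:
        """Return the authenticated contract output (O_i, pi_i)."""

    @abstractmethod
    def execute_custodian(self, inp: bytes, prev_enc_state: bytes, coins: bytes,
                          contract_output: bytes, authenticator: bytes) -> Tuple[bytes, bytes, bytes]:
        """Return (ST^enc_i, Output_i, P_i), or fail."""
```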

The Custodian-Contract protocol. A Custodian-Contract scheme is used to instantiate an interactive protocol between the Custodian, User and Contract. Figure 4 illustrates a single round of this protocol. To initialize each interactive computation, a trusted party first runs Setup on input the functionality F and a security parameter, to generate the secrets (S, s) used by the custodian. The contract and custodian previous states are initialized to ε. For each round i ≥ 1 the protocol now proceeds as follows:

1. The user U first runs the algorithm ExecuteUser on an input Input_i, to produce a commitment C_i.

2. The user then invokes the contract functionality ExecuteContract on C_i, their own input U_i, and a previous custodian output P_{i−1} (if available), to produce the tuple (O_i, π_i).

3. To complete the round of computation, the user now invokes ExecuteCustodian on the program input, previous encrypted state (if available) and the output of the contract functionality.

2.2 Correctness and Security

Correctness. Intuitively, correctness for a Custodian-Contract scheme is defined in terms of the parameterized functionality F. At each step of the Custodian-Contract interaction, the contract and custodian should produce the same outputs as an oracle implementing F.


Security. We define security for a Custodian-Contract interaction using a real/ideal-world definition. This definition specifies two experiments: a Real experiment in which the adversarial user interfaces with oracles that implement the custodian and contract functionalities respectively, and an Ideal experiment that models the correct and stateful operation of the underlying functionality F. Because we plan to implement the contract functionality on the blockchain, the authenticity of contract outputs is protected economically. In this setting it is possible that a powerful attacker will succeed in forging a small number of contract calls, each at great expense. Because we cannot prevent these forgeries altogether, we require that, at a minimum, the security of a scheme degrades “gracefully” in the face of such forgeries. We address this using the following “best possible” definition: an attacker who forges q_forge separate contract outputs obtains the same utility as an attacker who is able to rewind and execute the NextStep algorithm exactly q_forge times.

Intuitively, our goal is to ensure that even an adversarial user U, with oracle access to a forgery oracle O_F, operating the custodian and contract cannot learn any information beyond their normal, expected inputs and outputs. The forgery oracle will output a validating authenticator π on up to q_forge inputs from the user. We define security using a simulation-based definition: we require that for all p.p.t. adversaries U in the Real experiment, there exists a simulator S^{O_F,S,F}(·) that plays the Ideal experiment, such that no p.p.t. algorithm can distinguish the output of U from the output of S (see Appendix A for formal definitions of both experiments).

With these considerations in mind, we now present our main security definition:

Definition 2.1 (Simulation security) We say that a Custodian-Contract scheme Σ_ccs is simulation secure for functionality F if ∀ p.p.t. real-world adversarial users U and ∀ non-negative integers q, q_forge, there exists a p.p.t. ideal-world user S^{O_F,S,F}(·) such that Real(Σ_ccs, F, U, q, q_forge) is computationally indistinguishable from Ideal(F, S^{O_F,S,F}(·), q, q_forge).

Privacy against third parties. The definitions above address the security of a Custodian-Contract Interaction scheme against a dishonest user. However, we must also provide privacy for an honest user against third parties who may have read access to the blockchain. We address this using a separate user privacy definition. Intuitively, under the assumption of an honest contract and user, we require that a third party should learn nothing from the inputs and outputs of the contract beyond what the user would normally learn from the public outputs of the computation. We address this more fully in Appendix A.1.


3 Our Construction

Algorithm 1: Setup
Data: Input: 1^λ
Result: Output: pp, secrets (s, S), and initial values C_0, ST^enc_0, z_0
s ←$ {0, 1}^m, S ←$ {0, 1}^ℓ, C_0 ← ε, ST^enc_0 ← ε, z_0 ← ε
pp ← CSetup(1^λ)
Output pp, s, S, C_0, ST^enc_0, z_0

Algorithm 2: ExecuteUser
Data: Input: (Input_i, C_{i−1}, r_i)
Result: Output: C_i
// Commit to the user input and previous commitment
C_i ← Commit(pp, Input_i ‖ C_{i−1}; r_i)
Output C_i

Algorithm 3: ExecuteContract
Data: Input: (C_i, P_{i−1}, U_i), State: (C_{i−1}, z_{i−1})
Result: Output (O_i, π_i) or ⊥
(D_i, z_i) ← DoContract(P_{i−1}, U_i, z_{i−1})
O_i ← (C_{i−1} ‖ C_i ‖ D_i ‖ P_{i−1})
Compute π_i over O_i
Output (π_i, O_i)

Algorithm 4: ExecuteCustodian
Data: Input: (ST^enc_{i−1}, Input_i, r_i, [π_i, O_i]), Secrets: S, s, Max state length: n
Result: Output: (ST^enc_i, Output_i, P_i) or ⊥
Assert(Verify(O_i, π_i) = 1)
Parse O_i → (C′_{i−1} ‖ C′_i ‖ D_i ‖ P_{i−1})
Assert(C′_i = Commit(pp, Input_i ‖ C′_{i−1}; r_i))
if ST^enc_{i−1} ≠ ε then
    k_{i−1} ← PRF_S(“enc” ‖ C′_{i−1})
    ST_{i−1} ← Unpad(Decrypt(k_{i−1}, ST^enc_{i−1}))
else if C′_{i−1} = ε then
    ST_{i−1} ← ε
else
    Halt and Output ⊥
// Perform the computation
r′_i ← PRG(PRF_S(“rand” ‖ C′_i))
(Output_i, ST_i, P_i) ← NextStep_s(Input_i ‖ D_i ‖ P_{i−1}, ST_{i−1}; r′_i)
k_i ← PRF_S(“enc” ‖ C′_i)
ST^enc_i ← Encrypt(k_i, Pad_n(ST_i))
Output (ST^enc_i, Output_i, P_i)

Figure 3: Our main construction Σccs.
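To illustrate the core of Algorithm 4, the following Python sketch threads the state through PRF-derived keys as described above, using HMAC-SHA256 as the PRF and a toy encrypt-then-MAC scheme in place of the authenticated encryption; a deployment would substitute a standard AE scheme, and the verify/commit/next_step callables are assumptions supplied by the caller.

```python
import hashlib, hmac

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def _keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += prf(key, b"stream" + ctr.to_bytes(8, "big"))
        ctr += 1
    return out[:n]

def ae_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy encrypt-then-MAC standing in for the AE scheme of [Rog02].
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(prf(key, b"enc"), len(plaintext))))
    return ct + hmac.new(prf(key, b"mac"), ct, hashlib.sha256).digest()

def ae_decrypt(key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(prf(key, b"mac"), ct, hashlib.sha256).digest()):
        raise ValueError("authenticated decryption failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(prf(key, b"enc"), len(ct))))

def execute_custodian(S, s, next_step, inp, coins, prev_enc_state, O_i, pi_i, verify, commit):
    """Simplified ExecuteCustodian (Algorithm 4); padding is omitted and O_i is
    passed as a tuple rather than a concatenated string."""
    if not verify(O_i, pi_i):                       # authenticate the contract output
        raise ValueError("bad authenticator")
    c_prev, c_cur, D_i, P_prev = O_i                # O_i = (C_{i-1}, C_i, D_i, P_{i-1})
    if c_cur != commit(inp + c_prev, coins):        # check the commitment opening
        raise ValueError("bad commitment")
    if prev_enc_state:
        st_prev = ae_decrypt(prf(S, b"enc" + c_prev), prev_enc_state)
    elif c_prev == b"":
        st_prev = b""
    else:
        raise ValueError("missing previous state")
    r_i = prf(S, b"rand" + c_cur)                   # per-round pseudorandom coins
    output, public_out, st_new = next_step(s, inp, D_i, P_prev, st_prev, r_i)
    enc_state = ae_encrypt(prf(S, b"enc" + c_cur), st_new)
    return enc_state, output, public_out
```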

In this section we present a specific construction of a Custodian-Contract scheme. Our construction makes use of the following cryptographic primitives. In addition to the standard cryptographic primitives, we define an abstract “authenticator” primitive that will be used to authenticate the validity of a consensus transcript. We discuss this final primitive in more detail below.

Commitment schemes. Let Σ_com = (CSetup, Commit) be a commitment scheme where CSetup generates public parameters pp. The algorithm Commit(pp, M; r) takes in the public parameters, a message M, along with random coins r, and outputs a commitment C ≠ ε which can be verified by re-computing the commitment on the same message and coins.

Authenticated symmetric encryption. We require a symmetric encryption scheme consisting of the (possibly probabilistic) algorithms Σ_AE = (Encrypt, Decrypt) where each accepts a key uniformly sampled from {0, 1}^ℓ. We require a scheme that meets the AE definition of Rogaway [Rog02]. This requires that a ciphertext is indistinguishable from a random bitstring even in a model where the attacker can query encryption and decryption oracles. We also employ a padding scheme (Pad_n, Unpad) that pads a plaintext to some chosen bitlength n.

Pseudorandom Functions and Generators. Our construction uses a pseudorandom function family (PRF) PRF that on input a seed and some value in {0, 1}^* outputs a string in the range {0, 1}^ℓ, as well as a pseudorandom generator (PRG) that on input a seed in {0, 1}^ℓ outputs a stream of pseudorandom bits.


Contract authenticators. Finally, we employ a primitive that we refer to as a contract authenticator. An authenticator is a publicly verifiable authentication primitive that allows any party to verify the authenticity of a message issued by the contract.9 In public consensus networks, the authenticator may be constructed using a computational proof-of-work (see §5) computed over each contract output. We require that it is difficult to forge a new pair (m, π) even when the attacker is given access to an oracle for authenticating chosen messages. We refer to this definition as SUF-AUTH (this is analogous to the SUF-CMA definition used for signatures). We also consider a variant of this definition called q_forge-SUF-AUTH in which the adversary is allowed to produce at most q_forge forged messages.
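A simplified sketch of how a Custodian might check a proof-of-work-based authenticator follows. Real block header formats, difficulty encoding, and the binding of the contract output to the chain (e.g., via a Merkle proof) are all simplified here, so this is a sketch of the idea rather than a drop-in verifier.

```python
import hashlib
from typing import List, Tuple

def _pow_hash(header: bytes) -> int:
    # Double SHA-256 interpreted as an integer (Bitcoin-style, simplified).
    return int.from_bytes(hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

def verify_pow_authenticator(message: bytes,
                             headers: List[Tuple[bytes, bytes]],
                             target: int, k: int) -> bool:
    """Accept `message` only if k chained headers, each meeting `target`, build on a
    block that (in this toy encoding) commits to the message."""
    if len(headers) < k:
        return False
    expected_prev = hashlib.sha256(message).digest()   # toy commitment to the message
    for prev_hash, rest in headers[:k]:
        if prev_hash != expected_prev:
            return False
        header = prev_hash + rest
        if _pow_hash(header) > target:
            return False                               # insufficient proof of work
        expected_prev = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return True
```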

[Figure 4 diagram: in each round the User sends (C_i, P_{i−1}, U_i) to the Smart Contract, which runs ExecuteContract and returns (π_i, O_i); the User then invokes the Custodian on (ST^enc_{i−1}, Input_i, r_i, [π_i, O_i]), which runs ExecuteCustodian and returns (ST^enc_i, Output_i ‖ P_i); the User then runs ExecuteUser to form the next commitment C_{i+1}.]

Figure 4: Illustration of a single round of a Custodian-Contract interaction. The User runs the ExecuteUser algorithm on a computation input Input_i and random coins r_i, then invokes the Contract on the resulting commitment C_i, their own input U_i, and an (optional) previous Custodian public output P_{i−1}. The contract returns an authenticator/output pair (π_i, O_i). The User now invokes the Custodian on all of the above values as well as a previous encrypted state ST^enc_{i−1} to produce a user output Output_i, a new public output P_i, and a new encrypted state ST^enc_i.

3.1 Main Construction

We now present our main construction for a Custodian-Contract scheme (CCS) and address its security. Recall that a CCS consists of four algorithms with the interface described in Figure 2. We present each of these algorithms in Figure 3.

We address the correctness of the scheme in Appendix B. We now proceed to our main security theorem.

Theorem 3.1 If Σ_com is a secure commitment scheme; Σ_AE satisfies the strong AE definition of [Rog02]; PRF and PRG are each pseudorandom; and contract authenticators are q_forge-SUF-AUTH secure, then for all allowed functionalities F and q_forge ≥ 0, the scheme Σ = (Setup, ExecuteCustodian, ExecuteContract, ExecuteUser) presented in Figure 3 satisfies Definition A.1.

We present a proof of Theorem 3.1 in Appendix B.

4 Applications

We now describe several applications that use Custodian-Contract Interaction, and present the relevant implementations for each. Each proposed application employs the main construction we presented in §3 (depicted in Figure 3) to implement a specific functionality. We present algorithms of the functionality F = (NextStep, DoContract) required to implement the specific applications.

9 Indeed, in private consensus networks the authenticator may be constructed using an SUF-CMA digital signature scheme with some pre-generated keypair (pk, sk) specific to the contract.


4.1 File Access Logging

Some cryptographic access control systems require participants to actively log file access patterns to a remote and immutable network location [Fou]. A popular approach to solving this problem in cryptographic access control systems, leveraged by systems like Hadoop [Fou], is to assign a unique decryption key to each file, and to require that clients individually request each key from an online server, which in turn logs each request. This approach requires a trusted online server that holds decryption keys, and cannot be implemented using a public consensus network.

In place of a trusted server, we propose to use CCS to implement mandatory logging for protected files. In this application, a local Custodian stores a master key for some collection of files, e.g., a set of files stored on a device.10 The Custodian then employs the public output property of the consensus network to ensure that prior to each file access the contract must write a public log identifying the file access to the network. To provide confidentiality of file accesses, the Custodian may encrypt the log entry under the public key of some auditing party, and will only release the decryption key once this ciphertext is published.11 We present the NextStep and DoContract algorithms for a simple two-phase version of this proposal in Algorithms 5 and 6.

4.2 Strong Encrypted Backups from Weak Passwords

In the past several years, device manufacturers have widely deployed end-to-end file encryption for devices such as mobile phones [App16, Pro] and cloud backup data [Las, Krs16]. Increasingly, these systems require users to hold their own secrets rather than trusting them to the manufacturer.

Implementing end-to-end encryption systems at scale poses a dilemma for system designers. Encryption systems require high-entropy cryptographic keys, but users are prone to lose or forget high-entropy passwords. To address this tension, manufacturers have increasingly turned to the use of trusted hardware such as on-device cryptographic co-processors [App16, ARM] and cloud-based HSMs [Krs16] for backup data. In this model a user may authenticate using a relatively weak device passcode such as a PIN, and the hardware will then release a strong encryption key. To prevent passcode guessing attacks, this stateful hardware throttles or limits the number of login attempts.12

A Custodian-Contract Scheme provides an alternative mechanism for limiting (or throttling) the number of guessing attempts on password-based encryption systems. A user can employ an inexpensive USB drive or an extremely low-cost embedded chip with no requirement to keep state (e.g., an FPGA) to host a simple Custodian with an internal secret key K. The custodian is constructed to release K only when it sees the proper output of a smart contract on the ledger and receives a valid input. The smart contract in turn may limit the number of guesses an adversary can make within a given period. The Custodian and Contract functionalities required to enable this application are relatively simple, under the assumption that the Contract has access to a clock equivalent.13 For reasons of space we do not formally define the functionality F required to implement this application; it is very similar to the functionality of §4.1.
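Since the paper leaves this functionality informal, the following is an illustrative DoContract step that uses block height as the clock equivalent mentioned above: it records the height of the last permitted attempt in its (public) state and only answers "Allow" after a minimum gap. The Custodian would then release K only on an authenticated "Allow" together with a correct passcode. The names and encoding are hypothetical.

```python
def do_contract_guess_limit(current_height: int, prev_state: bytes,
                            min_gap_blocks: int = 10):
    """Illustrative rate-limiting contract step. State is public: the block height
    of the last allowed guess, encoded as 8 bytes (epsilon on the first call)."""
    last = int.from_bytes(prev_state, "big") if prev_state else None
    if last is None or current_height - last >= min_gap_blocks:
        return "Allow", current_height.to_bytes(8, "big")
    return "Deny", prev_state
```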

We remark that in practice the decryption process in such a system can be fairly time consuming, due to the need to interact with the consensus network and obtain a number of blocks of valid output prior to running the Custodian. This system may be useful for low-frequency applications such as recovering encrypted backups or emergency password recovery.

4.3 Autonomous Ransomware

Ransomware is increasingly becoming a problem in hospitals and other corporate environments [Zet16]. These systems consist of a malware payload that encrypts user files, and an online C&C system that delivers

10 If the Custodian is implemented using cryptographic techniques such as FWE, a unique Custodian can be shipped along with the files themselves. If the user employs a hardware token, the necessary key material can be delivered to the user’s Custodian when the files are created or provisioned onto the user’s device.

11 The corresponding decryption key may be held offline.

12 This approach led to the famous showdown between Apple and the FBI in the Spring of 2016. The device in question used a 4-character PIN, and was defeated in a laboratory using a state rewinding attack, and in practice using an estimated $1 million software vulnerability [Pal15, Wea15].

13 In the case of a public consensus network such as Ethereum or Bitcoin, this clock can be implemented by requiring at least K empty blocks between each transaction.


Algorithm 5: File Access Logging (NextStep)
Data: Input: Input_i (filename), ST_{i−1}, D_i, P_{i−1}; r_i; Secret: PRF Key K; Constants: pk_auditor
Result: Output_i or ⊥
Parse ST_{i−1} as (phase, CT, filename)
if phase = 2 then
    // Phase 2: Verify contract response
    if D_i = (“Allow” ‖ CT) then
        // Output a decryption key for filename
        ST_i ← (1, ·, ·)
        output (Output_i ‖ P_i) = (PRF_K(filename) ‖ ε)
    else
        // Failure
        output ⊥
else
    // Phase 1: Encrypt the filename to be logged
    CT ← PKEnc(pk_auditor, Input_i)
    ST_i ← (2, CT, Input_i)
    output (Output_i ‖ P_i) = (ε ‖ CT)

Algorithm 6: File Access Logging (DoContract)
Data: Input: (C_i, P_i), Constants: R
Result: D_i
write P_i as public contract output
output D_i = “Allow” ‖ P_i


decryption keys to customers who pay a ransom – typically in a cryptocurrency such as Bitcoin. In current ransomware, the system that delivers keys represents a weak point in the ransomware ecosystem. This delivery exposes ransomware operators to tracing [Tec16]. As a result, some operators have fled without delivering key material, or have deployed non-functional “dummy” ransomware.

We now consider a potentially destructive application of the CCS paradigm: the development of autonomous ransomware that guarantees decryption without the need for any online C&C. We refer to this malware as autonomous because, once an infection has occurred, it requires no further interaction with the malware operators, who can simply collect payments issued to a Bitcoin (or other cryptocurrency) address.

In this application, the malware portion of the ransomware samples an encryption key K ∈ {0, 1}^ℓ and installs this value within a Custodian. To activate the ransomware, the user runs the Contract portion of the malware on a payment network such as Bitcoin or Ethereum, passing as input the identity of a valid payment transaction to the ransomware operator’s address. The Custodian then validates the result and authenticator returned by the Contract – which in this case comprises a chain of computationally expensive proofs of work – and delivers the key K if this proof is valid. Algorithms 7 and 8 present a simple example of the functionality.

We note that the Custodian may be implemented using trusted execution technology that is becoming available in commercial devices, e.g., an Intel SGX enclave, or an ARM TrustZone trustlet. Thus, autonomous ransomware should be considered a threat today – and should be considered in the threat modeling of trusted execution systems (see §6 for more discussion). Even if the methods employed for securing these trusted execution technologies are robust, autonomous ransomware can be realized with software-only cryptographic obfuscation techniques, if such technology becomes practical.14

Algorithm 7: Ransomware (NextStep)
Data: Input: Input_i, ST_{i−1}, D_i, P_{i−1}; r_i, Secret: Key k
Result: Output_i or ⊥
if D_i = “Decrypt” then
    output Output_i = k
else
    output ⊥

Algorithm 8: Ransomware (DoContract)
Data: Input: (C_i, U_i), Constants: address, amount
Result: D_i
parse U_i to obtain TransID
// Use the following call to verify the payment
if ValidatePayment(TransID, amount, address) = 1 then
    output D_i = “Decrypt”
else
    output D_i = “Invalid”
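ValidatePayment is left abstract in Algorithm 8. One plausible SPV-style realization, sketched below, checks that a transaction paying at least `amount` to `address` is included under a block's Merkle root (the block itself having already been authenticated by the proof-of-work check). The transaction encoding and hashing here are toy stand-ins, not Bitcoin's actual serialization.

```python
import hashlib
from typing import List

def _sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def _merkle_root_from_proof(leaf: bytes, proof: List[bytes], index: int) -> bytes:
    """Recompute a Merkle root from a leaf hash and its inclusion proof."""
    h = leaf
    for sibling in proof:
        h = _sha256d(sibling + h) if index & 1 else _sha256d(h + sibling)
        index >>= 1
    return h

def validate_payment(tx_bytes: bytes, outputs: List[dict], proof: List[bytes], index: int,
                     block_merkle_root: bytes, amount: int, address: str) -> bool:
    """Toy ValidatePayment: the transaction must pay >= amount to address and be
    included under the authenticated block's Merkle root."""
    pays_enough = any(o["address"] == address and o["value"] >= amount for o in outputs)
    included = _merkle_root_from_proof(_sha256d(tx_bytes), proof, index) == block_merkle_root
    return pays_enough and included
```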

4.4 “Fair Encryption” and MPC

Multi-party Computation (MPC) [Yao86, GMW87] guarantees that parties 1, . . . , n can run a function f on private inputs x_1, . . . , x_n without revealing the inputs (or any intermediate function) to the other parties. A known limitation of general MPC is the difficulty of obtaining fairness when at least half of the parties are malicious [Cle86]: intuitively, the malicious parties may abort the protocol after learning the result, but

14 See Lewi et al. for a survey on the current practical limitations of this technology [LMA+16].


before the honest parties have learned it. This can be catastrophic in applications such as financial trading. Efforts to address this problem have led to relaxations in security properties [GK10, Pin03, BLOO11].

Recently, Gordon et al. [GK10] and Andrychowicz et al. [ADMM14] proposed a partial solution that uses consensus networks to penalize users who abort. This approach assumes that the parties can correctly select a penalty that will compensate the honest users for the protocol failure. Unfortunately, choosing such a penalty may not be feasible in practice: for example, if the utility of malicious behavior is difficult to calculate, or if the capital required to participate in the protocol exceeds any party’s resources. In these cases it would be preferable if the consensus network could actually force the protocol to complete.

Fair encryption. As a building block for solving this problem, we propose fair encryption. Intuitively, this primitive ensures that if one authorized decryptor can decrypt a ciphertext, then all authorized decryptors may do so as well. We accomplish this by constructing a custodian that will decrypt a CCA2-secure ciphertext if and only if a valid ciphertext has been placed on the public consensus network transcript by a Contract.

Algorithm 9: Fair Encryption (NextStep)
Data: Input: Input_i, ST_{i−1}, D_i, P_{i−1}; r_i, Secret: Key k. Constants: Ciphertext hash H
Result: Output_i or ⊥
if Input_i = “GenKey” then
    (pk, sk) ← PKMGen(1^λ)
    ST_i ← sk
    output Output_i = pk
else if Input_i = “Decrypt” ‖ CT AND ST_{i−1} ≠ ε then
    Parse ST_{i−1} as sk
    M ← PKMDec(sk, CT)
    output M

Algorithm 10: Fair Encryption (DoContract)
Data: Input: (C_i, P_i, U_i). Constants: pk_mpc, pk_0, pk_1, . . . , pk_{n−1}. Init: Set S ← ∅, Hash H ← ε
Result: D_i
write P_i as public contract output
output D_i = “Decrypt” ‖ P_i

As a building block, fair encryption requires a multi-key public key encryption system (PKMGen, PKMEnc, PKMDec) with the following properties. Given n public keys pk_1, . . . , pk_n, the encryption routine produces a ciphertext that can be decrypted using any one of the corresponding secret keys. The resulting ciphertext must be non-malleable, i.e., the (honest) decryption routine must output ⊥ on input any valid secret key and a mauled version of the ciphertext. Such a scheme can be easily built using common techniques; for completeness we include one such construction in Appendix C.
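As one hedged illustration (not the Appendix C construction), the multi-key property can be obtained with standard hybrid encryption: a fresh data key encrypts the message once, and the data key is wrapped under each recipient's public key, with the payload ciphertext hash bound into each wrap. The sketch below uses the Python `cryptography` package (X25519 + HKDF + AES-GCM); note that this alone does not achieve the full non-malleability requirement stated above, which needs additional binding across all ciphertext components.

```python
import os, hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes, serialization

def _wrap(recipient_pk: X25519PublicKey, data_key: bytes, binding: bytes) -> dict:
    # Ephemeral-static Diffie-Hellman, then wrap the data key; `binding` ties the
    # wrap to the payload ciphertext.
    eph = X25519PrivateKey.generate()
    kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"pkm-wrap").derive(eph.exchange(recipient_pk))
    nonce = os.urandom(12)
    epk = eph.public_key().public_bytes(serialization.Encoding.Raw,
                                        serialization.PublicFormat.Raw)
    return {"epk": epk, "nonce": nonce,
            "wrapped_key": AESGCM(kek).encrypt(nonce, data_key, binding)}

def pkm_encrypt(recipient_pks: list, message: bytes) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    body = AESGCM(data_key).encrypt(nonce, message, None)
    binding = hashlib.sha256(body).digest()
    return {"nonce": nonce, "body": body,
            "wraps": [_wrap(pk, data_key, binding) for pk in recipient_pks]}

# Decryption with any one secret key reverses the matching wrap and then opens `body`.
```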

To use a fair encryption scheme, each party configures an honest instance of the custodian. At setup time, each Custodian outputs a public key pk for the encryption scheme above. To produce a ciphertext for all parties, the sender encrypts a single message to all public keys in the set pk_1, . . . , pk_n.15 On receiving the resulting ciphertext CT, a receiving User must invoke the Contract, which in turn places CT onto the public transcript produced by the consensus network (subsequent users may simply provide a reference to the first publication). When the User obtains the authenticated result, she presents it to her Custodian which verifies

15 This scheme requires a non-malleable ciphertext that encrypts to any set of public keys pk_1, . . . , pk_n and allows decryption with any one of the corresponding secret keys. The scheme must also be non-malleable, so that an attacker cannot maul the ciphertext so that only a subset of the honest decryptors will be able to decrypt it correctly.


the integrity of the ciphertext and Contract output before decrypting using sk.16 We present an instantiation of the functionality in Algorithms 9 and 10.

MPC from Fair Encryption. To achieve fairness in an MPC protocol, each party configures an honest Custodian and generates a public key pk. Rather than computing a function f(x_1, . . . , x_n) directly, the parties collaborate to compute FairEncrypt_{pk_1,...,pk_n}(f(x_1, . . . , x_n)). At the conclusion of this protocol, each party submits the resulting ciphertext CT to the Contract in order to obtain its decryption. Even if the resulting MPC computation is unfair, a malicious party will be forced to transmit their result to the Contract prior to learning the output of the computation. Because this transmission produces a public record on the consensus transcript, all parties will necessarily learn the result.

We observe that if the Custodian is implemented using a form of (Extractable) Witness Encryption (WE) as in [Jag15], the protocol is simpler in that there need be only one Custodian and one “public key”: the relation encoding the consensus authenticator verification statement. Even when the Custodian is implemented in trusted hardware (such as Intel SGX enclaves), the resulting construction still does not significantly relax the underlying security of the MPC protocol. In this case, the Custodian is trusted only with the fairness of the protocol: a fully corrupted custodian cannot violate either correctness or input privacy.

5 Realizing the Custodian and Contract

In this section we turn to the matter of instantiating the custodian and contract components of a real system.

5.1 Realizing the Custodian

Trusted cryptographic co-processors. The simplest approach to implementing the custodian is to use secure hardware or a trusted execution environment such as Intel’s SGX [sgx] or ARM TrustZone [ARM]. When implemented using these platforms, our techniques can be used immediately for applications such as logging, fair MPC and (unfortunately) ransomware.

While each of the environments above provides immutable statekeeping, a key feature in our proposal is that our approach does not require a secure means for recording state within the Custodian. (Indeed, historically some access control systems have failed because they admitted state rollback attacks [Sko16].) This may enable the construction of stateful functionalities using relatively inexpensive secure hardware, such as a processor with an embedded decryption key, or even using FPGAs as a building block. Along these lines, Nayak et al. [NFR+17] recently showed how to build trusted non-interactive Turing Machines from minimal stateless trusted hardware. Such techniques open the way for the construction of arbitrary Custodian functionalities on relatively inexpensive hardware.
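As a concrete illustration of why no non-volatile state is needed, the following is a minimal Python sketch of how a stateless custodian can carry its state between invocations: the state travels with the user as an authenticated-encryption blob under a key re-derived from an embedded device secret and the current input commitment (as in the key derivation ki = PRFS("enc"‖Ci) used in our proof). The AES-GCM and HMAC choices below are illustrative assumptions, not the exact primitives of our prototype.

import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

DEVICE_SECRET = os.urandom(32)   # stands in for a secret embedded in the custodian hardware

def state_key(commitment: bytes) -> bytes:
    # Re-derive k_i = PRF_S("enc" || C_i) on every invocation; no NVRAM is required.
    return hmac.new(DEVICE_SECRET, b"enc" + commitment, hashlib.sha256).digest()

def seal_state(commitment: bytes, state: bytes) -> bytes:
    # Each commitment (and hence each key) is used once, so a fixed nonce is acceptable here.
    return AESGCM(state_key(commitment)).encrypt(b"\x00" * 12, state, commitment)

def open_state(commitment: bytes, blob: bytes) -> bytes:
    # Raises InvalidTag if the blob was modified or does not match the supplied commitment.
    return AESGCM(state_key(commitment)).decrypt(b"\x00" * 12, blob, commitment)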

Witness encryption. Witness encryption (WE), first proposed by Garg, Gentry, Sahai, and Waters [GGSW13], allows for the encryption of data such that it can only be opened if the decryptor knows a witness to the statement. Witness encryption is defined for an NP language L and witness relation R. It allows a user to encrypt a message m under an instance x. Decryption of the resulting ciphertext is possible only if x ∈ L and the decryptor holds a witness ω such that R(x, ω) = 1. Witness encryption can be built from Indistinguishability Obfuscation [GGH+13].
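For reference, the standard correctness and security conditions for a witness encryption scheme (Enc, Dec) defined for (L, R) can be stated as follows; this restates the definition of [GGSW13] in our notation:

\[ \text{Correctness: } \forall (x,\omega) \text{ with } R(x,\omega)=1:\quad \Pr\big[\mathsf{Dec}(x,\omega,\mathsf{Enc}(1^{\lambda},x,m)) = m\big] \ge 1 - \nu(\lambda). \]
\[ \text{Security: } x \notin L \;\Rightarrow\; \big|\Pr[\mathcal{A}(\mathsf{Enc}(1^{\lambda},x,m_0))=1] - \Pr[\mathcal{A}(\mathsf{Enc}(1^{\lambda},x,m_1))=1]\big| \le \nu(\lambda) \quad \forall \text{ p.p.t. } \mathcal{A}. \]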

Witness Encryption seems an obvious candidate for constructing the Custodian, at least for some functionalities. The limitation of WE is that this construction only guarantees the ciphertext is secure in the case where x ∉ L, i.e., there is no witness. Of course, when considering proof-of-work consensus systems there is always a valid proof of work; it is merely computationally difficult to find it. To address this we require the stronger notion of extractable witness encryption (EWE), which requires that anyone decrypting the ciphertext actually knows the witness, not merely that such a witness exists. Goldwasser et al. conjecture that the original WE construction of Garg et al. may possess this property under stronger cryptographic assumptions [GKP+13b].

16This scheme requires a non-malleable ciphertext that encrypts to any set of public keys pk1, . . . , pkn and allows decryption with any one of the corresponding secret keys. The scheme must also be non-malleable, so that an attacker cannot maul the ciphertext so that only a subset of the honest decryptors will be able to decrypt it correctly.


Functional witness encryption (FWE). FWE [BCP13, ABG+13] is a more powerful conjectured primitive in which the decryption of a ciphertext is not simply a message m, but f(m, ω), where ω is a witness and f is a function chosen by the encryptor. Using this primitive, we can construct powerful Custodians. However, FWE requires stronger cryptographic assumptions, and the literature contains implausibility results [GGHW13] similar to the impossibility results known for Virtual Black Box obfuscation [BGI+01]. In both cases, the impossibility results stem from malicious auxiliary input and “contrived” functions. It remains an open question whether these results disqualify the existence of FWE for realistic and practical functions; virtual black box obfuscation, from which FWE can be built, remains an ongoing and active research area [BGK+13, BR13].

5.2 Realizing the Contract

A number of different systems may be used to instantiate the Contract. In principle, any stateful functionality capable of producing SUF-CMA signatures can be used for this purpose, including trusted servers and “private blockchains” consisting of a small quorum of servers. However, in this work we focus primarily on public consensus networks, which are typically implemented using blockchains.17

Security of the contract. An obvious solution for instantiating the Contract is to use a public cryptocurrency (or smart contract) such as Bitcoin or Ethereum. This avoids the need for any globally trusted third party. The limitation of this approach is that these systems cannot offer cryptographic authentication of Contract outputs. Instead, the guarantee provided is economic in nature: since these networks authenticate legitimate blocks by attaching computationally-expensive proofs of work, the cost of forgery may be high. The exact details of this cost depend on the application and, crucially, the number of simultaneous instances of the system sharing a single consensus network. We briefly address this analysis below.

Forging blockchain fragments. Blockchains such as Ethereum and Bitcoin employ hash-based proofs of work to authenticate blocks. Since these proofs of work are also used for currency “mining”, a simple way to judge the cost of forging a block is to evaluate the financial opportunity cost of spending hashpower on such forgeries rather than on mining.

As of early 2017, forging a single Bitcoin block represents an opportunity cost of at least $12,500 USD in forgone block reward [coi]. Since a Contract may be authenticated by a chain of such blocks (where each block implicitly hashes the previous block), forging a chain of approximately 24 hours' worth of block confirmations represents a forgone mining cost of approximately $1.8M USD.

The challenge in this simple analysis is that a single blockchain fragment may be used by multiple instances of a given Contract. This admits the possibility that an attacker with significant capital might amortize this cost by spreading it across many instances. Indeed, if amortized over a sufficient number of forged contracts, this fixed cost could be negligible. For scenarios where we expect sufficient instances for this attack to be practical, it is necessary to rate limit the number of contracts in a given block from which the custodian will accept results. For example, to lower-bound the cost of such an amortization attack to $1000 USD per instance, it is necessary to limit the system to 1800 such transactions per day. This may be sufficient protection for applications such as ransomware, where the ransom itself can be priced significantly lower. In the full version of this work we include a more complete economic analysis.
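The arithmetic behind these figures is straightforward; the short Python sketch below reproduces the estimates above using the paper's early-2017 numbers and assumes roughly ten-minute Bitcoin blocks:

# Opportunity-cost estimate for forging a 24-hour Bitcoin chain fragment (early-2017 figures).
BLOCK_REWARD_USD = 12_500          # forgone reward per forged block [coi]
BLOCKS_PER_DAY   = 24 * 6          # ~one block every ten minutes

chain_cost = BLOCK_REWARD_USD * BLOCKS_PER_DAY
print(f"Cost of forging ~24h of confirmations: ${chain_cost:,}")     # ~= $1.8M

# Rate limiting against amortization: to keep the attacker's per-instance cost at or
# above $1,000, the custodian should accept results from at most this many contract
# transactions per day.
PER_INSTANCE_FLOOR = 1_000
print(f"Max transactions per day: {chain_cost // PER_INSTANCE_FLOOR}")  # 1800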

6 Prototype Implementation

To validate our approach we implemented a Custodian-Contract scheme using Intel SGX as a custodian and the Ethereum smart contract system as our ledger. Due to its ubiquity in newer Intel processors, SGX represents a secure computation technology that is both practical and available today. Realizing the Custodian effectively in SGX provides a means for this paradigm to be quickly adopted.

17For more background on this technology, see e.g., [TS15, BMC+15].


[Figure 5: Our prototype implementation of the Custodian-Contract interaction, depicting the Intel SGX enclave on the user's computer interacting with an Ethereum smart contract (which maintains state) on the main Ethereum blockchain.]

To implement the Custodian, we adapted the verification and parsing subset of the C++ Ethereum client [cpp] to work within an SGX enclave running on Linux. Due to limitations placed on code that runs within an SGX enclave, this adaptation was non-trivial. It required the removal of all input/output and downgrading the C++11 client to use the adapted C++03 standard library supplied by the SGX software development kit. This downgrade process was replicated for the subset of Boost on which the Ethereum client relies. The Ethereum client also uses cryptopp to perform SHA3 computations; we chose to replace cryptopp with a simpler but less efficient standalone implementation of SHA3. Because SGX does not natively support public key cryptography, we also integrated the standalone TweetNaCl library [Twe]. Finally, SGX does not support copying C++ objects between userland and a secure enclave, so all input data had to be serialized into byte streams. We built a thin wrapper around the serialization format supported by Ethereum, called Recursive Length Prefix (RLP).
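For readers unfamiliar with RLP, the encoding itself is simple. The following is a minimal, illustrative Python encoder for byte strings and nested lists; our actual wrapper is a thin C++ layer over the cpp-ethereum routines, not this code:

def rlp_encode(item) -> bytes:
    """Minimal RLP encoder for byte strings and (nested) lists of byte strings."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                                   # a single low byte encodes as itself
        return _with_length_prefix(item, 0x80)            # string prefix base
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        return _with_length_prefix(payload, 0xc0)         # list prefix base
    raise TypeError("expected bytes or list")

def _with_length_prefix(payload: bytes, base: int) -> bytes:
    if len(payload) <= 55:
        return bytes([base + len(payload)]) + payload
    length_bytes = len(payload).to_bytes((len(payload).bit_length() + 7) // 8, "big")
    return bytes([base + 55 + len(length_bytes)]) + length_bytes + payload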

We implemented three SGX enclaves: (1) a generic blockchain verifier, (2) a sample ransomware enclave, and (3) a guaranteed logging enclave. We made a number of application-specific optimizations for each enclave that allowed us to omit parts of the protocol in §3. Each enclave is hardcoded with a genesis block that serves as trusted ground truth, against which the enclave verifies the blockchain fragments passed in by a user. We also provide a trusted construct call that allows the party configuring the enclave to set a more recent block as ground truth. In the ransomware application, the ability to set ground truth at the time of infection allows for faster blockchain verification and protects against the amortization attacks described in §5.2.

Ethereum supports a fast-sync protocol which allows new nodes to quickly download the blockchain by checking the proof of work only on random blocks. We borrowed this protocol for our enclaves, and enforce that if a single block fails proof-of-work verification, the enclave rejects the fragment. Because the cost of forging a transaction is proportional to the number of blocks that follow it and their difficulties, it is not particularly important to verify each block preceding the block containing the contract output. Thus, our enclave simply verifies the proof of work on every block following that block and ensures that the block difficulty is sufficiently high.18
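Schematically, the enclave's fragment check behaves like the following Python sketch; the real Ethash proof-of-work test is abstracted behind the pow_check callable, and the field names are illustrative rather than taken from our implementation:

import hashlib

def header_hash(header: bytes) -> bytes:
    # Placeholder header hash; Ethereum actually uses Keccak-256 over the RLP-encoded header.
    return hashlib.sha3_256(header).digest()

def verify_fragment(blocks, contract_index, min_difficulty, pow_check):
    """Accept a fragment only if every block after the one containing the contract
    output links to its predecessor, claims sufficient difficulty, and passes the
    proof-of-work check. The whole fragment is rejected on the first failure."""
    tail = blocks[contract_index:]
    for prev, blk in zip(tail, tail[1:]):
        if blk["parent_hash"] != header_hash(prev["header"]):
            return False
        if blk["difficulty"] < min_difficulty:
            return False
        if not pow_check(blk):
            return False
    return True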

Measurements and Test Cases. While the Bitcoin proof of work can be verified using only two SHA256 operations per block, verifying Ethereum's proof of work is an extremely computationally expensive operation that necessitates many SHA3 calls. Modern Ethereum clients alleviate this problem by constructing large RAM caches to amortize this computation cost over many blocks in the chain. Because our implementation is a prototype that is constrained by SGX's 128MB memory limit, we did not optimize our implementation in this way. This leaves our prototype slower than normal Ethereum clients. Figure 6 shows the running times of various parts of our implementation. In addition to the timing results we gathered from running our enclaves, we also implemented and benchmarked SHA256 in SGX to estimate the block verification time a custodian would take if we had instead used Bitcoin.

18The difficulty is a variable field that determines the complexity of the proof of work required for each block. Since this field can change, we require our blockchain verifier to adjust the number of blocks required for a proof of work, to avoid forged fragments that claim an artificially low difficulty.


Computation Section              Running Time   Percentage
Verification (Prototype SHA)     18.35 s        100%
  SHA3-512                       18 s            98.8%
  Hash Memory Management         14.62 ms         0.0%
  Other Computation              0.22 s           1.1%
Verification (Cryptopp)          2.27 s         100%
  SHA3-512                       2.14 s          94.2%
  Hash Memory Management         7.5 ms           0.0%
  Other Computation              0.13 s           5.8%
SGX Overhead                     1.87 s         100%
  Enclave Setup                  1.80 s          96.3%
  Deserialization                0.07 s           3.7%
Verification (Bitcoin)           1.6 µs         100%

Figure 6: Computation times of custodian code. Verification times shown are for a single block.

We emphasize that our implementation is a prototype; the difficulties of engineering under the limitations of the current SGX system led to design choices which are not optimal.

7 Related Work

Intel's SGX. Intel's Software Guard Extensions (SGX) allow for the creation of enclaves, a trusted execution environment [sgx, MAB+13]. These enclaves can be statically disassembled, but are opaque while running. Enclaves can also attest to their current state, proving correct execution [JSR+16, AGJS13]. SGX can cryptographically seal data to be used across multiple invocations – corresponding to the encrypted state used in our protocol. Data can either be sealed to the owner of the enclave or just to the specific enclave instance [AGJS13, B16]. Production enclaves can only be produced by obtaining a production license and signing key from Intel, which whitelists the licensee's key [Rao16, Int16].

Ethereum and smart contracts. Ethereum is a modern blockchain technology, built on a P2P network, that supports both traditional payment applications as well as distributed consensus computing. Transactions can be directed at users or at smart contracts, in which case miners will run the functionality within the contract. For the output to be saved, each miner runs the function until its termination, consuming gas for each instruction executed. If all miners reach a consensus on the function output, the output and state are saved. In any other situation, including insufficient gas, the miners terminate and return the contract to its previous state. Ethereum and other smart contract systems contain many more features which we do not describe, but which can be found in [Ethb].

Other applications of smart contracts. Smart contracts are small programs executed autonomously by the blockchain. Examples include Town Crier [ZCC+16], which leverages Intel's SGX to provide an authenticated data feed that can bridge the gap between already existing websites and smart contracts. In [TZL+17], Tramer et al. create the notion of a sealed-glass proof, which leverages Intel's SGX to provide confidentiality and integrity assurances even in the face of side-channel leakage. In a more nefarious setting, Juels et al. introduce the concept of a criminal smart contract which could help enable the (anonymous) commission of a wider range of criminal acts [JKS16].

Ransomware. Ransomware is a class of malware that encrypts data on a victim's system, offering decryption in exchange for payment of a monetary ransom. First proposed by Young et al. in 1996 [YY96], as of 2016 ransomware infections have produced an estimated $1 billion in direct and indirect costs [FG16]. The idea of combining malware and trusted hardware is not a new one: in 2011 Dunn et al. [DHWW11] proposed the idea of hiding malware using a trusted platform module (TPM).


Distributed logging. Secure, distributed logging is an important area of systems research that has been active for a long time [SK98, PPW13, VWGP12, GMMP12]. Of particular note to our work is the commercial product Guardtime [Gua], based on Keyless Signatures' Infrastructure [BKL13]. This project uses ledgers to record events and is able to similarly record log statements.

References

[ABG+13] Prabhanjan Ananth, Dan Boneh, Sanjam Garg, Amit Sahai, and Mark Zhandry. Differing-inputs obfuscation and applications. Cryptology ePrint Archive, Report 2013/689, 2013. http://eprint.iacr.org/2013/689.

[ADMM14] Marcin Andrychowicz, Stefan Dziembowski, Daniel Malinowski, and Lukasz Mazurek. Secure multiparty computations on Bitcoin. In Security and Privacy (SP), 2014 IEEE Symposium on, pages 443–458. IEEE, 2014.

[AGJS13] Ittai Anati, Shay Gueron, Simon Johnson, and Vincent Scarlata. Innovative technology for CPU based attestation and sealing. In Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy, volume 13, 2013.

[App] Apple Computer. Answers to your questions about Apple and security. Available at http://www.apple.com/customer-letter/answers/.

[App16] Apple Computer. iOS Security: iOS 9.3 or later. Available at https://www.apple.com/business/docs/iOS_Security_Guide.pdf, May 2016.

[ARM] ARM Consortium. ARM TrustZone. Available at https://www.arm.com/products/security-on-arm/trustzone.

[B16] Alexander B. Introduction to Intel SGX sealing. Available at https://software.intel.com/en-us/blogs/2016/05/04/introduction-to-intel-sgx-sealing, 2016.

[BCD+14] Adam Back, Matt Corallo, Luke Dashjr, Mark Friedenbach, Gregory Maxwell, Andrew Miller, Andrew Poelstra, Jorge Timón, and Pieter Wuille. Enabling blockchain innovation with pegged sidechains. Available at https://blockstream.com/sidechains.pdf, October 2014.

[BCP13] Elette Boyle, Kai-Min Chung, and Rafael Pass. On extractability (a.k.a. differing-inputs) obfuscation. Cryptology ePrint Archive, Report 2013/650, 2013. http://eprint.iacr.org/2013/650.

[BCP14] Elette Boyle, Kai-Min Chung, and Rafael Pass. On extractability obfuscation. In TCC '14, pages 52–73, Berlin, Heidelberg, 2014. Springer Berlin Heidelberg.

[BGI+01] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil Vadhan, and Ke Yang. On the (im)possibility of obfuscating programs. Cryptology ePrint Archive, Report 2001/069, 2001. http://eprint.iacr.org/2001/069.

[BGK+13] Boaz Barak, Sanjam Garg, Yael Tauman Kalai, Omer Paneth, and Amit Sahai. Protecting obfuscation against algebraic attacks. Cryptology ePrint Archive, Report 2013/631, 2013. http://eprint.iacr.org/2013/631.

[BKL13] Ahto Buldas, Andres Kroonmaa, and Risto Laanoja. Keyless Signatures' Infrastructure: How to Build Global Distributed Hash-Trees, pages 313–320. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.


[Bla93] Matt Blaze. A cryptographic file system for UNIX. In Proceedings of the 1st ACM Conference on Computer and Communications Security, CCS '93, pages 9–16, New York, NY, USA, 1993. ACM.

[Bla94] Matt Blaze. Key management in an encrypting file system. In Proceedings of the USENIX Summer 1994 Technical Conference - Volume 1, USTC'94, Berkeley, CA, USA, 1994. USENIX Association.

[BLOO11] Amos Beimel, Yehuda Lindell, Eran Omri, and Ilan Orlov. 1/p-secure multiparty computation without honest majority and the best of both worlds. In Annual Cryptology Conference, pages 277–296. Springer, 2011.

[BMC+15] Joseph Bonneau, Andrew Miller, Jeremy Clark, Arvind Narayanan, Joshua A. Kroll, and Edward W. Felten. SoK: Research perspectives and challenges for Bitcoin and cryptocurrencies. In 2015 IEEE Symposium on Security and Privacy, pages 104–121, San Jose, CA, USA, May 2015. IEEE Computer Society Press.

[Bon12] Joseph Bonneau. The science of guessing: Analyzing an anonymized corpus of 70 million passwords. In IEEE S&P (Oakland) '12, SP '12, pages 538–552, Washington, DC, USA, 2012. IEEE Computer Society.

[BP15] Alex Biryukov and Ivan Pustogarov. Proof-of-Work as Anonymous Micropayment: Rewarding a Tor Relay, pages 445–455. Springer Berlin Heidelberg, Berlin, Heidelberg, 2015.

[BR13] Zvika Brakerski and Guy N. Rothblum. Virtual black-box obfuscation for all circuits via generic graded encoding. Cryptology ePrint Archive, Report 2013/563, 2013. http://eprint.iacr.org/2013/563.

[BR14] Zvika Brakerski and Guy N. Rothblum. Virtual black-box obfuscation for all circuits via generic graded encoding. In TCC '14, pages 1–25. Springer, 2014.

[CHK04] Ran Canetti, Shai Halevi, and Jonathan Katz. Chosen-ciphertext security from identity-based encryption. In EUROCRYPT, pages 207–222, 2004.

[Cim15] Catalin Cimpanu. Epic fail: Power worm ransomware accidentally destroys victim's data during encryption. Available at http://news.softpedia.com/news/epic-fail-power-worm-ransomware-accidentally-destroys-victim-s-data-during-encryption-495833.shtml, 2015.

[Cle86] Richard Cleve. Limits on the security of coin flips when half the processors are faulty. In Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, pages 364–369. ACM, 1986.

[coi] coinmarketcap.com. https://coinmarketcap.com/.

[cpp] cpp-ethereum. Available at https://github.com/ethereum/cpp-ethereum.

[CS04] Ronald Cramer and Victor Shoup. Design and analysis of practical public-key encryption schemes secure against adaptive chosen ciphertext attack. SIAM J. Comput., 33(1):167–226, January 2004.

[DHWW11] Alan M. Dunn, Owen S. Hofmann, Brent Waters, and Emmett Witchel. Cloaking malware with the trusted platform module. In USENIX Security Symposium, 2011.

[DMMQN11] N. Döttling, T. Mie, J. Müller-Quade, and T. Nilges. Basing obfuscation on simple tamper-proof hardware assumptions. In TCC '11. Springer, 2011.


[etha] The Ethereum Project. https://www.ethereum.org/.

[Ethb] Ethereum white paper. Available at https://github.com/ethereum/wiki/wiki/White-Paper.

[FG16] David Fitzpatrick and Drew Griffin. Cyber-extortion losses skyrocket, says FBI. Available at http://money.cnn.com/2016/04/15/technology/ransomware-cyber-security/, 2016.

[Fou] Apache Foundation. Hadoop Key Management Server (KMS) - Documentation Sets. Available at https://hadoop.apache.org/docs/stable/hadoop-kms/index.html.

[GGH+13] Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent Waters. Candidate indistinguishability obfuscation and functional encryption for all circuits. Cryptology ePrint Archive, Report 2013/451, 2013. http://eprint.iacr.org/2013/451.

[GGHW13] Sanjam Garg, Craig Gentry, Shai Halevi, and Daniel Wichs. On the implausibility of differing-inputs obfuscation and extractable witness encryption with auxiliary input. Cryptology ePrint Archive, Report 2013/860, 2013. http://eprint.iacr.org/2013/860.

[GGM13] Christina Garman, Matthew Green, and Ian Miers. Decentralized anonymous credentials. In NDSS '13, volume 2013, 2013.

[GGSW13] Sanjam Garg, Craig Gentry, Amit Sahai, and Brent Waters. Witness encryption and its applications. Cryptology ePrint Archive, Report 2013/258, 2013. http://eprint.iacr.org/2013/258.

[GK10] S. Dov Gordon and Jonathan Katz. Partial fairness in secure two-party computation. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 157–176. Springer, 2010.

[GKP+13a] Shafi Goldwasser, Yael Kalai, Raluca Ada Popa, Vinod Vaikuntanathan, and Nickolai Zeldovich. How to run Turing machines on encrypted data. Cryptology ePrint Archive, Report 2013/229, 2013. http://eprint.iacr.org/2013/229.

[GKP+13b] Shafi Goldwasser, Yael Tauman Kalai, Raluca Ada Popa, Vinod Vaikuntanathan, and Nickolai Zeldovich. How to Run Turing Machines on Encrypted Data, pages 536–553. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.

[GMMP12] Michael R. Grimaila, Justin Myers, Robert F. Mills, and Gilbert Peterson. Design and analysis of a dynamically configured log-based distributed security event detection methodology. The Journal of Defense Modeling and Simulation, 9(3):219–241, 2012.

[GMW87] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, STOC '87, pages 218–229, New York, NY, USA, 1987. ACM.

[Goo13] Dan Goodin. You're infected—if you want to see your data again, pay us $300 in bitcoins. https://arstechnica.com/security/2013/10/youre-infected-if-you-want-to-see-your-data-again-pay-us-300-in-bitcoins/, 2013.

[GRFJ14] Mainak Ghosh, Miles Richardson, Bryan Ford, and Rob Jansen. A TorPath to TorCoin: Proof-of-bandwidth altcoins for compensating relays. In 7th Workshop on Hot Topics in Privacy Enhancing Technologies (HotPETs 2014), Amsterdam, Netherlands, July 2014.

[Gua] Guardtime. Available at https://guardtime.com/.

[Int16] Intel Corporation. Product Licensing FAQ. Available at https://software.intel.com/en-us/sgx/product-license-faq, 2016.


[Jag15] Tibor Jager. How to build time-lock encryption. Cryptology ePrint Archive, Report 2015/478, 2015. http://eprint.iacr.org/2015/478.

[JKS16] Ari Juels, Ahmed Kosba, and Elaine Shi. The Ring of Gyges: Investigating the future of criminal smart contracts. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pages 283–295, New York, NY, USA, 2016. ACM.

[JMH15] Florian Jacob, Jens Mittag, and Hannes Hartenstein. A security analysis of the emerging P2P-based personal cloud platform MaidSafe. In TrustCom/BigDataSE/ISPA, pages 1403–1410. IEEE, 2015.

[JSR+16] Simon Johnson, Vinnie Scarlata, Carlos Rozas, Ernie Brickell, and Frank Mckeen. Intel Software Guard Extensions: EPID provisioning and attestation services, 2016.

[Kil06] Eike Kiltz. Chosen-ciphertext security from tag-based encryption. In Shai Halevi and Tal Rabin, editors, TCC '06, volume 3876, pages 581–600. Springer, 2006.

[KRS+03] Mahesh Kallahalla, Erik Riedel, Ram Swaminathan, Qian Wang, and Kevin Fu. Plutus: Scalable secure file sharing on untrusted storage. In Proceedings of the Second USENIX Conference on File and Storage Technologies, FAST '03, pages 29–42, Berkeley, CA, USA, 2003. USENIX Association.

[Krs16] Ivan Krstić. Behind the scenes with iOS security. In BlackHat. Available at https://www.blackhat.com/docs/us-16/materials/us-16-Krstic.pdf, August 2016.

[Las] LastPass. How is LastPass secure and how does it encrypt/decrypt my data safely? Available at https://lastpass.com/support.php?cmd=showfaq&id=6926.

[LMA+16] Kevin Lewi, Alex J. Malozemoff, Daniel Apon, Brent Carmer, Adam Foltzer, Daniel Wagner, David W. Archer, Dan Boneh, Jonathan Katz, and Mariana Raykova. 5Gen: A framework for prototyping applications using multilinear maps and matrix branching programs. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pages 981–992, New York, NY, USA, 2016. ACM.

[MAB+13] Frank McKeen, Ilya Alexandrovich, Alex Berenzon, Carlos V. Rozas, Hisham Shafi, Vedvyas Shanbhogue, and Uday R. Savagaonkar. Innovative instructions and software model for isolated execution. In HASP@ISCA, page 10, 2013.

[MRY04] Philip MacKenzie, Michael K. Reiter, and Ke Yang. Alternatives to non-malleability: Definitions, constructions, and applications. In Moni Naor, editor, TCC '04, volume 2951, pages 171–190. Springer, 2004.

[Nak12] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2009.

[nam16] Namecoin. https://namecoin.org/, November 2016.

[NFR+17] Kartik Nayak, Christopher W. Fletcher, Ling Ren, Nishanth Chandran, Satya Lokam, Elaine Shi, and Vipul Goyal. HOP: Hardware makes obfuscation practical. In NDSS '17, 2017.

[Pal15] Damian Paletta. FBI chief punches back on encryption. Wall Street Journal, July 2015.

[Pin03] Benny Pinkas. Fair secure two-party computation. In International Conference on the Theory and Applications of Cryptographic Techniques, pages 87–105. Springer, 2003.

[PPW13] Tobias Pulls, Roel Peeters, and Karel Wouters. Distributed privacy-preserving transparency logging. In Proceedings of the 12th ACM Workshop on Privacy in the Electronic Society, WPES '13, pages 83–94, New York, NY, USA, 2013. ACM.


[Pro] Android Project. Full-disk encryption. Available at https://source.android.com/security/encryption/full-disk.html.

[PRZB11] Raluca Ada Popa, Catherine M. S. Redfield, Nickolai Zeldovich, and Hari Balakrishnan. CryptDB: Protecting confidentiality with encrypted query processing. In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles, SOSP '11, pages 85–100, New York, NY, USA, 2011. ACM.

[Rao16] Dinesh Rao. Intel SGX product licensing. Available at https://software.intel.com/en-us/articles/intel-sgx-product-licensing, 2016.

[Rog02] Phillip Rogaway. Authenticated encryption with associated data. In CCS '02. ACM Press, 2002.

[sgx] Intel Software Guard Extensions (Intel SGX). https://software.intel.com/en-us/sgx.

[Sin16] Denis Sinegubko. Website ransomware - CTB-Locker goes blockchain. Sucuri Blog. Available at https://blog.sucuri.net/2016/04/website-ransomware-ctb-locker-goes-blockchain.html, April 2016.

[SK98] Bruce Schneier and John Kelsey. Cryptographic support for secure logs on untrusted machines. In Proceedings of the 7th Conference on USENIX Security Symposium - Volume 7, SSYM'98, pages 4–4, Berkeley, CA, USA, 1998. USENIX Association.

[Sko16] Sergei Skorobogatov. The bumpy road towards iPhone 5c NAND mirroring. CoRR, abs/1609.04327, 2016.

[Tec16] Technology.org. Ransomware authors arrest cases. Available at http://www.technology.org/2016/11/21/ransomware-authors-arrest-cases/, November 2016.

[The] The TrueCrypt Project. TrueCrypt. Available at http://truecrypt.sourceforge.net/.

[Tre16a] TrendMicro. Kansas hospital hit by ransomware, extorted twice. Available at http://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/kansas-hospital-hit-by-ransomware-extorted-twice, 2016.

[Tre16b] TrendMicro. Ransomware update: UltraCrypter not giving decrypt keys after payment, Jigsaw changes UI again. Available at http://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/ultracrypter-ransomware-no-decrypt-keys-jigsaw-changes-ui, 2016.

[TS15] Florian Tschorsch and Björn Scheuermann. Bitcoin and beyond: A technical survey on decentralized digital currencies. Cryptology ePrint Archive, Report 2015/464, 2015. http://eprint.iacr.org/2015/464.

[Twe] TweetNaCl. Available at https://tweetnacl.cr.yp.to.

[TZL+17] Florian Tramer, Fan Zhang, Huang Lin, Jean-Pierre Hubaux, Ari Juels, and Elaine Shi. Sealed-glass proofs: Using transparent enclaves to prove and sell knowledge. In IEEE Euro S&P, 2017.

[USB+15] Blase Ur, Sean M. Segreti, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor, Saranga Komanduri, Darya Kurilova, Michelle L. Mazurek, William Melicher, and Richard Shay. Measuring real-world accuracies and biases in modeling password guessability. In 24th USENIX Security Symposium (USENIX Security 15), pages 463–481, Washington, D.C., 2015. USENIX Association.


[VWGP12] Jo Vliegen, Karel Wouters, Christian Grahn, and Tobias Pulls. Hardware strengthening a distributed logging scheme. In Digital System Design. IEEE, 2012.

[Wea15] Nicholas Weaver. iPhones, the FBI, and Going Dark. Lawfare Blog, August 2015.

[Yao86] Andrew Chi-Chih Yao. How to generate and exchange secrets. In Proceedings of the 27th Annual Symposium on Foundations of Computer Science, SFCS '86, pages 162–167, Washington, DC, USA, 1986. IEEE Computer Society.

[YY96] Adam Young and Moti Yung. Cryptovirology: Extortion-based security threats and countermeasures. In Security and Privacy, 1996 IEEE Symposium on, pages 129–140. IEEE, 1996.

[ZCC+16] Fan Zhang, Ethan Cecchetti, Kyle Croman, Ari Juels, and Elaine Shi. Town Crier: An authenticated data feed for smart contracts. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pages 270–282, New York, NY, USA, 2016. ACM.

[Zet16] Kim Zetter. Why hospitals are the perfect targets for ransomware. Available at https://www.wired.com/2016/03/ransomware-why-hospitals-are-the-perfect-targets/, 2016.

A Security and Correctness Definitions for Custodian-Contract Interaction

We now present a formal simulation-based definition of security for a custodian-contract interaction protocol.

The Real experiment. The real-world experiment is parameterized on (Σ, F, U, q, qforge). Let U be an adversary playing the role of the user. First, run SetupF to create the secret S. Next, initialize z0 ← ε, STenc0 ← ε and TPub ← ε. For i = 1 to q, the adversary U may issue the following queries:

Query to the contract functionality. When U submits (Ci, Pi−1, Ui), run ExecuteContractF(Ci, Ui, zi−1). Append the public output to TPub, record the new state zi, and return all outputs to U.

Query to the custodian functionality. When U submits (STenci, Inputi, ri, [πi, Oi]), run ExecuteCustodianF on the provided inputs and secret S. Return the output to U.

Query the forgery oracle. U may query the forgery oracle F up to qforge times throughout the protocol. When U queries F on [Cf‖Cf−1‖Df‖Pf], generate a validating authenticator πf.

Let Uq be the adversary's output following the qth oracle query. Let TPub be the transcript of all outputs produced by the contract. The output of the Real experiment is defined as (Uq‖TPub).

The Ideal experiment. This experiment is parameterized on (F, S, q). Let S be an ideal-world adversary that conducts the following experiment. First, run SetupF to obtain S. Set ST0 ← ε. For i = 1 to q, the ideal-world adversary S may query an oracle OF,S,F. When S submits Inputi, Ui to this oracle, the oracle performs the following steps:

1. Compute Di+1 ← DoContract(Pi−1, Ui, zi) and append the public output to TPub.

2. Uniformly sample r ∈ {0, 1}^ℓ.

3. Compute (Outputi‖Pi, STi) ← NextStep(S, i, Inputi, STi−1, Di, r).


Let Sq be the ideal-world adversary's output following the qth oracle query. The output of the Ideal experiment is defined as (Sq‖TPub), where Pq is the public output of the contract. We are now ready to formalize the definition of simulation security for a Custodian-Contract scheme. Our definition of security requires that for all p.p.t. real-world adversaries U there must exist an ideal-world adversary S^{OF,S,F(·)} such that the distribution of outputs produced by U at the conclusion of the real-world experiment is indistinguishable from that of the ideal-world adversary S at the conclusion of the ideal-world experiment.

A.1 Third Party Privacy

Our main security definition (above) considers the integrity of CCS computations in the face of an adversarial User. However, a useful Custodian-Contract scheme must also address the privacy of an honest User against curious third parties, who may eavesdrop on (public) consensus transactions in order to recover non-public User inputs or outputs. We address this concern via a separate security experiment that models an honest Custodian, an honest User and an honest-but-curious Contract who acts as the observer, and may read any protocol messages sent to or from the Contract. Intuitively, we wish to ensure that for each of our protocols, the curious Contract cannot learn the input Inputi. We address this property via a real/ideal world definition. In the real-world experiment, the adversary A is provided with an oracle that allows her to request that the User run the computation on a given set of inputs. A receives each input and output from the ExecuteContract call that results from this computation. In the ideal-world experiment, we require the existence of a simulator S that, on input a valid query from A (i.e., a query that would otherwise produce a valid output from ExecuteUser), makes the necessary query to ExecuteContract. The key in this definition is that S does not receive A's input (or any previous output). Our definition requires that no p.p.t. A can distinguish the two experiments.

The third-party privacy Real experiment (RealTPP). The experiment proceeds as follows. For i = 1 to q, the adversary A may query an oracle on tuples of the form (Inputi, Ui, C′). In response the oracle samples random coins ri and returns Ci ← ExecuteUserF(Inputi, Ci−1, ri). The oracle then completes the protocol of §2.1 by running ExecuteContract and ExecuteCustodian on the resulting outputs, and finally provides A with all input/output values sent to the Custodian and the Contract:

(Ci, Ui, Pi−1, zi−1, Oi, πi, STenci, STenci−1, Outputi, Pi)

The third-party privacy Ideal experiment (IdealTPP). This experiment is parameterized by a probabilistic simulator algorithm SF that does not have access to the computing secrets S, s. It operates identically to the Real experiment except that in addition to running the protocol as described above, the oracle also runs SF with no user inputs to produce an output C′i, then calls a second instance of the Contract oracle to produce (Oi, πi). The oracle provides A with the tuple:

(C′i, Ui, Pi−1, zi−1, Oi, πi, STenci, STenci−1, Outputi, Pi)

In both cases the adversary A outputs a bit b.

Definition A.1 (Third party privacy) We say that a Custodian-Contract scheme Σccs is third party private for functionality F if for all p.p.t. real-world adversarial users A and all non-negative integers q, qforge, there exists a p.p.t. simulator SF such that Pr[RealTPP(Σccs, F, A, q)] − Pr[IdealTPP(F, SF, q)] ≤ ν(λ).

Remark. We note that this experiment provides privacy for the user inputs because in the ideal-world experiment, the simulator S does not receive the User input Inputi or the random coins specified by the User.

B Proof of Theorem 3.1

We now present a proof of Theorem 3.1. We briefly recall our two experiments, Real and Ideal, from Appendix A. In Real, the protocol is run as described in Section 2.2, with A allowed to make up to qforge queries to the forgery oracle F. This oracle takes as input Of = [Cf‖Cf−1‖Df‖Pf] from the adversary and outputs the authenticator πf (note that we assume it is impossible for AF to forge an authenticator independently). In the Ideal experiment there exists an ideal oracle that runs the program functionality and maintains its own state. Additionally, in the ideal experiment we give S access to up to qforge calls to Single Step Oracles. After each run of the ideal functionality, the ideal oracle outputs an encrypted checkpoint that can only be used by the single step oracles. These oracles can run exactly one step of the function NextStep from a supplied start point and then can never run anything again.

We prove that there exists a simulator S such that no p.p.t. adversary AF can distinguish the distribution of outputs from the Real and Ideal experiments with non-negligible probability. To prove this statement, we first describe the operation of the simulator S. We then demonstrate that if there exists a p.p.t. AF that distinguishes the output of the experiments, then one or more of the following exist: (1) an adversary that breaks the EU-CMA security of the authenticator, (2) an attack on the binding property of the commitment scheme, or (3) a distinguisher for the pseudorandom function PRF. We will now describe the operation of the simulator S.

The operation of S. The simulator S interacts with the real-world adversary AF using the interface described in the real-world experiment. The simulator also has oracle access to the ideal functionality (which cannot be rewound) as well as an authenticator oracle Π that will produce a valid authenticator π for chosen messages. Finally, the simulator has oracle access to a forgery oracle F and qforge Single Step Oracles, as described above. S maintains three tables, Tcon, Tcus, and Tforge. Each query AF makes to the various oracles is stored in the appropriate table. Queries to OContract are stored in Tcon, queries to OCustodian are stored in Tcus, and queries to F are stored in Tforge. For example, the ith row of Tcon contains

(Ci, Ui, πi, Oi)

Similarly, Tcus stores all the queries and responses A makes and receives to OCustodian. The ith row contains:

(STenci, Inputi, ri, πi, Oi, Outputi, Pi, STenci+1)

and Tforge contains the following in the ith row:

(πi, Oi)

All tables are initialized to be empty. S receives and processes queries from AF as follows. When A attempts to query the contract oracle on Ci, Ui, the simulator does the following:

1. S calls ODoContract to generate Di.

2. S finds Ci−1 in Tcon and generates Oi = [Ci‖Ci−1‖Di‖Pi]. S then queries the signing oracle Π on Oi and receives πi. S adds the entry (Ci, πi, Oi) to Tcon.

3. S returns (πi, Oi) to A.

When A attempts to query the custodian oracle on (STenci, Inputi, ri, πi, Oi), the simulator does the following:

1. S checks Tcus for any previous entries matching the query. If there are any, S returns (STenci+1, Outputi) from that table entry.

2. S checks if πi is in Tforge. If it is, S runs (STenci+1, Outputi) ← SingleStep(STenci, Inputi). S halts and outputs (STenci+1, Outputi).

3. S looks up πi, Oi in the ith row of Tcon and makes sure they match some entry. Otherwise, S halts and returns ⊥.

4. S parses Oi = [Ci‖Ci−1‖Di‖Pi] and checks that Ci = Commit(Inputi, Ci−1; ri). Otherwise, S halts and returns ⊥.

5. If STenci = ε, S checks that Ci−1 = ε. If it does not, S halts and returns ⊥.

6. S checks that STenci is in row i − 1 of Tcus. Otherwise, S halts and returns ⊥.

7. S calls ODoWork on Inputi and receives Outputi.


8. S samples STenci+1 ←$ {0, 1}^ℓ and appends (STenci, Inputi, ri, Oi, Outputi, STenci+1) to Tcus.

9. S returns (Outputi, STenci+1) to A.

When A attempts to query F on [Cf‖Cf−1‖Df‖Pf], the simulator does the following:

1. S simulates the forgery oracle F and generates πf.

2. S appends (πf, Of) to Tforge.

Let A be an adversarial user that succeeds in distinguishing the output of the Ideal and Real experiments. We now show that such an adversary violates one of our assumptions above. The proof proceeds via a series of hybrids, where the first hybrid (Game 0) is identically distributed to the Real experiment, and the final hybrid represents the Ideal experiment instantiated with the simulator S described above. For notational convenience, let Adv[Game i] be A's advantage in distinguishing the output of Game i from Game 0, i.e., the Real distribution.

Game 0. In this hybrid, S responds to queries as in the Real experiment.

Game 1. In this hybrid, we modify S to abort and output Eventforge in the event that A queries OCustodian on input (STenci, Inputi, ri, [πi, Oi]) for any valid authentication tag (πi, Oi) that was not supplied by a previous oracle call to S. We note that if A succeeds in producing this event with non-negligible probability, then we obtain a trivial attack on the SUF-AUTH security of the contract authenticator. Since by assumption the probability of such an event is negligible, we bound Adv[Game 1] ≤ ν1(λ).

Game 2. In this hybrid we modify the operation of S such that it aborts and outputs Eventcollision in the event that A queries OCustodian twice with commitment inputs Ci = Cj and two valid distinct commitment openings (Inputi, Ci, ri) ≠ (Inputj, Cj, rj). By Lemma B.1 this event occurs with at most negligible probability, hence Adv[Game 2] − Adv[Game 1] ≤ ν2(λ).

Game 3. In this hybrid we modify S to generate each state encryption key ki by sampling uniformly at random from {0, 1}^ℓ. Recall that in the preceding hybrid, each ki is generated as PRFS(“enc”‖Ci). Note as well that if the Simulator has not aborted, then each Ci is unique. Thus if there exists an attacker that distinguishes Game 3 from Game 2 with non-negligible advantage, then we can trivially construct a distinguisher that distinguishes the output of PRFS (evaluated on unique inputs) from that of a random function. Thus under the assumption that PRF is pseudorandom we have that Adv[Game 3 − Game 2] ≤ ν3(λ).

Game 4. In this hybrid we modify the operation of S such that it aborts and outputs Eventauth in the event that A queries OCustodian on valid input STenci−1, where STenci was not supplied as an output from a previous call to this oracle (excepting the special case where STenci−1 = Ci = ε). By Lemma B.2 we have that Adv[Game 4 − Game 3] ≤ ν4(λ).

Game 5. In this hybrid we modify the operation of S so that each ciphertext STenci is replaced with a random string of equivalent length. By Lemma B.3, Adv[Game 5 − Game 4] ≤ ν5(λ).

Game 6. In this hybrid we modify the operation of S such that when A queries the custodian repeatedly on the same tuple (STenci, Inputi, ri, [πi, Oi]), all queries subsequent to the first query return the same result as the first query. Recall that by definition the ExecuteCustodian algorithm is deterministic; thus, repeated queries on the same input will always produce the same output. Hence by definition Adv[Game 6 − Game 5] = 0.

The advantage gained by the adversary in distinguishing between the Real experiment and the Ideal experiment is bounded by the negligible function ν1(λ) + · · · + ν5(λ) + 0. Thus by the hybrid lemma, the distributions of Game 6 and Game 0 are computationally indistinguishable.

We now conclude with the following Lemmas.


Lemma B.1 (Input Commitments are Unique) If the commitment scheme Σcom is binding, then Adv[Game 2] − Adv[Game 1] ≤ ν2(λ).

Proof. Let (C0, . . . , Ck) be the sequence of valid commitments provided to the Custodian after the kth successful invocation (i.e., those commitments for which A has provided a correct opening tuple). Recall that for i > 0 each commitment is computed as Ci = Commit(Inputi‖Ci−1; ri), with the special case that the initial commitment C0 = ε. For the abort condition of Game 2 to occur, there must be at least two duplicate commitments in the list above, and we argue that this event will occur with at most negligible probability if the commitment scheme is binding. We proceed using induction.

Let us consider only a prefix of the list that consists of the commitments (C0, . . . , Cj) for some j ≥ 0. Clearly there can be no duplicate commitments if j ≤ 1: this would imply that there is either a single commitment, or that C1 = C0 = ε, which is not possible by our definition of the commitment scheme. Now let us consider the inductive case. Let j > 1 be an integer such that (C0, . . . , Cj−1) contains no duplicate commitments, but Cj = Cl for some 0 < l < j.19 Then we have two cases. In the first case, if (Inputj, Cj−1, rj) = (Inputl, Cl−1, rl) then we have Cj−1 = Cl−1. However, this violates our initial assumption that there are no duplicates in the set (C0, . . . , Cj−1). This leaves only the second case, where the two input tuples are not equal yet the resulting commitments are identical. If A causes this to occur with non-negligible probability, we can use A to construct an attacker that violates the binding property of the commitment scheme with identical probability. Thus, under the assumption that commitments are binding, we have that Adv[Game 2] − Adv[Game 1] ≤ ν2(λ). □

Lemma B.2 (Unforgeability of State Ciphertexts) If the authenticated encryption scheme ΣAE satisfies the AE definition, then Adv[Game 4 − Game 3] ≤ ν4(λ).

Proof. If Adv[Game 4 − Game 3] is non-negligible, this implies that A induces Eventauth with non-negligible probability. We show that such an adversary can be used to construct an adversary B that has a non-negligible advantage against the authenticity definition of Rogaway [Rog02]. Let q be the maximum number of Custodian oracle calls that can be made by A. B runs A internally. It first samples an integer j ∈ [0, q) and runs the protocol of Game 4. For each query i ≠ j, B runs the protocol as usual. At the jth query, B uses the AE challenge oracle to produce the output ciphertext STencj, using the correct input and a 0 message (of the same length) as the challenge pair. If A submits a forged value STencj+1 ≠ STencj, then B sends this value to the decryption oracle to learn the bit b. If A succeeds with probability ε, then B succeeds with advantage ε/q. Since A can forge the state ciphertext with at most negligible probability, Adv[Game 4 − Game 3] ≤ ν4(λ). □

Lemma B.3 (Indistinguishability of State Ciphertexts) If the authenticated encryption scheme ΣAE satisfies the AE definition, then Adv[Game 5 − Game 4] ≤ ν5(λ).

Proof. We assume that ΣAE satisfies the random-ciphertext notion of privacy: formally, Adv^priv_Π(A) = Pr[K ←$ K : A^{EK(·,·)} = 1] − Pr[A^{$(·,·)} = 1]. Let k be a random encryption key. We show that if A can distinguish the distribution of Game 4, where STenci is a valid encryption of a state value, from Game 5, where STenci is a random string of the same length, we can construct an adversary B such that Adv^priv_Π(B) > ν5(λ)/q. B samples a value j ∈ [1, q) and runs the protocol of Game 4. To answer A's jth query, B queries the encryption oracle and returns the result as STencj. Note that if the result is a correct encryption of the message, the distribution is as in Game 4. If the result is a random string, then the distribution is as in Game 5. If A distinguishes the two distributions with non-negligible advantage, then B uses A's output to obtain a non-negligible Adv^priv_Π(B). Since we assume that the AE scheme is secure, this produces a contradiction. Thus Adv[Game 5 − Game 4] ≤ ν5(λ). □

19We ignore the case where Cl = C0 = ε, since this cannot occur by definition.


C A Multiparty Public Key Encryption Scheme

In this section we sketch a construction for the multiparty public key encryption scheme we use in §4.4. While the techniques used to construct this scheme are not novel, we include them for completeness. First, we briefly define multiparty public key encryption.

C.1 Multiparty Public Key Encryption (MPKE)

Let M be a message space. A multiparty public key encryption (MPKE) scheme consists of three possibly probabilistic algorithms (PKMGen, PKMEnc, PKMDec) with the following description:

PKMGen(1λ) → (pk, sk). On input a security parameter, the key generation algorithm outputs a public and secret key.

PKMEnc(pk1, . . . , pkn, m) → CT. On input a set of n public keys and a message m, the encryption algorithm outputs a single ciphertext CT.

PKMDec(pk, sk, CT) → m′. On input a single public and secret key as well as a ciphertext CT, outputs a decrypted message or the distinguished error symbol ⊥.

Correctness. Intuitively, an MPKE scheme is correct if for all m ∈ M, integers n > 0 and 0 < j ≤ n, and key vectors (pk1, . . . , pkn) where each (pki, ski) ∈ PKMGen(1λ), the following relation holds with probability ≥ 1 − ν(λ) over the random coins of the experiment:

PKMDec(pkj, skj, PKMEnc(pk1, . . . , pkn, m)) = m.

Security. Let the random variable IND-MPKE-CCA^n_b(Π, A, λ), where b ∈ {0, 1}, integer n > 0, A = (A1, A2) and λ ∈ N, denote the result of the following probabilistic experiment:

IND-MPKE-CCA^n_b(Π, A, λ):
  (pk1, sk1) ← PKMGen(1λ), . . . , (pkn, skn) ← PKMGen(1λ)
  (m0, m1, z) ← A1^{Odec(·,·)}(pk1, . . . , pkn) s.t. m0, m1 ∈ M
  y ← PKMEnc(pk1, . . . , pkn, mb)
  B ← A2^{Odec(·,·)}(y, z)
  Output B

We define Odec(i, CT) to be a decryption oracle that, on input an integer i and a ciphertext CT, outputs PKMDec(pki, ski, CT) if CT ≠ y, and otherwise outputs ⊥.

An MPKE scheme Π is IND-MPKE-CCA-secure for keysets of size n if for all p.p.t. algorithms A the following two ensembles are computationally indistinguishable:

{IND-MPKE-CCA^n_0(Π, A, λ)}λ ≈c {IND-MPKE-CCA^n_1(Π, A, λ)}λ

Remark. We note that the decryption oracle Odec operates over a single public key pki and does not take as input the remaining public keys. Our definition requires that the decryption oracle operates correctly even without access to this additional information.

C.2 An MPKE scheme from TBE

We now describe one exemplary construction of an MPKE scheme. The key ingredient in our construction is a Tag-Based Encryption (TBE) scheme [MRY04]. TBE is a form of public-key encryption in which the encryption and decryption algorithms take an additional input called a tag, drawn from an exponentially-sized tag space. TBE schemes are known from a variety of underlying assumptions in the standard model, including DDH in cyclic groups [CS04] and well-studied bilinear assumptions [Kil06].20

Tag-based Encryption. A TBE scheme consists of three (possibly probabilistic) algorithms (TBEKG, TBEEnc, TBEDec). Key generation operates identically to a public key encryption scheme. On input a public key, a message m and a tag T, TBEEnc produces a ciphertext CT. Similarly, on input a public and secret key, a ciphertext and a tag T, TBEDec outputs a message m′ or ⊥.

We require a TBE that provides selective-tag CCA security (IND-sTBE-CCA). Intuitively, this definition is a variant of the classical IND-CCA2 definition in which the adversary has access to a decryption oracle that, on input a tag T and a ciphertext CT, provides the decryption of CT with tag T under the following restriction: the attacker may not query the decryption oracle on the specific tag T∗ that was used to formulate the challenge ciphertext. In the selective-tag model the attacker selects the tag T∗, but must do so at the start of the experiment, prior to receiving any public keys or parameters. See [Kil06, §3.2] for a formal definition of the experiment.

The MPKE construction. Using TBE as a building block, we now describe our main construction of MPKE. This approach employs the TBE-to-PKE construction independently proposed by MacKenzie et al. [MRY04] and Canetti et al. [CHK04]. Let M be the message space of the underlying TBE scheme, and let T be the scheme's exponentially-sized tag space. We additionally require a SUF-CMA one-time signature scheme OTS = (OTSKG, OTSSign, OTSVerify) where the public keyspace for the OTS is included in (or can be mapped to) T.

PKMGen(1λ). Output (pk , sk)← TBEKG(1λ).

PKMEnc(pk1, . . . , pkn, m) → CT. First, sample a keypair for the one-time signature: (svk, ssk) ← OTSKG(). Let p represent a mapping between public keys and ciphertexts. Now for i = 1 to n, compute Ti ← svk and Ci ← TBEEnc(pki, Ti, m). Then compute σ ← OTSSign(svk, ssk, (p, C1, . . . , Cn)). Finally, output CT ← (p, C1, . . . , Cn, σ).

PKMDec(pk, sk, CT) → m′. First parse CT as (p, C1, . . . , Cn, σ). Now verify the signature σ using OTSVerify(svk, (p, C1, . . . , Cn), σ). If this check outputs 0, return ⊥. Otherwise use p to identify the ciphertext Ci corresponding to pk (if none is found, output ⊥). Set Ti ← svk and output m′ ← TBEDec(pk, sk, Ti, Ci).
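For concreteness, the construction above can be expressed as the following Python sketch. The TBE and one-time signature schemes are supplied as opaque callables; their instantiation (for example, the Kiltz scheme [Kil06] for TBE) is assumed rather than implemented here, and the dictionary-based ciphertext format is purely illustrative.

from typing import Callable, List, Tuple

# Assumed interfaces: a selective-tag-CCA TBE scheme and a SUF-CMA one-time signature.
TbeEnc = Callable[[bytes, bytes, bytes], bytes]         # tbe_enc(pk, tag, m) -> component ciphertext
TbeDec = Callable[[bytes, bytes, bytes, bytes], bytes]  # tbe_dec(pk, sk, tag, ct) -> m (or raises)
OtsKeygen = Callable[[], Tuple[bytes, bytes]]           # ots_keygen() -> (svk, ssk)
OtsSign = Callable[[bytes, bytes], bytes]               # ots_sign(ssk, msg) -> sig
OtsVerify = Callable[[bytes, bytes, bytes], bool]       # ots_verify(svk, msg, sig) -> bool

def pkm_enc(tbe_enc: TbeEnc, ots_keygen: OtsKeygen, ots_sign: OtsSign,
            pks: List[bytes], m: bytes) -> dict:
    # Fresh one-time signing key per ciphertext; its verification key doubles as the TBE tag.
    svk, ssk = ots_keygen()
    components = {pk: tbe_enc(pk, svk, m) for pk in pks}   # the mapping p from keys to ciphertexts
    body = repr(sorted(components.items())).encode()       # deterministic serialization for signing
    return {"svk": svk, "components": components, "sig": ots_sign(ssk, body)}

def pkm_dec(tbe_dec: TbeDec, ots_verify: OtsVerify,
            pk: bytes, sk: bytes, ct: dict) -> bytes:
    # The one-time signature binds all components together, so any mauled ciphertext is rejected.
    body = repr(sorted(ct["components"].items())).encode()
    if not ots_verify(ct["svk"], body, ct["sig"]):
        raise ValueError("invalid one-time signature")
    if pk not in ct["components"]:
        raise ValueError("no component encrypted to this public key")
    return tbe_dec(pk, sk, ct["svk"], ct["components"][pk])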

Security. We leave correctness as an exercise for the reader, and proceed with a brief sketch of the security argument for the above scheme, which proceeds nearly identically to the arguments of [MRY04, CHK04].

Proof sketch. Let A be an adversary that succeeds with non-negligible advantage in the IND-MPKE-CCA experiment against the MPKE scheme. We construct a second adversary B that succeeds with non-negligible advantage in the IND-sTBE-CCA experiment for the TBE scheme. At the start of the experiment, B generates a keypair (svk∗, ssk∗) for the OTS and selects T∗ = svk∗. When B receives pk from the TBE challenger, it selects a random integer j ∈ [n] and sets pkj ← pk. For k = 1 to n, k ≠ j, it computes (pkk, skk) ← PKMGen(1λ) and retains the secret key. B now runs A on input (pk1, . . . , pkn). Whenever A queries the decryption oracle on (CT, i) where svk = T∗, B returns ⊥. Otherwise B first verifies the attached signature and outputs ⊥ on failure. If the signature verifies and i = j, B queries the TBE decryption oracle on Ci and returns the result. If i ≠ j, then B computes the decryption using the known secret key. To produce the challenge ciphertext y, B transmits (m0, m1) to the TBE challenge oracle and sets Cj to be the result. B samples a random bit b′ and for k = 1 to n (and k ≠ j) computes Ck ← TBEEnc(pkk, T∗, mb′), and then signs the resulting vector (p, C1, . . . , Cn) using ssk∗ to produce y. When A outputs its guess, B outputs the same bit as its own guess.

20Moreover, if TBE schemes are not available, then Identity-Based Encryption schemes can be used to construct TBE. This implies a much wider range of assumptions from which these schemes may be constructed.


We argue that with probability 1/2, B selects b′ such that b′ = b, and thus the distribution of y will be identical to that of the real protocol. We further argue that in this case all outputs from the decryption oracle Odec are distributed identically to the real protocol, with the exception of the case where svk = T∗ and CT ≠ y. In that one case, however, it is easy to show that B's simulation will differ from the real experiment if and only if A is able to produce a new pair (σ, m) not signed by B such that this pair validates using svk. We argue that if A succeeds in this with non-negligible probability, that would imply a successful adversary against the SUF-CMA property of the OTS. Therefore this condition must occur with at most negligible probability. Hence if A succeeds in the IND-MPKE-CCA experiment with advantage ε, then B succeeds in the IND-sTBE-CCA experiment with advantage at least ε/2 − ν(λ). □