DIMACS Workshop on Outsourcing Computation Securely

July 6 - 7, 2017
DIMACS Center, CoRE Building, Rutgers University

Organizers:
Marina Blanton, University at Buffalo, mblanton at buffalo.edu
Guy Rothblum, Weizmann Institute of Science, rothblum at alum.mit.edu
Michael Walfish, New York University, mwalfish at cs.nyu.edu
Presented under the auspices of the DIMACS Special Focus on Cryptography as part of the DIMACS/Simons Collaboration in Cryptography and the DIMACS Special Focus on Cybersecurity.

Abstracts:


Gilad Asharov, Cornell Tech

Title: Oblivious Computation with Data Locality

Oblivious RAM compilers, introduced by Goldreich and Ostrovsky [JACM'96], compile any RAM program into one that is "memory-oblivious" (i.e., the access pattern to the memory is independent of the input). All previous ORAM schemes, however, completely break the locality of data accesses (by shuffling the data to pseudorandom positions in memory).

In this work, we initiate the study of locality-friendly oblivious RAMs: Oblivious RAM compilers that preserve the locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed; we refer to such schemes as Range ORAMs. Our main results demonstrate the existence of a statistically-secure Range ORAM with only poly-logarithmic overhead (both in terms of the number of memory accesses, and in terms of locality). In our most optimized construction, the overhead is only a logarithmic factor greater than the best ORAM scheme (without locality).

To further improve the parameters, we also consider the weaker notion of a File ORAM: whereas a Range ORAM needs to support read/write access to arbitrary regions of the memory, a File ORAM only needs to support access to pre-defined non-overlapping regions (e.g., files being stored in memory). Assuming one-way functions, we present a computationally-secure File ORAM that, up to log log n factors, matches the best ORAM schemes (i.e., we essentially get "locality for free").
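
To make these two notions concrete, the following sketch (in Python, with hypothetical names and signatures of our own choosing, not the authors' construction) spells out the interfaces that a Range ORAM and a File ORAM would expose.

    from abc import ABC, abstractmethod

    class RangeORAM(ABC):
        """Hypothetical interface: serves arbitrary contiguous regions while
        leaking (at most) the length of each request."""

        @abstractmethod
        def read_range(self, start: int, length: int) -> bytes:
            """Read `length` contiguous blocks starting at `start`."""

        @abstractmethod
        def write_range(self, start: int, data: bytes) -> None:
            """Write a contiguous region; only len(data) may be leaked."""

    class FileORAM(ABC):
        """Hypothetical interface: serves only pre-defined, non-overlapping
        regions ("files"), as in the weaker notion described above."""

        @abstractmethod
        def read_file(self, file_id: int) -> bytes:
            """Read one of the pre-defined files."""

        @abstractmethod
        def write_file(self, file_id: int, data: bytes) -> None:
            """Overwrite one of the pre-defined files."""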

To the best of our knowledge, before our work, the only works combining locality and obliviousness were for the special case of symmetric searchable encryption [Cash and Tessaro (EUROCRYPT'14), Asharov et al. (STOC'16)]. Searchable encryption can be viewed as a special case of a "read-only" File ORAM which leaks whether the same files are accessed again, and whether files contain the same keyword; this leakage, however, has been shown to be harmful in many applications, and is prevented by the definition of a File ORAM.

Joint work with: Hubert Chan, Kartik Nayak, Rafael Pass, Ling Ren and Elaine Shi.


Esha Ghosh, Brown University

Title: Hash First, Argue Later: Adaptive Verifiable Computations on Outsourced Data

Proof systems for verifiable computation (VC) have the potential to make cloud outsourcing more trustworthy. Recent schemes enable a verifier with limited resources to delegate large computations and verify their outcome based on succinct arguments: verification complexity is linear in the size of the inputs and outputs (not the size of the computation). However, cloud computing also often involves large amounts of data, which may exceed the local storage and I/O capabilities of the verifier, and thus limit the use of VC.

In this paper, we investigate multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. Hence, the verifier delegates both storage and computation to an untrusted worker. She uploads data and keeps hashes; exchanges hashes with other parties; verifies arguments that consume and produce hashes; and selectively downloads the actual data she needs to access.
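
A minimal sketch of this workflow, under assumed names (Digest, Argument, Worker, Verifier) that are not the paper's actual API, might look as follows; the point is only that the verifier stores digests and that verification consumes and produces digests rather than the data itself.

    from dataclasses import dataclass

    @dataclass
    class Digest:
        value: bytes      # succinct hash of a (possibly large) dataset

    @dataclass
    class Argument:
        proof: bytes      # succinct argument relating input and output digests

    class Worker:
        """Untrusted party that stores the data and runs the computations."""

        def store(self, data: bytes) -> Digest: ...
        def compute(self, relation: str, in_digest: Digest) -> tuple[Digest, Argument]: ...
        def fetch(self, digest: Digest) -> bytes: ...

    class Verifier:
        """Resource-limited client: keeps only digests, never the full data."""

        def verify(self, relation: str, in_digest: Digest,
                   out_digest: Digest, arg: Argument) -> bool:
            """Constant-time check that the data behind out_digest is the
            result of applying `relation` to the data behind in_digest."""
            ...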

Existing instantiations that fit our definition either target restricted classes of computations or employ relatively inefficient techniques. Instead, we propose efficient constructions that lift classes of existing argument schemes for fixed relations to multi-relation hash & prove schemes. Our schemes (1) rely on hash algorithms that run in time linear in the size of the input; (2) enable constant-time verification of arguments on hashed inputs; (3) incur minimal overhead for the prover. Their main benefit is to amortize the linear cost for the verifier across all relations with shared I/O. Concretely, compared to solutions that can be obtained from prior work, our new hash & prove constructions yield a 1,400x speed-up for provers. We also explain how to further reduce the linear verification costs by partially outsourcing the hash computation itself, obtaining a 480x speed-up when applied to existing VC schemes, even on single-relation executions.


Shai Halevi, IBM Research

Title: Implementing Cryptographic Obfuscation

General-purpose code obfuscation is an amazingly powerful technique, letting us hide secrets in arbitrary running software. The emergence of plausible constructions for cryptographic general-purpose obfuscation has transformed our thinking about what can and cannot be done in cryptography. Unfortunately, the known constructions are all very inefficient, and their security is poorly understood.

In this presentation I will discuss recent attempts at implementing the current constructions, focusing mostly on our recent implementation from "graph-induced" graded encoding schemes.

Based on joint work with Tzipora Halevi, Victor Shoup, and Noah Stephens-Davidowitz.


Justin Holmgren, MIT

Title: Delegation with (Nearly) Optimal Time/Space Overhead

Consider a scenario in which a client wants to utilize a powerful, but untrusted, server to perform computations that are beyond the client's ability. Since the client does not trust the server, we would like the server to certify the correctness of the result. We would like the verification procedure to be extremely efficient, while the complexity of proving should not be much larger than that of performing the computation itself.

Assuming LWE, we construct a one-round argument-system for proving the correctness of any time T and space S computation, in which both the verifier and prover are extremely efficient. In particular, the prover runs in time T*poly(k) and space S + poly(k), where k is a security parameter, and the verifier runs in time poly(k, log T).

Our result builds on a previous line of work initiated by Kalai et al. (STOC, 2014). The prover in all of these works runs in time T^c for a large constant c (where c is at least 3). As an additional contribution we significantly simplify a central construct that appears in all of these works.


Yael Kalai, Microsoft Research

Title: A New Perspective on Delegating Computation -- In Honor of the 10 Year Anniversary of GKR

In this talk I will give an overview of the theoretical literature on delegation. I will focus on the GKR protocol, and present a new perspective on how to overcome the depth bound in the GKR protocol, and how it can be used to obtain a 2-message delegation scheme under the LWE assumption.


Dakshita Khurana, UCLA

Title: Distinguisher-Dependent Simulation in Two Rounds and its Applications

I will describe a new simulation technique for proving privacy of argument systems that makes black-box use of the adversary as well as the distinguisher. We use this technique to construct several round-optimal arguments, many of which were previously unknown even using non-black-box simulation techniques. We obtain:

- Two-message witness indistinguishable (WI) arguments for NP from different assumptions than previously known.
- Two-message arguments, and three-round arguments of knowledge, for NP that achieve strong witness indistinguishability (strong WI), witness hiding (WH) and distributional weak zero-knowledge (WZK). These privacy properties are achieved only in a setting where the instance is determined by the prover in the last round of the interaction. The soundness of these protocols is guaranteed against adaptive provers.

At the core of our techniques is a proof that the Kalai-Raz (Crypto 2009) transform, applied to Sigma-protocols, results in a two-message argument for NP that additionally has strong privacy properties. Our three-round protocols can be based on DDH, QR, or N-th residuosity, and our two-round protocols require quasi-polynomial hardness of the same assumptions. Our simulation technique bypasses known lower bounds on black-box simulation (SICOMP 1996) by using the distinguisher's output in a meaningful way. We believe that this technique is likely to find additional applications in the future.

Based on joint work with Abhishek Jain, Yael Kalai and Ron Rothblum.


Kobbi Nissim, Georgetown and Harvard

Title: Accessing Data while Preserving Privacy

Efficient implementations of secure outsourced databases have been shown to lack privacy, mostly due to their reliance on weak cryptographic constructs. We will ask whether the use of strong cryptographic primitives directly implies privacy, and suggest how to enhance the security of outsourced databases using tools from differential privacy.

Based on joint work with Georgios Kellaris, George Kollios, and Adam O'Neill


Omer Paneth, MIT

Title: On Zero-Testable Homomorphic Encryption and Publicly Verifiable Non-Interactive Arguments

We define and study zero-testable homomorphic encryption (ZTHE) -- a semantically secure, somewhat homomorphic encryption scheme equipped with a weak zero test that can identify trivial zeros. These are ciphertexts that result from homomorphically evaluating an arithmetic circuit computing the zero polynomial over the integers. This is a relaxation of the (strong) zero test provided by the notion of graded encodings, which identifies all encodings of zero.
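
As a rough illustration of the syntax just described, here is a hypothetical interface sketch for a ZTHE scheme; the names are our own assumptions, and the essential feature is that the zero test is only guaranteed to recognize trivial zeros.

    from abc import ABC, abstractmethod
    from typing import Any, Sequence

    Ciphertext = Any

    class ZTHE(ABC):
        """Hypothetical syntax for a zero-testable homomorphic encryption scheme."""

        @abstractmethod
        def keygen(self, security_parameter: int) -> tuple[Any, Any]:
            """Return a (public key, secret key) pair."""

        @abstractmethod
        def encrypt(self, pk: Any, message: int) -> Ciphertext:
            """Semantically secure encryption of an integer message."""

        @abstractmethod
        def eval(self, pk: Any, circuit: Any, cts: Sequence[Ciphertext]) -> Ciphertext:
            """Homomorphically evaluate an arithmetic circuit on ciphertexts."""

        @abstractmethod
        def weak_zero_test(self, pk: Any, ct: Ciphertext) -> bool:
            """Accepts trivial zeros (results of evaluating a circuit computing
            the zero polynomial over the integers); may reject other
            encryptions of zero."""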

We show that ZTHE can suffice for powerful applications. Based on any ZTHE scheme that satisfies the additional properties of correctness on adversarial ciphertexts and multi-key homomorphism, we construct publicly verifiable non-interactive arguments for delegating computation. Such arguments were previously constructed from indistinguishability obfuscation or based on so-called knowledge assumptions. The arguments we construct are adaptively sound, based on an efficiently falsifiable assumption, and only make black-box use of the underlying cryptographic primitives.

As a proof of concept, we construct ZTHE that is sufficient for our application based on an efficiently falsifiable assumption over strong graded encodings.

Joint work with Guy Rothblum.


Charalampos Papamanthou, University of Maryland

Title: Practical Searchable Encryption for Data on Disk

Searchable encryption (SE) allows a client to outsource a dataset to an untrusted server while enabling the server to answer keyword queries in a private manner. To scale SE to big data using external memory, new schemes with small locality have been proposed, where locality is defined as the number of non-contiguous reads that the server makes for each query. Previous space-efficient SE schemes achieve optimal locality by increasing the read efficiency---the number of additional memory locations (false positives) that the server reads per result item. This can hurt practical performance.
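
To make these two metrics concrete, the small helper below computes them for a trace of server reads; the function names and the example trace are our own, chosen to match the definitions above.

    def locality(reads):
        """Number of non-contiguous reads: 1 plus the number of "jumps" in the trace."""
        if not reads:
            return 0
        jumps = sum(1 for prev, cur in zip(reads, reads[1:]) if cur != prev + 1)
        return 1 + jumps

    def read_efficiency(reads, num_results):
        """Additional memory locations (false positives) read per result item."""
        return (len(reads) - num_results) / num_results

    # Example: a query with 4 results served by two contiguous runs of 3 blocks
    # each has locality 2 and read efficiency (6 - 4) / 4 = 0.5.
    trace = [100, 101, 102, 500, 501, 502]
    assert locality(trace) == 2
    assert read_efficiency(trace, 4) == 0.5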

In this talk, I will describe new SE schemes with tunable locality and linear space. The first scheme has optimal locality and outperforms existing approaches (that have a slightly different leakage profile) by up to 2.5 orders of magnitude in terms of read efficiency, for all practical database sizes. Another version of this construction with the same leakage as previous works can be tuned to have bounded locality, optimal read efficiency and up to 60x more efficient end-to-end search time. Our new SE schemes are also fast when run in memory, leading to search time savings of up to one order of magnitude compared to the most practical in-memory SE schemes. Finally, our construction can be tuned to achieve trade-offs between space, read efficiency, locality, parallelism and communication overhead.


Bryan Parno, CMU

Title: Making Verifiable Computation Useful

While verifiable computation (VC) schemes have lately seen tremendous improvement and even limited deployments, significant work remains to make VC useful for a broader class of practical applications. In this talk, we will discuss one particularly intriguing application for VC, namely improving the efficiency, privacy, and integrity of the X.509 public key infrastructure. To support applications in which a client outsources both storage and computation to an untrusted party, we will present multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. Finally, we will discuss some open challenges to making VC truly useful.


Raluca Popa, UC Berkeley

Title: Opaque: An Oblivious and Encrypted Distributed Analytics Platform

Many systems run rich data analytics on sensitive data in the cloud, but are prone to data breaches. While fully homomorphic encryption supports arbitrary functions, it remains far too slow for many systems today. A recent hardware enclave architecture promises data confidentiality and isolated execution of arbitrary computations, yet still suffers from leakage due to memory and network access patterns.

In this talk, I will describe Opaque, a distributed data analytics platform leveraging hardware enclaves. Even a compromised operating system sees only encrypted data. Opaque also protects against leakage from memory and network accesses outside of the enclave (a property called obliviousness). To accomplish this goal, Opaque introduces new distributed oblivious relational operators, as well as new query planning techniques.
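
As a very rough, single-machine illustration of what obliviousness means for a relational operator, the toy sketch below shows a filter that always scans every input row and always returns an output of a fixed, padded size; Opaque's actual distributed operators, which build on oblivious sorting, are considerably more involved.

    PAD_ROW = None   # placeholder row used for padding the output

    def oblivious_filter(rows, predicate, output_bound):
        """Toy filter whose input/output shape does not depend on the data: it
        always scans every input row and always returns exactly `output_bound`
        rows, padding with PAD_ROW if fewer rows match."""
        selected = []
        for row in rows:                       # every row is touched exactly once
            # (A real oblivious implementation would replace this data-dependent
            #  branch with oblivious primitives such as constant-time selects or
            #  oblivious sorting; this toy only illustrates the fixed-size I/O.)
            if predicate(row) and len(selected) < output_bound:
                selected.append(row)
        # Pad the output to a fixed size so its length reveals nothing.
        return selected + [PAD_ROW] * (output_bound - len(selected))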

Opaque is implemented on Spark SQL with few changes to the underlying system. It provides data encryption, authentication, and computation verification with a performance ranging from 52% faster to 3.3x slower than vanilla Spark SQL; obliviousness comes with a 1.6-46x overhead. At the same time, Opaque provides an improvement of three orders of magnitude over state-of-the-art oblivious protocols.

Joint work with W. Zheng, A. Dave, J. G. Beekman, J. E. Gonzalez, and I. Stoica.


Michael Riabzev, Technion

Title: Scalable Transparent ARguments-of-Knowledge

There are various theoretically efficient constructions of public-randomness (i.e., Arthur-Merlin type) zero-knowledge succinct arguments of knowledge in the random oracle model for verifying computations (also known as verifiable computation and computational integrity). These constructions could be used to solve many real-world problems; unfortunately, reducing these theoretical systems to broadly applicable implementations is a challenging task. This talk surveys our efforts to construct such systems with concrete (rather than asymptotic) efficiency. This required improving both theory and practice in a concerted effort, and it led to an efficient proof-of-concept implementation that is supported by new theoretical results and raises interesting theoretical and practical questions.

This is a joint work with Eli Ben-Sasson, Iddo Ben-Tov, and Yinon Horesh.


Ron Rothblum, MIT

Title: Doubly Efficient Interactive Proofs

Doubly efficient interactive proofs are protocols for convincing a verifier of the correctness of a computational statement, of the form f(x)=y, such that the (honest) prover runs in time proportional to time(f) and the verifier runs in time that is much smaller than time(f) (where time(f) is the time that it takes to just compute f by yourself).

In my talk I will survey the state of the art in doubly efficient interactive proofs, focusing on protocols that are unconditional (i.e., do not rely on any unproven assumptions) and guarantee correctness even against adversaries that are computationally unbounded. Most of the talk will be devoted to one such protocol from a recent work with Omer Reingold and Guy Rothblum.


Srinath Setty, Microsoft Research

Title: Implementations of Probabilistic Proofs: Survey and Next Steps

Probabilistic proofs and succinct arguments offer a powerful primitive for building many security applications. There has been a lot of activity in refining this theory to reduce its costs and to improve its expressiveness. This talk will survey those efforts and identify pressing problems.


abhi shelat, Northeastern University

Title: Survey of Sub-circuit-size Zero Knowledge

We compare several recent ZK constructions that achieve communication that is sub-linear in the size of the circuit that represents the ZK relation. We then present a new construction based on the classic BGGHKMR89 result that shows how interactive proofs can be compiled into interactive ZK arguments. Our approach does not rely on any PCP constructions, but instead relies on the proof systems used in the earlier "Verifiable ASICs" session, thus making our construction easily implementable. This is preliminary work with Riad Wahby, Ioanna Tzialla, Justin Thaler, and Mike Walfish.


Elaine Shi, Cornell

Title: Tutorial: Rethinking Large-Scale Consensus Through Blockchains

Fueled by the success of decentralized cryptocurrencies, financial institutions and other industry sectors alike are stoked about building new applications with "blockchains" as the enabler. In this tutorial, I will describe the challenges raised by large-scale consensus through the lens of blockchains. I will seek to answer questions of the following nature:
1) why 30 years of work on distributed systems fails to solve the emerging challenges of large-scale consensus;
2) some simple but (hopefully) insightful lower bounds that illustrate why permissionless consensus is fundamentally different from classical consensus;
3) why blockchain-style consensus is a theoretical breakthrough;
4) what lessons we can draw from Nakamoto's original blockchain protocol for the classical, permissioned setting;
5) what pain points we face for large-scale consensus today, and why many industry consortia and companies are all rushing to roll their own consensus implementations; and
6) what might be the "dream protocol" for large-scale consensus.


Eran Tromer, Tel Aviv / Columbia

Title: The State of the SNARK: Practical Applications of Noninteractive Arguments

Secure outsourcing of computation via zk-SNARK (zero-knowledge succinct non-interactive argument of knowledge) protocols has seen rapid progress in recent years, both in fundamental theory and in constructions. Some of these constructions are now used in realistic prototypes and even deployed in real-world systems. This talk will survey the state of the art in such practical applications, stressing the properties that drive practicality and the challenges at hand.


Muthu Venkitasubramaniam, University of Rochester

Title: Ligero: Lightweight Sublinear Arguments Without a Trusted Setup

We design a simple zero-knowledge argument protocol for NP from any collision-resistant hash function, whose communication complexity is proportional to the square-root of the verification circuit size. This is the first sublinear argument protocol for NP that simultaneously avoids heavy PCP machinery and the use of public-key cryptography. The protocol can be made non-interactive in the random oracle model and does not require a trusted setup. The protocol combines an optimized variant of the technique of Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC 2007) with efficient techniques for secure multiparty computation in the setting of an honest majority.

The simplicity of our protocol makes it attractive even for realistic circuit sizes. For 2^{-40} soundness error, the proof length for a verification circuit of size s is roughly 1771*sqrt(s) bits. In the case of verifying a single SHA-256 computation, the proof is roughly 90KB long, and both the prover and verifier run in about a second. This length is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., eprint 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016). Our efficiency advantages become even bigger in an amortized setting, where the amortized time complexity of the verifier can be improved dramatically.
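
For a sense of scale, the snippet below simply evaluates the proof-length formula quoted above for a few hypothetical circuit sizes; the constant 1771 is taken from the abstract, while the circuit sizes are made up for illustration.

    from math import sqrt

    def ligero_proof_kb(s):
        """Approximate proof length in kilobytes for a verification circuit of
        size s, using the 1771 * sqrt(s)-bit formula (2^-40 soundness error)."""
        return 1771 * sqrt(s) / 8 / 1024

    for s in (10_000, 100_000, 1_000_000):      # hypothetical circuit sizes
        print(f"s = {s:>9,}: ~{ligero_proof_kb(s):.0f} KB")
    # s =    10,000: ~22 KB
    # s =   100,000: ~68 KB
    # s = 1,000,000: ~216 KB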

Joint work with Carmit Hazay and Yuval Ishai.


Riad Wahby, Stanford University

Title: Full accounting for verifiable outsourcing

Systems for verifiable outsourcing incur costs for a prover, a verifier, and precomputation; outsourcing makes sense when the combination of these costs is cheaper than not outsourcing. Yet, when prior works impose quantitative thresholds to analyze whether outsourcing is justified, they generally ignore prover costs. Verifiable ASICs (VA)---in which the prover is a custom chip---is the other way around: its cost calculations ignore precomputation.

In recent work we have developed a new VA system, called Giraffe. We charge Giraffe for all three costs and identify regimes where outsourcing is worthwhile. Giraffe's base is an interactive proof geared to data parallel computation [Thaler, CRYPTO13]. Giraffe makes this protocol asymptotically optimal for the prover. Giraffe also develops a design template that produces hardware designs automatically for a wide range of parameters, introduces hardware primitives molded to the protocol's data flows, and incorporates program analyses that expand applicability. Giraffe wins even when outsourcing several tens of sub-computations, scales to 500x larger computations than prior work, and can profitably outsource parts of programs that are not worthwhile to outsource in full.

Joint work with Ye Ji, Andrew J. Blumberg, abhi shelat, Justin Thaler, Michael Walfish, and Thomas Wies


David Wu, Stanford University

Title: Lattice-Based SNARGs and Their Application to More Efficient Obfuscation

Succinct non-interactive arguments (SNARGs) enable verifying NP computations with substantially lower complexity than that required for classical NP verification. In this work, we first construct a lattice-based SNARG candidate with quasi-optimal succinctness (where the argument size is quasilinear in the security parameter). Further extension of our methods yields the first SNARG (from any assumption) that is quasi-optimal in terms of both prover overhead and succinctness. Central to our construction is a new notion of linear-only vector encryption which is a generalization of the notion of linear-only encryption introduced by Bitansky et al. (TCC 2013). Together with new information-theoretic approaches for building statistically-sound linear PCPs over small finite fields, we obtain the first quasi-optimal SNARGs.

We then show a surprising connection between our new lattice-based SNARGs and the concrete efficiency of program obfuscation. All existing obfuscation candidates currently rely on multilinear maps. Among the constructions that make black-box use of the multilinear map, obfuscating a circuit of even moderate depth (say, 100) requires a multilinear map with multilinearity degree in excess of 2^100. In this work, we show that an ideal obfuscation of both the decryption function in a fully homomorphic encryption scheme and a variant of the verification algorithm of our new lattice-based SNARG yields a general-purpose obfuscator for all circuits. Finally, we give some concrete estimates needed to obfuscate this "obfuscation-complete" primitive. We estimate that at 80-bits of security, a (black-box) multilinear map with ≈2^12 levels of multilinearity suffices. This is over 2^80 times more efficient than existing candidates.

Joint work with Dan Boneh, Yuval Ishai, and Amit Sahai


Yupeng Zhang, University of Maryland

Title: vSQL: Verifying Arbitrary SQL Queries over Dynamic Outsourced Databases

Cloud database systems such as Amazon RDS or Google Cloud SQL enable the outsourcing of a large database to a server that then responds to SQL queries. A natural problem here is to efficiently verify the correctness of responses returned by the (untrusted) server. In this paper we present vSQL, a novel cryptographic protocol for publicly verifiable SQL queries on dynamic databases. At a high level, our construction relies on two extensions of the CMT interactive-proof protocol [Cormode et al., 2012]: (i) supporting outsourced input via the use of a polynomial-delegation protocol with succinct proofs, and (ii) supporting auxiliary input (i.e., non-deterministic computation) efficiently. Compared to previous verifiable-computation systems based on interactive proofs, our construction has verification cost polylogarithmic in the auxiliary input (which for SQL queries can be as large as the database) rather than linear.
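
The CMT protocol that vSQL extends builds on the sum-check protocol as its core subroutine. The toy sketch below runs sum-check for a multilinear polynomial given by its table of evaluations over the Boolean hypercube; it merges an honest prover and verifier and is purely illustrative, not vSQL's implementation.

    import random

    P = 2**31 - 1   # a small prime field; a real system would use a much larger field

    def sumcheck(table):
        """Run sum-check (honest prover and verifier merged) for the claim
        H = sum over {0,1}^n of the multilinear polynomial whose evaluations
        on the Boolean hypercube are given by `table` (length 2**n)."""
        evals = [v % P for v in table]
        claim = sum(evals) % P                     # the claimed sum H
        while len(evals) > 1:
            half = len(evals) // 2
            g0 = sum(evals[:half]) % P             # round polynomial at 0
            g1 = sum(evals[half:]) % P             # round polynomial at 1
            if (g0 + g1) % P != claim:             # verifier's consistency check
                return False
            r = random.randrange(P)                # verifier's random challenge
            claim = (g0 + r * (g1 - g0)) % P       # new claim: g_i(r)
            evals = [(a * (1 - r) + b * r) % P     # fold the table at r
                     for a, b in zip(evals[:half], evals[half:])]
        return claim == evals[0]                   # final oracle check

    assert sumcheck([random.randrange(P) for _ in range(2**10)])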

In order to evaluate the performance and expressiveness of our scheme, we tested it on SQL queries based on the TPC-H benchmark on a database with 6 x 10^6 rows and 13 columns. The server overhead in our scheme (which is typically the main bottleneck) is up to 120x lower than previous approaches based on succinct arguments of knowledge (SNARKs), and moreover we avoid the need for query-dependent pre-processing, which is required by optimized SNARK-based schemes. In our construction, the server/client time and the communication cost are comparable to, and sometimes smaller than, those of existing customized solutions which only support specific queries.

