DIMACS Workshop on Cryptographic Protocols in Complex Environments

May 15 - 17, 2002
DIMACS Center, Rutgers University, Piscataway, NJ

Organizers:
Ran Canetti, IBM Watson Research Center, canetti@watson.ibm.com
Ueli Maurer, ETH Zurich, maurer@inf.ethz.ch
Rafail Ostrovsky, Telcordia Technologies, rafail@research.telcordia.com
Presented under the auspices of the Special Focus on Next Generation Networks Technologies and Applications.

Abstracts:




Boaz Barak, Weizmann Institute of Science

Title: Coin-Tossing With a Man in the Middle or Realizing the Shared Random String Model

We consider the model where an adversary is controlling the communication channel between two parties. This model has been extensively studied, and there are several cryptographic protocols that have been shown secure in this model. However, many of the previous protocols require that the two parties have access to a public string that was chosen at random by some trusted dealer. This is called the shared random string model.

In this work we present a coin-tossing protocol that allows us to transform many protocols that are secure in the shared random string model into protocols that are secure in the standard model (where we assume no trusted dealer). Using this transformation we give a positive theoretical (although not practical) answer to several open problems. For example, we present the first constant-round non-malleable zero-knowledge protocol and the first constant-round non-malleable commitment scheme in the standard man-in-the-middle model (i.e., without a shared string).

Some of the techniques we use (e.g., the use of exponential-time constructible probability ensembles) have not been used before in the design of cryptographic protocols. Thus, there is hope that these techniques could be used to resolve other open questions.

Joint work with Alon Rosen.


Claude Crépeau, McGill University

Title: Secure Multi-party Quantum Computation

Secure multi-party computing, also called secure function evaluation, has been extensively studied in classical cryptography. We consider the extension of this task to computation with quantum inputs and circuits. Our protocols are information-theoretically secure, i.e. no assumptions are made on the computational power of the adversary. For the weaker task of verifiable quantum secret sharing, we give a protocol which tolerates any $t < n/4$ cheating parties (out of $n$). This is shown to be optimal. We use this new tool to show how to perform any multi-party quantum computation as long as the number of dishonest players is less than $n/6$.


Stefan Dziembowski, ETH

Title: Tight Security Proofs for the Bounded-Storage Model

The talk is based on the STOC 2002 paper of the same title.

In the bounded-storage model for information-theoretically secure encryption and key-agreement one can prove the security of a cipher based on the sole assumption that the adversary's storage capacity is bounded, say by $s$ bits, even if her computational power is unlimited. Assume that a random $t$-bit string $R$ is either publicly available (e.g. the signal of a deep space radio source) or broadcast by one of the legitimate parties. If $s < t$, the legitimate parties, who share a short secret key $K$, can use $K$ to select and combine bits of $R$ into a much longer derived key $X$ (a one-time pad) about which the adversary obtains essentially no information.

All previous results in the bounded-storage model were partial or far from optimal, for one of the following reasons: either the secret key $K$ had in fact to be longer than the derived one-time pad, or $t$ had to be extremely large ($t > ns$), or the adversary was assumed to be able to store only actual bits of $R$ rather than arbitrary $s$ bits of information about $R$, or the adversary could obtain a non-negligible amount of information about $X$.

In the paper which the talk is based on, we prove the first non-restricted security result in the bounded-storage model, exploiting the full potential of the model: $K$ is short, $X$ is very long (e.g. gigabytes), $t$ needs to be only moderately larger than $s$, and the security proof is optimally strong. In fact, we prove that $s/t$ can be arbitrarily close to $1$ and hence the storage bound is essentially optimal.
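
To make the setting concrete, here is a minimal Python sketch of a sample-and-XOR key derivation in the spirit of bounded-storage schemes. The function derive_key, the selection rule, and the parameter k are illustrative assumptions, not the construction analyzed in the paper.

    import random

    def derive_key(R, K, n, k=16):
        # R: the long public random bit string (list of 0/1 ints), |R| = t
        # K: the short shared secret key, used only to select positions in R
        # n: length of the derived one-time pad X
        # k: number of positions of R XORed per output bit (toy parameter)
        prg = random.Random(K)              # deterministic expansion of K
        X = []
        for _ in range(n):
            positions = prg.sample(range(len(R)), k)
            bit = 0
            for p in positions:
                bit ^= R[p]
            X.append(bit)
        return X

    # Both parties run derive_key on the same R with the same K and obtain
    # the same X; an adversary who stored only s < t bits of information
    # about R is, in the real scheme, provably almost ignorant of X.
    R = [random.randrange(2) for _ in range(10**5)]
    assert derive_key(R, K=1234, n=128) == derive_key(R, K=1234, n=128)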

In the talk I will present the model and sketch the main proof ideas.


Stuart Haber, InterTrust

Title: SHIP: Secure Human Input Protocol

We introduce protocols for secure communication between a human user and a secure application, defeating attacks by an eavesdropping automated adversary, and requiring no computational aids for the user. In particular, our protocols can serve a user who needs to send a password or PIN securely via an insecure channel, for example a suspicious keyboard attached to a virus-ridden PC in a seedy cyber-cafe.

Our protocols make novel use of any one of several techniques that attempt to distinguish between a human user and a computer program. Researchers have recently studied these techniques, variously known as reverse Turing tests (RTTs), mandatory human participation protocols, and CAPTCHAs, and they are currently used in commercial deployments: Yahoo!, AltaVista, and PayPal use them to ensure that only human users sign up for certain of their services. A typical implementation of such a test presents the user with a distorted text image, presumably one that cannot be parsed by current OCR programs, and requires the subject of the test to type in the characters in the image.

In this paper we propose precise definitions of the difficulty of an RTT and of the security of our communication protocols, and then give concrete-security reductions from the security of our protocols to the difficulty of the RTTs that we use. Our reductions are efficient, and they demonstrate that our password protocols are precisely as secure (against the attacks we consider) as the RTTs are difficult to fool. We believe that the use of reductions from one computational problem to another may be helpful in further study of the part that human users play in complex systems using cryptographic mechanisms.

This is joint work with Benny Pinkas (InterTrust) and Bob Tarjan (Princeton).


Ari Juels, RSA Research

Title: Coercion-free Voting

I will provide a brief overview of the literature on "receipt-free" voting. Our feeling is that current working definitions in this area can benefit from both extension and formalization, and indeed that some forms of voter coercion - particularly those made possible by online voting - have not received adequate attention. I'll discuss our effort to elaborate a solid attack model descriptive of real-world voting scenarios. I'll present a protocol that aims, within this model, to free the electoral process from the broadest possible spectrum of threats, including many forms of duress and vote-buying.

Joint work with Anand Desai, Markus Jakobsson, and C. Andy Neff.


Serge Fehr, Aarhus University

Title: Optimal Black-Box Secret Sharing over Arbitrary Abelian Groups

A black-box secret sharing scheme for the threshold access structure $T_{t,n}$ is one which works over any finite Abelian group $G$. Briefly, such a scheme differs from an ordinary linear secret sharing scheme (over, say, a given finite field) in that the distribution matrix and the reconstruction vectors are defined over $\Z$ and are designed independently of the group $G$ from which the secret and the shares may be sampled. This means that privacy and completeness are guaranteed regardless of which group $G$ is chosen. We define the black-box secret sharing problem as the problem of devising, for an arbitrary given $T_{t,n}$, a scheme with minimal expansion factor, i.e., where the length of the full vector of shares divided by the number of players $n$ is minimal.
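
For intuition about what ``black-box'' means here, the following sketch shows the trivial n-out-of-n additive scheme, which touches the group only through abstract operations (the interface g_add, g_neg, g_random is illustrative naming, not from the talk). It achieves expansion factor 1 but only realizes the access structure $T_{n-1,n}$; the hard problem addressed in the talk is arbitrary thresholds with small expansion.

    import secrets

    def share_additive(secret, n, g_add, g_neg, g_random):
        # n-out-of-n additive sharing over an arbitrary finite abelian
        # group, accessed only via the black-box operations supplied.
        shares = [g_random() for _ in range(n - 1)]
        total = shares[0]
        for sh in shares[1:]:
            total = g_add(total, sh)
        shares.append(g_add(secret, g_neg(total)))  # last share fixes the sum
        return shares

    def reconstruct_additive(shares, g_add):
        total = shares[0]
        for sh in shares[1:]:
            total = g_add(total, sh)
        return total

    # Example instantiation with the group Z_m; note the scheme never
    # uses m beyond these three operations:
    m = 101
    shares = share_additive(42, 5,
                            lambda a, b: (a + b) % m,
                            lambda a: (-a) % m,
                            lambda: secrets.randbelow(m))
    assert reconstruct_additive(shares, lambda a, b: (a + b) % m) == 42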

The relevance of such schemes to threshold cryptography has been recognized since the late 80s, for example in distributed cryptosystems based on groups with secret order. Since then, efforts have been made to provide a satisfactory solution to the black-box secret sharing problem. In 1994 Desmedt and Frankel proposed an elegant approach based in part on polynomial interpolation over cyclotomic number fields. For arbitrary given $T_{t,n}$ with $0 < t < n-1$, their scheme has expansion factor $O(n)$.

Using low degree integral extensions of $\Z$ over which there exists a pair of sufficiently large Vandermonde matrices with co-prime determinants, we construct, for arbitrary given $T_{t,n}$ with $0 < t < n-1$, a black-box secret sharing scheme with expansion factor $O(\log n)$, which we show to be minimal.

This is joint work with Ronald Cramer (Aarhus).


Joan Feigenbaum, Yale

Title: Agents' privacy in distributed algorithmic mechanisms

In traditional theoretical computer science (TCS), computational agents are typically assumed either to be obedient (i.e., to follow the prescribed algorithm) or to be adversaries who "play against" each other. On the other hand, the strategic agents in economic theory are neither obedient nor adversarial. Although one cannot assume that they will follow the prescribed algorithm, one can assume that they will respond to incentives. Thus, the economics literature traditionally stressed incentives and downplayed computational complexity, and the TCS literature traditionally did the opposite. The emergence of the Internet as a standard platform for distributed computation has radically changed this state of affairs: Ownership, operation, and use by many self-interested, independent parties give the Internet the characteristics of an economy as well as those of a computer.

To date, the TCS community's work on "algorithmic mechanism design" has focused on "truthful" mechanisms. The overall approach, which is consistent with the approach in the economics literature, is to design mechanisms that are "incentive-compatible" in the technical sense that strategic agents cannot improve their welfare by lying about their private inputs. The well known Revelation Principle ensures that, if there is a mechanism that achieves a given economic design goal, then there is a truthful mechanism that achieves the same goal. The premise of this overall approach is that agents will voluntarily reveal their private information if it can be proven that lying does them no good in the situation addressed by this particular mechanism-design exercise.
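
As a concrete illustration of incentive compatibility (a standard textbook example, not taken from the talk), the Vickrey second-price auction makes truthful bidding a dominant strategy:

    def second_price_auction(bids):
        # Vickrey auction: the highest bidder wins but pays only the
        # second-highest bid. Since the payment does not depend on the
        # winner's own bid, reporting one's true value is a dominant
        # strategy: the textbook example of a truthful mechanism.
        ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
        winner = ranked[0]
        price = bids[ranked[1]]
        return winner, price

    # With true values (10, 7, 3), truthful bidding wins agent 0 the
    # item at price 7; over- or under-bidding can never increase utility.
    winner, price = second_price_auction([10, 7, 3])
    assert (winner, price) == (0, 7)

Note that even this simple mechanism asks each agent to reveal its private valuation outright, which is exactly the premise questioned below.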

We question this premise. In this talk, we will first explain why direct-revelation mechanisms may be unacceptable. Next, we will examine in detail the extent to which the theory of secure, multiparty function evaluation, as it has evolved in the cryptographic research community, can be applied in the domain of algorithmic mechanism design.


Juan Garay, Bell-Labs Lucent

Title: Reusable Time-Lines

While probably the most important pillar of cryptography is the believed existence of hard problems, the notion of {\em moderately hard} problems also bears promise. A moderately hard problem is one which is not computationally infeasible to solve, but also not easy, meaning that a solution can be obtained after a certain - verifiable - amount of work. One such candidate problem is modular exponentiation. Indeed, modular exponentiation has been the basis for recent work in timed-release cryptography, including ``time-lock'' puzzles by Rivest, Shamir and Wagner, and timed commitments by Boneh and Naor.

In this work we build on the Boneh-Naor structure for timed commitments, which we call a ``time-line.'' In a nutshell, a time-line is a vector of elements, where each element is obtained by iterated squaring of the previous element. We first introduce the notion of {\em derived} time-lines, which are time-lines that are obtained by ``shifting'' of a master time-line. While expensive proofs are needed to verify the correctness of a master time-line, a much simpler and less expensive verification of correct shifting can be used. As a result, existing applications using time-lines (including timed commitments, electronic contract signing, and concurrent zero-knowledge proofs) benefit from the efficiency gain.
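
A minimal Python sketch of the idea, assuming a toy RSA modulus and treating a ``shift'' as exponentiation of the whole line by a secret value; the paper's exact derivation and the accompanying proofs are more involved.

    # Element i of a time-line with base g is g^(2^i) mod N, so moving
    # one step forward costs one modular squaring for anyone who does
    # not know the factorization of N.

    def time_line(g, length, N):
        line = [g % N]
        for _ in range(length - 1):
            line.append(line[-1] * line[-1] % N)   # iterated squaring
        return line

    def derive_time_line(master, r, N):
        # Raising every element of the master line to a secret power r
        # yields a valid time-line with base g^r, since
        # (g^(2^i))^r = (g^r)^(2^i). Proving this shift correct is much
        # cheaper than proving a fresh master line correct from scratch.
        return [pow(x, r, N) for x in master]

    N = 3233          # toy modulus (61 * 53); a real one is ~2048 bits
    master = time_line(5, 8, N)
    derived = derive_time_line(master, 7, N)
    assert derived == time_line(pow(5, 7, N), 8, N)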

Additionally, our derived time-line construction allows us to solve a problem not addressed previously: the timed release of {\em standard} digital signatures (RSA, DSS, Schnorr). Such signatures, once released, cannot be distinguished from signatures of the same type obtained without a timed release, making it transparent to an observer of the end result. We demonstrate how to achieve a transparent timed release of RSA signatures.

This is joint work with Markus Jakobsson.


Yuval Ishai, Princeton and Technion

Title: Round-Efficient Secure Computation via Randomizing Polynomials

Most general-purpose secure computation protocols can evaluate low-degree polynomials in a small number of rounds. This motivates the following question: can the secure computation of complex functions be reduced to that of constant-degree polynomials? We show how to obtain such reductions using randomizing polynomials -- a natural relaxation of the standard representation of functions by polynomials. In particular, we will describe recent constructions of {\em perfect} randomizing polynomials, allowing perfectly secure computation in a worst-case constant (rather than expected-constant) number of rounds.
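
A classic small example of this relaxation (an illustration only, not one of the constructions from the talk) is a degree-2, perfectly private, statistically correct encoding of the OR function: output k random GF(2) inner products with the input, so the output distribution depends only on OR(x).

    import secrets

    def encode_or(x, k=40):
        # Each output bit is <r_j, x> for an independent random vector
        # r_j; each such bit is a degree-2 polynomial in (x, r). If
        # x = 0 the output is all zeros; otherwise it is uniform, so
        # the output reveals nothing about x beyond OR(x).
        out = []
        for _ in range(k):
            r = [secrets.randbelow(2) for _ in x]
            out.append(sum(ri * xi for ri, xi in zip(r, x)) % 2)
        return out

    def decode_or(y):
        # Errs only when x != 0 yet all k inner products vanish,
        # which happens with probability 2^-k.
        return int(any(y))

    assert decode_or(encode_or([0, 0, 0])) == 0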

The talk is based on joint works with Eyal Kushilevitz (FOCS '00, ICALP '02).


Markus Jakobsson, RSA Research

Title: Fractal Hash Sequence Representation and Traversal

We study methods for obtaining efficient authentication for lightweight computational devices, and propose a method that substantially improves on known techniques. Namely, we introduce a novel amortization technique for computation of consecutive preimages of hash chains from a common seed. While all previously known techniques have a memory-times-computational complexity of O(n) per chain element, the complexity of our technique can be upper bounded at O(log^2 n). Our technique uses a logarithmic number of ``helper values'' associated with points on the hash chain. The locations of these helper values are modified over time. Like fractals, where images can be found within images, the helper values move within intervals and sub-intervals according to a highly symmetric pattern.
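
The sketch below illustrates only the underlying memory/computation trade-off, using static checkpoints every ``stride'' positions; the actual fractal algorithm instead relocates its O(log n) helper values dynamically to achieve O(log n) amortized work per output, which this simplified version does not attempt.

    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def build_chain(seed: bytes, n: int):
        # chain[0] = seed, chain[i] = H(chain[i-1]); values are released
        # in reverse order chain[n-1], chain[n-2], ... for authentication.
        chain = [seed]
        for _ in range(n - 1):
            chain.append(H(chain[-1]))
        return chain

    class CheckpointTraverser:
        # Static checkpoints every 'stride' positions: memory O(n/stride),
        # at most 'stride' recomputed hashes per released value.
        def __init__(self, seed: bytes, n: int, stride: int):
            self.stride, self.n, self.next_index = stride, n, n - 1
            self.checkpoints = {}
            v = seed
            for i in range(n):
                if i % stride == 0:
                    self.checkpoints[i] = v
                v = H(v)

        def release_next(self) -> bytes:
            i = self.next_index
            base = (i // self.stride) * self.stride
            v = self.checkpoints[base]
            for _ in range(i - base):
                v = H(v)
            self.next_index -= 1
            return v

    t = CheckpointTraverser(b"seed", 16, 4)
    assert t.release_next() == build_chain(b"seed", 16)[15]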

Papers "Fractal Hash Sequence Representation and Traversal" and "Almost Optimal Hash Sequence Traversal" are available at http://www.markus-jakobsson.com.


Jonathan Katz, Columbia University

Title: Efficient and Non-Malleable Proofs of Plaintext Knowledge and Applications

We describe efficient protocols for non-malleable (interactive) proofs of plaintext knowledge for the RSA, Rabin, Paillier, and El-Gamal encryption schemes whose security can be proven in the standard model. We also discuss applications of these protocols to: (1) chosen-ciphertext secure, interactive encryption; (2) password-based authentication and key exchange in the public-key model; and (3) deniable authentication. In each case, we show very practical solutions (where we ensure that our constructions are secure even when run in an asynchronous, concurrent environment) based on, e.g., the RSA or factoring assumptions.

Our techniques provide a general methodology for constructing efficient non-malleable proofs of knowledge when shared parameters are available (for our intended applications, these parameters are simply included as part of the already-required public keys). Thus, non-malleable proofs of knowledge are easy to achieve ``in practice''.


Angelos D. Keromytis, Columbia University

Title: Key Management for IPsec: Background and Challenges

IPsec is a protocol suite for providing security at the network layer. Because of the many different scenarios in which it may be employed, considerable flexibility and security requirements are placed on its key negotiation and management component.

IKE (RFC 2409), the current standard, has a number of deficiencies. The three most important are the high number of protocol rounds, its vulnerability to denial-of-service attacks, and the complexity of its specification. While it may be possible to ``patch'' the protocol to fix some of these problems, we would prefer to replace IKE with something better. With that in mind, we set out to engineer a new key exchange protocol specifically for Internet security applications. With a view toward its possible role as a successor to IKE, we call our new protocol ``JFK,'' which stands for ``Just Fast Keying.''
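
One engineering idea in this space is resistance to state-exhaustion denial of service: the responder stays stateless in the first round by returning a MAC ``cookie'' over the values it would otherwise have to store. The sketch below uses illustrative names and a generic HMAC; consult the JFK specification for the actual message formats.

    import hashlib, hmac, os

    SERVER_KEY = os.urandom(32)   # periodically refreshed responder secret

    def cookie(initiator_nonce: bytes, initiator_dh: bytes, src_ip: str) -> bytes:
        # Instead of storing per-connection state after message 1, the
        # responder returns a MAC over the values it would have to
        # remember. Only an initiator that actually received message 2
        # (i.e., did not spoof its address) can echo a valid cookie in
        # message 3, and only then is per-connection state created.
        msg = initiator_nonce + initiator_dh + src_ip.encode()
        return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

    def verify_cookie(c, initiator_nonce, initiator_dh, src_ip):
        return hmac.compare_digest(c, cookie(initiator_nonce, initiator_dh, src_ip))

    n, g = os.urandom(16), os.urandom(32)   # toy stand-ins for nonce and g^x
    c = cookie(n, g, "192.0.2.7")
    assert verify_cookie(c, n, g, "192.0.2.7")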

In this talk, I will provide a quick survey of the IPsec protocols, with particular emphasis on the proposed and used key management protocols. I will then discuss problems with the current standard, and the proposed JFK and LBJ protocols.

Joint work with Bill Aiello, Steve Bellovin, Matt Blaze, Ran Canetti, John Ioannidis, and Omer Reingold.


Philip MacKenzie, Bell-Labs Lucent

Title: On Designing Efficient Cryptographic Protocols for the Internet

Cryptographic protocols designed for the Internet must allow for concurrent (and unsynchronized) executions. However, many cryptographic protocols that have been developed can only be proven secure assuming synchronized and/or sequential execution. Those that can be proven secure for concurrent executions are often less efficient, for instance, requiring a non-constant number of rounds. We present two efficient constant-round protocols that can be proven secure for concurrent executions, namely (1) two-party DSA signature generation and (2) threshold password-authenticated key exchange. The key to the design of both of these protocols is the use of a verifiable encryption technique to avoid the need for zero-knowledge proofs of knowledge. We discuss this technique and its applicability.

Protocol (1) was developed jointly with Mike Reiter, and protocol (2) was developed jointly with Tom Shrimpton and Markus Jakobsson.


Tal Malkin, AT&T

Title: Secure Multiparty Computation of Approximations

Approximation algorithms are often useful in obtaining efficient solutions to problems where no efficient exact computation is known, for example when the input data is extremely large, or when the problem is NP-hard. Secure multiparty computation allows mutually distrustful parties to compute a function of their inputs without revealing unnecessary information. Both secure multiparty computation and approximation algorithms are major research fields, and a rich body of work has emerged in the last decades in these two areas. However, some settings require the benefits offered by both concepts. For example, two companies may want to compute joint statistics on their data, which is typically huge, without revealing any additional information. Trying to combine techniques of secure multiparty computation with approximation algorithms is complex, and if done naively the resulting algorithm may be neither efficient, nor secure. Here, we initiate the study of secure multiparty computation of approximations, realizing most of the cost savings available by using an approximation algorithm, without losing the privacy of a secure computation.

We make three contributions: First, we introduce the concept and give formal definitions of secure multiparty approximate computation. Second, we present efficient private approximation protocols for distance functions, which are useful for computations on massive data sets. In particular, we give an efficient, sublinear-communication, private approximation for the Hamming distance, and an efficient, polylogarithmic-communication solution for the $L^{2}$ distance in a relaxed model. Finally, we give an efficient private approximation of the permanent and other related \#P-hard problems.
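
For intuition about sublinear-communication approximation (ignoring privacy entirely), the standard non-private baseline samples coordinates and rescales; the protocols in the talk achieve comparable accuracy while also hiding everything about the inputs beyond the estimate itself. The function below is an illustrative sketch, not a protocol from the paper.

    import secrets

    def sample_hamming_estimate(x, y, k=1000):
        # The parties agree on k random coordinates, exchange only those
        # bits (O(k) communication, sublinear in n), and scale the
        # sampled disagreement rate up to the full length n.
        n = len(x)
        idx = [secrets.randbelow(n) for _ in range(k)]
        disagreements = sum(x[i] != y[i] for i in idx)
        return n * disagreements / k

    x = [1, 0] * 500
    y = [1, 1] * 500
    print(sample_hamming_estimate(x, y))   # close to the true distance 500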

This talk is based on joint work with J. Feigenbaum, Y. Ishai, K. Nissim, M. Strauss, and R. Wright.


Ueli Maurer, ETH

Title: A very simple secure multi-party computation protocol

We present a very simple approach to information-theoretically secure multi-party computation. Unlike all previous approaches, it is based on essentially no mathematical structure (like bivariate polynomials or zero-knowledge proofs), and it naturally yields protocols secure against general adversary structures involving both passive and active corruption. While the protocol is very efficient for a small number of players and possibly the preferred protocol from a practical viewpoint, it is typically very inefficient for a large number of players. Perhaps its main applications are for proving theoretical results and for didactic purposes.
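
For flavor, here is a sketch of the kind of replicated secret sharing such a protocol can be built on, shown for the threshold case over a toy group $\Z_M$ (the names and parameters are illustrative): every coalition of t players misses at least one additive summand, and addition of two shared secrets is purely local. The exponential number of summands for large n is exactly the inefficiency mentioned above.

    import itertools, secrets

    M = 2**31 - 1   # toy message space Z_M

    def replicated_share(secret, n, t):
        # Write the secret as a sum of one summand per t-subset of
        # players, and give each summand to every player OUTSIDE that
        # subset. Any t players miss at least one summand (privacy);
        # any t+1 players jointly hold all of them (reconstruction).
        subsets = list(itertools.combinations(range(n), t))
        summands = [secrets.randbelow(M) for _ in subsets[:-1]]
        summands.append((secret - sum(summands)) % M)
        shares = {p: {} for p in range(n)}
        for T, s in zip(subsets, summands):
            for p in range(n):
                if p not in T:
                    shares[p][T] = s
        return shares

    def reconstruct(shares):
        seen = {}
        for holding in shares.values():
            seen.update(holding)    # collect one copy of each summand
        return sum(seen.values()) % M

    sh = replicated_share(1234, n=4, t=1)
    assert reconstruct(sh) == 1234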


Benny Pinkas, InterTrust

Title: Securing Internet Sealed-Bid Auctions

The talk will give a survey of existing technologies for running Internet auctions. In addition to the survey we will describe the [NPS] architecture for executing protocols for sealed bid auctions and, more generally, mechanism design. This system makes it possible to preserve the privacy of the inputs of the participants (so that no nonessential information about them is divulged, even a posteriori) while maintaining communication and computational efficiency. This goal is achieved by adding another party - the auction issuer - that generates the programs for computing the auctions but does not take an active part in the protocol. The auction issuer is not a trusted party, but is assumed not to collude with the auctioneer. In the case of auctions, barring collusion between the auctioneer and the auction issuer, neither party gains any information about the bids, even after the auction is over. Moreover, bidders can verify that the auction was performed correctly. The bidders in the protocol are only required to send their bid to the auctioneer, and the overall communication and computational efficiency are very reasonable.


Tal Rabin, IBM Watson Research Center

Title: On the Composition of Authenticated Byzantine Agreement

Byzantine Agreement was introduced as the algorithmic implementation of a broadcast channel in a setting where only point-to-point communication is available.

Lamport et al. showed that in order to reach agreement among n parties, out of which t might be faulty, it must hold that t < n/3. They further showed that augmenting the network with a public-key infrastructure results in protocols for any t < n. This augmented problem is called ``authenticated Byzantine Agreement.''

The security of authenticated BA was proven for a stand-alone invocation of the protocol. Yet clearly, it should be proven for multiple invocations, as the protocol is meant to substitute for a broadcast channel.

We present the following surprising impossibility results:

1. Authenticated Byzantine Agreement cannot be composed in parallel (even twice) if the number of faulty parties is greater than n/3.

2. For the case of sequential composition of deterministic protocols, it is impossible to compose protocols that have a bounded number of rounds.

In contrast, we present some positive results by giving randomized protocols that compose sequentially. We exhibit two such protocols for different parameters.

This is joint work with Y. Lindell and A. Lysyanskaya.


Omer Reingold, AT&T

Title: Exploring the Worlds Between Minicrypt and Cryptomania

We study the relationships among some of the most fundamental primitives and protocols in cryptography: public-key encryption (i.e. trapdoor predicates), oblivious transfer (which is equivalent to general secure multi-party computation), key agreement, trapdoor functions, and trapdoor permutations. In spite of the importance of these primitives, little was known about the relationships among them. We resolve all the missing relationships, with respect to black-box reductions. In particular, we show that public-key encryption and oblivious transfer are incomparable under black-box reductions. Furthermore, relative to black-box reductions, neither oblivious transfer nor 1:1 trapdoor functions imply trapdoor permutations. Finally, we show that there is no black-box reduction from trapdoor functions to public-key encryption. Our negative results follow the oracle separation paradigm introduced by Impagliazzo and Rudich, as well as a new related model of separation.

One way to interpret our results is as an indication that obtaining a simple and clean view of cryptography is even further beyond our reach than might have been previously suspected, as the picture of cryptography is not likely to simplify in some of its most central concepts. Borrowing from the terminology of Impagliazzo, our results demonstrate that there are many possible worlds that lie between Minicrypt (the world that has only one-way functions and primitives implied by them) and Cryptomania (the world that contains trapdoor permutations and the rich cryptographic possibilities that they offer).

This talk is mostly based on joint work with Y. Gertner, S. Kannan, T. Malkin, and M. Viswanathan (FOCS 2000), and with Y. Gertner and T. Malkin (FOCS 2001).


Amit Sahai, Princeton

Title: Proof Systems and Chosen-Ciphertext Security

In this talk, we survey recent developments in the theory of public-key encryption schemes secure against adaptive chosen-ciphertext attack. Many of these results rely on new types of proof systems. In particular, we introduce a new type of proof system known as a simulation-sound zero-knowledge proof, and show how its non-interactive form can be used to achieve adaptive chosen-ciphertext security in a simple manner. We also discuss recent constructions of simulation-sound NIZK proofs due to Cramer and Shoup, called Universal Hash Proofs, and the quite efficient cryptosystems that result.


Luca Trevisan, Berkeley

Title: On the efficiency of two generic cryptographic constructions

We consider the efficiency of black-box constructions of pseudorandom generators (PRG) and of universal one-way hash functions (UOWHF) starting from one-way permutations (OWP).

We show that every black-box construction of a PRG stretching n bits into n+k bits using an OWP of security S must evaluate the permutation at least Omega(k/log S) times, a bound matched by the Blum-Micali-Yao construction. We also show that every black-box construction of a UOWHF mapping n bits into n-k bits using an OWP of security S must evaluate the permutation at least Omega(k/log S) times, a bound matched by the Naor-Yung construction.
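
For reference, here is a sketch of the stretch pattern of the Blum-Micali-Yao construction mentioned above: iterate the permutation and output one hardcore bit per call, plus the final state. The toy permutation and the Goldreich-Levin style hardcore predicate below are stand-ins and are of course not one-way; they only exhibit the structure whose number of calls the lower bound addresses (one can extract up to about log S hardcore bits per call).

    import secrets

    def blum_micali(pi, hardcore, x, k):
        # From an n-bit seed x, output n + k bits: one hardcore bit per
        # invocation of the permutation pi, followed by the final n-bit
        # state (pseudorandom since pi is a permutation). Exactly k
        # calls to pi are made.
        bits = []
        for _ in range(k):
            bits.append(hardcore(x))
            x = pi(x)
        return bits + [x]

    # Toy instantiation over Z_256: an affine permutation and an inner
    # product with a fixed public vector r (NOT one-way; illustration only).
    r = secrets.randbelow(255) + 1
    pi = lambda x: (x * 167 + 13) % 256          # a bijection on Z_256
    hardcore = lambda x: bin(x & r).count("1") % 2
    print(blum_micali(pi, hardcore, secrets.randbelow(256), k=16))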

We also discuss various possible formalizations of the notions of "black-box construction" and "black-box reduction" and discuss which known negative results hold with respect to the various formalizations.

(The lower bounds for PRGs and UOWHF are joint work with Rosario Gennaro. The classification of black-box models is due to Omer Reingold and Salil Vadhan.)


Rebecca Wright, DIMACS

Title: Tight Bounds for Shared Memory Systems Accessed by Byzantine Processes

We provide efficient constructions and strong lower bounds for shared memory systems accessed by n processes, up to t of which may exhibit Byzantine faults, in a model previously explored by Malkhi et al. (DISC 2000). We show that sticky bits are universal in the Byzantine failure model provided n is at least 3t+1, an improvement over the previous result requiring n to be at least (2t+1)(t+1). Our result follows from a new strong consensus construction that uses sticky bits and tolerates t Byzantine failures among n processes for any n > 3t. (This is the best possible bound on n for strong consensus.) We also present tight bounds on the efficiency of implementations of strong consensus objects from sticky bits and similar primitive objects.

This is joint work with Michael Merritt, Omer Reingold, and Gadi Taubenfeld.
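
For readers unfamiliar with the primitive, a sticky bit is a write-once shared object; the toy sketch below (fault-free setting, illustrative only) shows why consensus is immediate without faults, whereas tolerating t Byzantine processes with n at least 3t+1 requires the strong consensus construction of the paper.

    class StickyBit:
        # A write-once shared object: the first proposed value sticks;
        # later writes have no effect. Reads return the stuck value
        # (or None before any write).
        def __init__(self):
            self.value = None

        def write(self, v):
            if self.value is None:
                self.value = v
            return self.value

        def read(self):
            return self.value

    # Trivial consensus among fault-free processes: all write their
    # proposals, all read, all decide the stuck value.
    obj = StickyBit()
    decisions = [obj.write(v) for v in (1, 0, 1)]
    assert decisions == [1, 1, 1]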