DIMACS Workshop on Intrinsic Complexity of Computation
April 10 - 13, 2000
DIMACS Center, CoRE Building, Busch Campus, Rutgers University, Piscataway, NJ
- Paul Beame, University of Washington, beame@cs.washington.edu
- Steven Rudich, Carnegie Mellon University, rudich+@cs.cmu.edu
- Andrew Yao, Princeton University, yao@cs.princeton.edu
Presented under the auspices of the DIMACS Special Year on Computational Intractability.
ABSTRACTS
1.
Peter Bro Miltersen
Upper and lower bounds for locally decodable source codes.
A locally decodable source code encodes a bitstring as a
short codeword (of length as close to the first order
entropy of the original string as possible), so that any
individual bit of the original string can be retrieved
with high confidence by looking at a single bit (chosen
according to a randomized strategy) of the codeword. We
give upper and lower bounds for the lengths of locally
decodable source codes. The lower bounds are based on lower
bounds for k-cover free set systems.
Joint work with H. Buhrman, J. Radhakrishnan, and S. Venkatesh.
2.
Richard Beigel
Lower Bounds for Approximations by Low Degree Polynomials Over Z_m
Smolensky (STOC 87) used a dimension argument to prove that a
degree-o(sqrt{n}) polynomial over Z_p (p an odd prime)
must differ from the parity function on at least a constant fraction
of all points in the Boolean n-cube. He later (FOCS 93) used
Hilbert functions to improve that result to a 1/2-o(1) fraction.
Goldmann (IPL 95) proved that a linear function over Z_m (m any odd
number) must differ from the parity function on at least a
1/2-1/exponential fraction of all points.
We provide the first lower bounds for approximations over Z_m by
nonlinear polynomials:
- A degree-2 polynomial over Z_m (m odd) must differ
from the parity function on at least a 1/2-1/n^Omega(1) fraction
of all points in the Boolean n-cube.
- A degree-O(1) polynomial over Z_m (m odd) must
differ from the parity function on at least a 1/2-o(1) fraction of
all points in the Boolean n-cube.
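The tiny experiment below (purely illustrative, not from the talk) brute-forces this phenomenon at the smallest interesting scale: on {0,1}^n we have x_i^2 = x_i, so multilinear monomials suffice, and for n = 3, m = 3 an exhaustive search over all degree-at-most-2 polynomials over Z_3 finds the minimum possible number of points of disagreement with parity.

```python
from itertools import product

# Illustrative brute force (not from the talk): how well can a degree-<=2
# polynomial over Z_3 agree with 3-bit parity on the Boolean cube?
n, m = 3, 3

def parity(x):
    return sum(x) % 2

points = list(product((0, 1), repeat=n))

# On {0,1}^n, x_i^2 = x_i, so multilinear monomials of degree <= 2 suffice:
# the constant, each x_i, and each product x_i * x_j.
monomials = ([()] + [(i,) for i in range(n)]
             + [(i, j) for i in range(n) for j in range(i + 1, n)])

# Precompute each monomial's value at each point of the cube.
vals = [[int(all(x[i] for i in mon)) for mon in monomials] for x in points]

# Exhaustively search all 3^7 coefficient vectors for the polynomial whose
# value mod 3 disagrees with parity on the fewest points.
best = len(points)
for coeffs in product(range(m), repeat=len(monomials)):
    disagree = sum(
        (sum(c * v for c, v in zip(coeffs, row)) % m) != parity(x)
        for row, x in zip(vals, points))
    best = min(best, disagree)

print(best, "of", len(points), "points")  # -> 1 of 8 points
```

Exact representation is impossible here because the unique multilinear representation of parity over Z_3 has degree 3; dropping its cubic term yields a degree-2 polynomial that disagrees only at (1,1,1).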
3.
Paul Beame
Super-linear time-space tradeoff lower bounds for randomized computation
We prove the first time-space tradeoff lower bounds for randomized computation
of decision problems where computation error is allowed. Our techniques
extend those used by Ajtai (STOC 99, FOCS 99) in his time-space
tradeoffs for zero-error deterministic RAM algorithms computing element
distinctness and for Boolean branching programs computing a natural quadratic
form.
Ajtai's bounds were of the following form: if the machine uses at most kn time
for some constant k, it requires space at least epsilon_k n for some constant
epsilon_k. We provide explicit relationships between k and epsilon_k which
improve the relationships implicit in Ajtai's arguments. In particular we
obtain non-trivial time-space tradeoffs for substantially superlinear time
bounds.
Joint work with Mike Saks, Xiaodong Sun, and Erik Vee.
4.
Eli Ben-Sasson
Near-Optimal Separation of Treelike and General Resolution
We present the best known separation
between tree-like and general resolution.
This is done by constructing a natural family of contradictions, of
size n, that have O(n)-size resolution
refutations, but only exp(Omega(n/log n))-size tree-like
refutations.
This result implies that the most commonly used automated theorem-proving
procedures, which produce tree-like resolution refutations,
will perform badly on some inputs, while other simple procedures
that produce general resolution refutations
will have polynomial runtime on these very same inputs.
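For readers unfamiliar with the proof system, the toy Python sketch below (illustrative only, not from the talk) implements the resolution rule and saturates a small unsatisfiable CNF until the empty clause appears; real provers explore this space far more selectively.

```python
from itertools import combinations

# Clauses are frozensets of nonzero integers; -v is the negation of v.
def resolvents(c1, c2):
    """All non-tautological resolvents of two clauses."""
    for lit in c1:
        if -lit in c2:
            res = (c1 - {lit}) | (c2 - {-lit})
            if not any(-l in res for l in res):  # drop tautologies
                yield frozenset(res)

def refutable(cnf):
    """Saturate under resolution; True iff the empty clause is derivable."""
    clauses = set(map(frozenset, cnf))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            new.update(resolvents(c1, c2))
        if frozenset() in new:
            return True          # derived the empty clause: refuted
        if new <= clauses:
            return False         # saturated without a refutation
        clauses |= new

# (x1) and (not x1 or x2) and (not x2) is unsatisfiable:
print(refutable([[1], [-1, 2], [-2]]))  # -> True
```

A refutation is tree-like when every derived clause is used at most once; the separation in the talk shows that this restriction can cost an exponential blow-up.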
We show, furthermore, that the gap we present is nearly optimal.
Specifically, if S (S_T) is the
minimal size of a (tree-like) refutation,
we prove that S_T = exp(O(S loglog S / log S)).
Joint work with Russell Impagliazzo and Avi Wigderson.
5.
Ingo Wegener
On the necessary amount of nondeterminism in certain models
of branching programs
Nondeterminism is a useful concept for branching programs.
Many lower bound techniques work for deterministic branching
program variants as well as for nondeterministic ones. It is
proved for several models that certain functions are representable
in polynomial size with a certain amount of nondeterminism,
but not with less.
6.
Martin Sauerhoff
On Probability-Amplification for Read-Once Branching Programs
By probability-amplification, we mean a general (meta-)algorithm which, given
a randomized algorithm A with error probability epsilon and an
additional constant epsilon' as input, produces a new randomized algorithm A'
for the same problem with error probability at most epsilon',
such that the complexity of A' is ``not much larger'' than the
complexity of A. It is well-known that such probability-amplification
algorithms exist for probabilistic Turing machines and with respect to
polynomial time computability. On the non-uniform side, the same can be said
for randomized (general) branching programs (BPs) and for randomized OBDDs
(ordered read-once BPs) with respect to representation in polynomial size.
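As a concrete reminder of what such a meta-algorithm looks like in the uniform setting, here is the standard majority-vote construction in Python (illustrative; the names are ours, not from the talk): repeat a 1/3-error decision procedure t times independently and output the majority answer, whose error drops exponentially in t.

```python
import random
from math import comb

def noisy_decider(truth, eps=1/3):
    """Toy randomized algorithm A: returns the correct bit w.p. 1 - eps."""
    return truth if random.random() >= eps else 1 - truth

def amplified(truth, t=101, eps=1/3):
    """A': run A t times independently and take the majority vote."""
    votes = sum(noisy_decider(truth, eps) for _ in range(t))
    return int(2 * votes > t)

def majority_error(t, eps):
    """Exact error of the t-fold majority: P(more than t/2 runs are wrong)."""
    assert t % 2 == 1, "use an odd number of repetitions"
    return sum(comb(t, k) * eps**k * (1 - eps)**(t - k)
               for k in range(t // 2 + 1, t + 1))
```

Here majority_error(1, 1/3) is exactly 1/3, while majority_error(101, 1/3) is below 1 percent; the point of the talk is that no analogously cheap transformation exists for read-once BPs.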
In the talk, it is shown that no such general probability-amplification
algorithm exists for read-once BPs. The talk deals with a
function called "ModSum" defined on Boolean matrices, which expresses a
property of such matrices based on calculations in finite fields. The
following can be shown for this function:
- "ModSum" can be represented by randomized read-once BPs of polynomial size
with two-sided error 1/3;
- "ModSum" has exponential size for randomized read-once BPs with constant
two-sided error smaller than 1/4.
Furthermore, it is easy to see that the complement of "ModSum" can be
represented by randomized read-once BPs with one-sided error 1/2 in polynomial
size. On the other hand, the size for the complement of "ModSum" goes up
exponentially as soon as the (one-sided) error is required to be smaller than
1/2.
Finally, the above results separate the analog of the complexity class NP for
read-once BPs from "BPP with error probabilities smaller than 1/4".
This is (still) the strongest separation between NP and BPP for read-once BPs
known so far.
7.
Rafail Ostrovsky
Computational PCP: Short One-Round Proofs for NP
Joint work with Bill Aiello, Sandeep Bhatt, and S. Rajagopalan.
8.
Rudiger Reischuk
Relationship between SAT and Dynamic Scheduling
One of the most fundamental tasks concerning
parallel and distributed programs and systems
is the design of efficient multiprocessor scheduling algorithms.
We introduce a new model for parallel
and distributed programs, called dynamic precedence graphs,
which succinctly represent all possible executions of the program.
Such graphs embed constructs for parallelism,
synchronization mechanisms, and conditional branches.
With respect to such a compact representation we investigate the complexity
of different aspects of the scheduling problem: the question whether a legal
schedule exists at all and how to find an optimal schedule.
Our analysis takes into account communication delays between processors
exchanging data.
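For contrast with the succinct setting studied in the talk: when the precedence graph is given explicitly, a legal schedule exists iff the graph is acyclic, which topological sorting decides in linear time. The sketch below (task names are illustrative) uses Python's standard graphlib.

```python
from graphlib import TopologicalSorter, CycleError

# Explicit precedence graph: task -> set of tasks that must finish first.
deps = {
    "compile": set(),
    "test": {"compile"},
    "package": {"compile", "test"},
}
# Any topological order is a legal one-processor schedule (ignoring delays).
order = list(TopologicalSorter(deps).static_order())
print(order)  # -> ['compile', 'test', 'package']

# A cyclic precedence graph admits no legal schedule at all.
try:
    list(TopologicalSorter({"a": {"b"}, "b": {"a"}}).static_order())
    legal = True
except CycleError:
    legal = False
print(legal)  # -> False
```

No such direct check applies to the succinct dynamic representation, which is exactly where the NL- to NEXPTIME-completeness results below come in.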
Precise characterizations of the computational complexity of several
variants of this compact scheduling problem are given.
We exhibit tight relationships to different versions of the satisfiability
problem for Boolean formulas and circuits.
The complexity results range from easy, that is NL-complete,
to very hard, namely NEXPTIME-complete.
Joint work with Andreas Jakoby and Maciej Liskiewicz.
Document last modified on April 7, 2000.