Location: TBD

**Organizers:**

- **Kurt Rohloff**, NJIT, rohloff at njit.edu
- **Greg Shannon**, CMU, shannon at cert.org
- **W. Konrad Vesey**, Elkridge Security, konrad at elkridgesecurity.com

The best cyber technologies provide strong guarantees on security and privacy properties and require substantial computation either to establish a property or to break it. For example, system verification techniques require exhaustive or deep searches of very large state spaces, and encryption technologies are built on computational-hardness assumptions that make brute-force decryption infeasible. This principle of substantial computation for security and privacy applies even in operational settings, such as finding vulnerabilities and data leaks in open-source software or detecting zero-day exploits by analyzing data from an enterprise's host and network monitors.

Although computational capability is important, limitations on other resources, such as memory, electrical power, and bandwidth, also affect our ability to provide cyber security. For example, in distributed environments these non-computational resource limits help protect encryption-based technologies, such as Secure Multi-party Computation (SMC) and signature systems, against compromise. Additionally, because verification of system-level properties is computationally intensive, often requiring high-performance computing, the same resource constraints limit our ability to give "proofs" of system security.

Recent improvements in security and privacy algorithms have yielded orders-of-magnitude reductions in the computation required to ensure properties; similar results exist for more efficient methods of compromising security and privacy properties.
While computational efficiency is required, evidence suggests that energy is the fundamental limitation in practical large-scale computations, especially where the computations must be completed in a reasonable amount of time (at a fixed rate of computation). ^{1}
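The scale of this energy limit can be seen in a back-of-the-envelope sketch (our illustration, not a result cited by the workshop) that applies the Landauer limit, the thermodynamic minimum of kT ln 2 joules per irreversible bit operation, to a brute-force search of a 128-bit keyspace:

```python
import math

# Landauer limit: minimum energy for one irreversible bit operation.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
landauer_J = k_B * T * math.log(2)   # about 2.87e-21 J

# Brute-forcing a 128-bit key takes on the order of 2**128 trials.
# Charging only one bit operation per trial (a wild underestimate),
# the minimum total energy is:
keyspace = 2**128
energy_J = keyspace * landauer_J

gw_year_J = 1e9 * 365.25 * 24 * 3600   # one gigawatt-year, in joules
print(f"Landauer energy per bit op: {landauer_J:.2e} J")
print(f"Energy for 2**128 trials:  {energy_J:.2e} J")
print(f"In gigawatt-years:         {energy_J / gw_year_J:.0f}")
```

Even at this physical lower bound, with every real-world overhead ignored, the search costs roughly 10^18 J, on the order of thirty gigawatt-years, which is one sense in which computational-hardness assumptions translate directly into energy costs.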

Finding: With power, space, and cooling as forcing functions, computational power efficiency will become ever more important. Application-specific solutions, on which the IC has historically relied for a leg up, will be even more important for the computationally hard problems of the future. Maximum efficiencies will come from the right choice of computational style and platform.

Of particular interest is energy, which dominates costs in large computations. Some parallel algorithms use non-computational resources poorly because of data movement and communication. Open questions include whether accounting for non-computational resources changes the balance of security enough to revisit traditional security assumptions, and whether additional investments in energy-efficient computation are justified for verifying cyber security.
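The data-movement point can be sketched with representative per-operation energy figures of the kind often cited in the computer-architecture literature (the specific numbers and the kernel model are our assumptions for illustration, not values from the workshop):

```python
# Representative per-operation energies (approximate figures, chosen
# for illustration; real values vary by process node and design).
PJ = 1e-12               # one picojoule, in joules
E_FLOP = 20 * PJ         # one double-precision floating-point operation
E_DRAM = 2000 * PJ       # one 64-bit word fetched from off-chip DRAM

def kernel_energy(n_ops, reuse):
    """Energy of a hypothetical kernel doing n_ops FLOPs, where each
    word fetched from DRAM is reused `reuse` times before eviction."""
    compute = n_ops * E_FLOP
    movement = (n_ops / reuse) * E_DRAM
    return compute, movement

n = 1e9
for reuse in (1, 10, 100):
    compute, movement = kernel_energy(n, reuse)
    print(f"reuse={reuse:3}: compute {compute:.2f} J, movement {movement:.2f} J")
```

With no reuse, moving the data costs about 100 times more energy than computing on it; only with substantial on-chip reuse does arithmetic begin to dominate, which is why energy-aware parallel algorithms are largely communication-avoiding algorithms.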

This workshop will consider these problems and tradeoffs from a system-design point of view. We will also consider whether energy-optimization techniques create side channels or limit verification of cyber security and privacy. Previous DIMACS workshops have shown that computation on metadata can reveal sensitive information. From a hardware/infrastructure perspective, many field-deployable lower-level energy-efficiency techniques assume floating-point arithmetic (as on GPUs or even in some HDL designs). Even in massive Hadoop/MapReduce-focused datacenters, there is a comparable assumption that not every bit needs to be accounted for, because "real" data is messy. Neither assumption holds for cryptography and cryptanalysis. The goal of this workshop is to consider how results from these two communities can be combined and extended to create energy-efficient algorithms that enable cyber security and privacy theoretically, algorithmically, and practically.
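The claim that lossy arithmetic is unacceptable for cryptography can be made concrete with a minimal example (ours, for illustration): IEEE-754 double precision carries a 53-bit significand, so multiplying two 61-bit operands, a routine step in the modular arithmetic of RSA- or lattice-based schemes, silently drops low-order bits that the computation must preserve.

```python
# Two 61-bit integers, as might arise as residues in modular arithmetic.
a = 2**60 + 1
b = 2**60 + 3

exact = a * b                       # Python ints are exact, arbitrary precision
approx = int(float(a) * float(b))   # float64 keeps only 53 significand bits

print(f"exact  = {exact}")
print(f"approx = {approx}")
print(f"difference = {exact - approx}")
# The results differ: every dropped low-order bit would corrupt a
# subsequent modular reduction, so crypto kernels must be bit-exact.
```

This is why the floating-point and "messy data" assumptions behind many energy-efficiency techniques break down for cryptographic and cryptanalytic workloads.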

TOPICS OF DISCUSSION

- What models of energy-efficiency (EE) should cyber security researchers consider?
- What current encryption technologies are most resistant to cryptanalysis based on electrical power availability or lack thereof?
- Would approaches such as theorem provers and linear programming technologies provide multiple orders of magnitude improvements over current, energy-ignorant algorithms?
- Can we state lower bounds on the energy needed to find a counterexample when a given amount of computation has so far failed to find one?
- Computing models and their energy-efficiency versus effectiveness tradeoffs for cybersecurity (e.g., the mismatch between GPUs and lattice-based encryption schemes such as FHE).
- Can assured computations (e.g., FHE, verified computation, obfuscated computation, privacy-protecting databases and queries) be significantly more energy efficient?
- In cyber modeling and simulation, what are the EE issues and how might EE help in performance?


Document last modified on May 15, 2015.