Co-sponsored by DIMACS and Microsoft Corporation.
Presented under the auspices of the DIMACS Special Focus on Next Generation Networks Technologies and Applications and the DIMACS Special Year on Networks.
1. Josh Benaloh, Microsoft Research Efficient Distribution of Fingerprinted Content One method of fingerprinting protected content, used by Boneh and Shaw as well as others, is to divide the content into "clips" and make two or more copies of each clip, each containing a distinct mark. Each recipient of the protected content would then receive an individualized sequence of clips that could later be used to identify the recipient from any released content. One distribution method for content marked in this way is to encrypt each copy of each clip with its own key and distribute the set of all copies of all clips over a common medium such as a CD, DVD, or broadcast channel. This reduces the problem of distributing individualized content to the problem of distributing individualized key sets. However, if the number of clips is large, even these key sets can be impractical to distribute individually. This talk will describe a method by which each of these key sets can effectively be compressed to the size of a single key. In some contexts, this reduced key-set size enables distribution options that would otherwise be precluded. For example, a single DVD could contain protected content and all keys necessary for a million individuals to retrieve individually fingerprinted copies of the content.
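To make the baseline concrete, here is a minimal sketch of the clip-marking setup described above (an illustration only; the talk's key-compression method is not reproduced here): each clip has two marked variants encrypted under separate keys, and a recipient's fingerprint bits select one variant key per clip.

```python
import secrets

N_CLIPS = 16

# One key per variant of each clip: variant_keys[i][b] protects variant b of clip i.
variant_keys = [[secrets.token_bytes(16), secrets.token_bytes(16)]
                for _ in range(N_CLIPS)]

def key_set_for(fingerprint):
    # The individualized key set: one key per clip, chosen by the recipient's
    # fingerprint bits. The talk's contribution is compressing this whole set
    # to the size of a single key.
    return [variant_keys[i][b] for i, b in enumerate(fingerprint)]

alice = [secrets.randbelow(2) for _ in range(N_CLIPS)]   # hypothetical recipient
print(len(key_set_for(alice)), "keys before compression")
```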
2. Jonathan D. Callas, Counterpane Internet Security and Bruce Schneier, Counterpane Internet Security The Effect of Anti-Circumvention Provisions on Security One of the properties of digital Intellectual Property (IP) is that it can be easily reproduced, modified, and transferred. In response, IP owners have been creating new security technologies for controlling digital works. Inevitably, this creates an opportunity for those who can circumvent those technologies. Recent changes in copyright law attempt to address this ongoing battle by prohibiting circumvention of these technologies. Unfortunately, this well-meaning provision does not tie circumvention to infringement; circumvention is prohibited even if the technological measure prevents someone from doing something they are entitled to do. This leads to a number of unfortunate effects on the development of security systems, including techniques that protect intellectual property itself.
3. Christian Collberg, University of Arizona and Clark Thomborson, University of Auckland Watermarking, Tamper-Proofing, and Obfuscation: Tools for Software Protection We identify three types of attack on the intellectual property contained in software, and three corresponding technical defenses. A defense against reverse engineering is obfuscation, a process that renders software unintelligible but still functional. A defense against software piracy is watermarking, a process that makes it possible to determine the origin of software. A defense against tampering is tamper-proofing, which causes unauthorized modifications to software (for example, to remove a watermark) to result in non-functional code. We briefly survey the available technology for each type of defense.
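As a concrete illustration of the tamper-proofing idea in entry 3 (a simplified sketch of my own, not the authors' technique), a program can carry a digest of its protected code and refuse to run if that code has been modified, for example to strip a watermark; real tamper-proofing interleaves many such checks and hides them.

```python
import hashlib

def licensed_feature(x):
    # Code whose integrity (e.g., an embedded watermark) we want to protect.
    return x * 2

def digest_of(fn):
    return hashlib.sha256(fn.__code__.co_code).hexdigest()

# In a deployed scheme this digest would be computed at build time and hidden
# in the shipped binary; here we simply capture it at startup.
EXPECTED = digest_of(licensed_feature)

def guarded_call(x):
    if digest_of(licensed_feature) != EXPECTED:
        raise SystemExit("tamper check failed: refusing to run")
    return licensed_feature(x)

print(guarded_call(21))  # 42; a patched licensed_feature would trip the check
```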
4. Drew Dean, Xerox PARC Divx, DPRM, and SDMI Divx was the first consumer product based on controlled use of digital content. Its cancellation suggests that while the technology is promising, a number of issues remain in designing digital property rights management systems that will achieve widespread success in the consumer market. This talk examines technical and policy factors that influence consumer acceptance of digital property rights management technology. Technical and legal aspects of the security of these systems are also discussed.
5. Kaoru Kurosawa, Tokyo Institute of Technology, Mike Burmester, Royal Holloway, and Yvo Desmedt, University of Florida A Proven Secure Tracing Algorithm for the Optimal KD Traitor Tracing Scheme In this paper, we present a proven secure black-box tracing algorithm for the Kurosawa-Desmedt one-time traceability scheme of Eurocrypt '98. It traces not only traitors who mount the Stinson-Wei/Boneh-Franklin attack but arbitrary traitors. Our result implies that the lower bounds of Kurosawa and Desmedt are tight and that the scheme is optimal.
6. Glenn Durfee, Stanford University Distribution Chain Security Digital content distribution systems will enable business models in the near future that cannot be predicted today. In this paper, we identify a new security problem that is crucial to enabling them. The problem arises from the conflicting privacy and integrity goals of middlemen in digital distribution chains. We present a cryptographic solution based on commitment schemes and zero-knowledge proofs of arithmetic relations. Our implementation and timing experiments demonstrate that our solution is practical and efficient.
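For readers unfamiliar with the primitives mentioned, here is a minimal hash-based commitment sketch (my illustration; the paper's construction and its zero-knowledge proofs are not shown): a middleman can commit to a value such as a price without revealing it, then later open the commitment so the next party in the chain can verify it was not changed.

```python
import hashlib, secrets

def commit(value):
    nonce = secrets.token_bytes(32)                       # hiding randomness
    digest = hashlib.sha256(nonce + value.to_bytes(8, "big")).digest()
    return digest, nonce                                  # (commitment, opening)

def verify(digest, nonce, value):
    return hashlib.sha256(nonce + value.to_bytes(8, "big")).digest() == digest

c, r = commit(499)            # commit to a hypothetical wholesale price of 4.99
assert verify(c, r, 499)      # the opening convinces the verifier
assert not verify(c, r, 500)  # ...and binds the committer to the value
```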
7. Juan Garay, Jessica Staddon and Avishai Wool, Bell Labs Long-Lived Broadcast Encryption In a broadcast encryption scheme, digital content is encrypted to ensure that only privileged users can recover the content from the encrypted broadcast. Key material is usually held in a "tamper-resistant," replaceable smartcard. A coalition of users may attack such a system by breaking their smartcards open, extracting the keys, and building "pirate decoders" based on the decryption keys they extract. In this talk we suggest the notion of long-lived broadcast encryption as a way of adapting broadcast encryption to the presence of pirate decoders, maintaining the security of broadcasts to privileged users while rendering all pirate decoders useless. Long-lived broadcast encryption schemes are a more comprehensive solution to piracy than traitor tracing schemes, because the latter only seek to identify the makers of pirate decoders and do not address how to maintain secure broadcasts once keys have been compromised. When a pirate decoder is detected in a long-lived encryption scheme, the keys it contains are viewed as compromised and are no longer used for encrypting content. We demonstrate that although a broadcast encryption scheme may only be designed to allow any set of $m$ users to be excluded, it can tolerate a high number of compromised cards (in addition to the $m$ excluded cards) before any recarding of users is necessary. In addition, we provide both empirical and theoretical evidence that there is a long-lived broadcast encryption scheme which achieves a steady state in which only a small fraction of cards needs to be replaced in each epoch. That is, for any fraction $\beta$, we can choose the total number of keys $K$ such that eventually at most a $\beta$ fraction of the cards must be replaced in each epoch.
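The setting can be sketched as follows (a simplification of my own, not the paper's scheme): each smartcard holds a random subset of a pool of $K$ keys, and once a card is excluded or found inside a pirate decoder, broadcasts are encrypted only under pool keys that no such card holds.

```python
import secrets

K = 64                 # total number of keys in the pool
KEYS_PER_CARD = 8
rng = secrets.SystemRandom()

def issue_card():
    # A card is modeled as the set of key indices burned into it.
    return set(rng.sample(range(K), KEYS_PER_CARD))

def usable_keys(bad_cards):
    # Keys still safe to encrypt under: held by no excluded/compromised card.
    burned = set().union(*bad_cards) if bad_cards else set()
    return set(range(K)) - burned

cards = [issue_card() for _ in range(1000)]
pirates = cards[:5]                             # five cards broken open
safe = usable_keys(pirates)
alive = sum(1 for c in cards[5:] if c & safe)   # cards that can still decrypt
print(len(safe), "safe keys;", alive, "of 995 honest cards still functional")
```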
8. Stuart Haber, InterTrust STAR Lab Digital Rights Management: Research Questions and Practical Implementations The newly widespread availability of information in digital form raises a number of interesting questions about how to design a system for commercial transactions involving such information. In the non-digital world, intellectual-property regulations governed (and govern) many aspects of access to information; it remains to be seen how this will transfer to the digital world. After discussing the requirements that one might desire of a system for commerce in digital intellectual property, the speaker will describe the architecture of one such system, the one now deployed by InterTrust Technologies. The speaker will also describe several research problems that arise in the design of a system for digital rights management, including problems in computer security, cryptography, language design, business models, game theory, and distributed systems. Some of these problems have partial solutions, and some of them are wide open. Adequate solutions to some of these problems may enable new sorts of information transactions, in addition to enabling digital analogues of current practice in the physical world.
9. Gregory L. Heileman and Carlos E. Pizano, University of New Mexico and Elisar Software Corporation Copy-Protection Policies for Digital Images We survey the major technologies that are being proposed for protecting on-line image content. These include digital watermarking (both visible and invisible), client-side copy disabling (both at the device driver and browser levels), and secure image transfer protocols. Our discussion will address the ease of use and the level of security provided by each of these, along with the assumptions implicit in each approach. Furthermore, we will discuss the specific points in the intellectual property management "pipeline" where each of these technologies is likely to prove effective. This will reveal a number of common misconceptions that currently surround their use. Finally, we consider a specific usage model for conducting image-related commerce, where image content must be made globally viewable to all who access the Internet. This model, which is currently in widespread use, offers particular difficulties to the intellectual property management process. These difficulties will be considered, along with the trade-offs associated with using particular combinations of the aforementioned technologies to enforce image copyright and usage terms under this model.
10. Mariusz Jakubowski and Ramarathnam Venkatesan, Microsoft Research Image Hashing We present new algorithms for hashing, or one-way image compression, and comparison of bitmapped images. Our methods are based on random multiscale subdivision of images into regions, randomized rounding of intensity averages in those regions, and robust compression of the resulting vectors by error correction. As hashes useful for robust identification and comparison of images, the compressed vectors can replace watermarks. Our schemes work with images subjected to common distortions, including scanning, resizing, and resampling. Additionally, image hashes withstand anti-watermark transformations performed by software such as StirMark and unZign.
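A rough sketch of the underlying idea (my illustration; the authors' algorithm, including the error-correction stage, is not reproduced): average pixel intensities over pseudo-randomly chosen regions of varying scale and round coarsely, so the resulting hash tolerates mild distortions such as resizing or resampling.

```python
import numpy as np

def image_hash(img, n_regions=32, seed=0):
    rng = np.random.default_rng(seed)   # seed acts as a key shared with the verifier
    h, w = img.shape
    bits = []
    for _ in range(n_regions):
        # Pick a random rectangle at a random scale.
        rh, rw = rng.integers(h // 8, h // 2), rng.integers(w // 8, w // 2)
        y, x = rng.integers(0, h - rh), rng.integers(0, w - rw)
        bits.append(int(img[y:y + rh, x:x + rw].mean() // 32))  # coarse rounding
    return bits

def similar(h1, h2, tolerance=4):
    # Compare with tolerance rather than exact equality.
    return sum(a != b for a, b in zip(h1, h2)) <= tolerance

img = np.random.default_rng(1).integers(0, 256, size=(256, 256))
print(image_hash(img))
```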
11. Clifford Lynch, Coalition for Networked Information, Joan Feigenbaum, AT&T Labs Summary of the National Research Council's "Digital Dilemma" Report: Findings, Non-Findings, and Implications for the Technical Agenda In November 1999, the National Research Council released a report entitled "The Digital Dilemma: Intellectual Property in the Information Age" that took a wide-ranging look at copyright in the digital environment. Both Lynch and Feigenbaum served on the committee that authored the report. Among other things, the report looks carefully at areas in which technology interacts with, and potentially restructures, markets for and uses of copyrighted works. Close attention was paid to both the potential pluses and the potential minuses of "technological protection mechanisms" for digital works. This workshop session has two goals. First, it will provide an overview of the findings of the report, including recommendations for public policy, legislative action (or the lack thereof), and research. These form an important context for future research and development in protection mechanisms and other relevant technologies. Second, the session is intended to provide a discussion forum and to begin consideration of how the technical community can move forward from here. The NRC report contains mostly general findings, and the speakers would like to encourage workshop participants to examine the implications of these findings and to try to make them more precise, with the goal of good technological evolution in mind. For example, some key legal concepts in copyright that play important public-policy roles (notably "fair use") may be technologically infeasible to implement in the digital world. What are the alternatives, and how might fair use be recast and preserved? As another example, the report speaks of matching appropriate technological measures to appropriate, complementary business models. While this seems like something with which no reasonable person could argue, we do not have a good methodology for applying it to specific works and specific business plans. This suggests the possibility of good new research directions. Following a summary of the report, the presenters will serve as panelists to frame a series of these questions and lead the audience in a discussion of implications and proposed approaches.
12. Andrew Odlyzko, AT&T Labs - Research Stronger Copyright Protection for Cyberspace: Desirable, Inevitable, and Irrelevant Major revisions of copyright laws are being enacted around the world in response to strong demand from publishers, movie studios, and other "content providers." These revisions substantially restrict many traditional rights of users, such as the "first sale" doctrine. The push to strengthen copyright protection is resisted by scholars and librarians, who fear drastic curtailments in public access to information. This essay argues that the outcome of the ongoing battle will not matter much, and that specific provisions of copyright laws will be of minor importance. Technological, economic, and sociological factors will be the primary determinants of how information goods are sold. Content producers are likely to get stronger legal protection. However, they will probably find that they do not need it. On one hand, effective electronic commerce will make customizable contracts much easier to arrange, so that content providers will be able to obtain any restrictions they wish through contract law. On the other hand, competition will make tight restrictions inadvisable. Netscape did not make its source code public because of any deficiencies in copyright laws, after all. Reduced barriers to entry and reduced costs do bring about a much more competitive market for most information goods. Stronger copyright protection is desirable, primarily because it would enable a greater degree of price discrimination by content providers. It is also inevitable, for several reasons. At one level, it reflects the political influence of content providers, which is visible in the rash of legislation around the world. At a more basic level, it reflects the needs of an information economy. The market for scholarly information is small. Even the trade press is not very large. The total of about $20 billion per year in book sales in the U.S. pales beside the revenues of software companies, and especially beside the total IT sector of the economy, which is estimated at around $600 billion per year. The effective functioning of this much larger sector requires allowing flexible contractual arrangements. Therefore the government has limited options in using its power to decide which contracts to enforce, especially since in an online environment content producers will have great flexibility in packaging their goods. (The current antitrust lawsuit may establish that Internet Explorer is not an integral part of Windows 98, but it is clear that Microsoft can, if it chooses, make it an integral part of a later operating system.)
13. R.J. Lipton and D.N. Serpanos Defense Against Man-in-the-Middle Attacks in Client-Server Systems with Secure Servers The deployment of several client-server applications over the Internet and emerging networks requires the establishment of the client's integrity. This is necessary for the protection of the copyright of distributed material. Clients are vulnerable to powerful man-in-the-middle attacks mounted through viruses that are undetectable by conventional anti-virus technology. We describe such powerful viruses and show their ability to produce compromised clients that cannot protect copyrighted material. We introduce a methodology based on simple hardware devices, called "spies," which enables servers to establish the integrity of clients and leads to a successful defense against viruses that use man-in-the-middle attacks.
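The flavor of the approach can be illustrated with a toy challenge-response attestation (a sketch under my own assumptions; the paper's spy protocol differs in its details): the spy device shares a secret with the server and answers fresh challenges with a MAC over the client state it observes, which a man-in-the-middle virus cannot forge.

```python
import hmac, hashlib, secrets

SPY_SECRET = secrets.token_bytes(32)   # installed in the spy, known to the server

def spy_attest(challenge, observed_state):
    # Computed inside the trusted hardware spy.
    return hmac.new(SPY_SECRET, challenge + observed_state, hashlib.sha256).digest()

def server_verify(challenge, reported_state, attestation):
    expected = hmac.new(SPY_SECRET, challenge + reported_state, hashlib.sha256).digest()
    return hmac.compare_digest(expected, attestation)

nonce = secrets.token_bytes(16)                    # fresh challenge defeats replay
state = b"digest-of-observed-client-code"
assert server_verify(nonce, state, spy_attest(nonce, state))
assert not server_verify(nonce, b"forged-state", spy_attest(nonce, state))
```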
14. Narayanan ("Shiva") Shivakumar, Gigabeat Large Scale Copy Detection Currently, any small-time cyber-pirate can make copies of music CDs and books available on the web in digital format to a large audience at virtually no cost. Content publishers such as Disney and Sony Records are therefore expected to lose several billion dollars over the next few years in copyright revenues. To address this problem, we propose building a copy detection system (CDS) with which content publishers register their valuable digital content. The CDS then crawls the web, compares the web content to the registered content, and notifies the content owners of illegal copies. In my talk, I will discuss how to build such a system so it is accurate, scalable (e.g., to hundreds of gigabytes of data, or millions of web pages), and resilient to "attacks" (e.g., partial audio clips) from cyber-pirates. I will also discuss three prototype CDSs I have built as proofs of concept: (1) SCAM (Stanford Copy Analysis Mechanism), (2) FRAUD (Finding Replicas of AUDio), and (3) DECIV (DEtecting CopIes of Video), for finding textual documents, audio clips, and video clips on the web.
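The text-matching core of such a system can be sketched with standard shingle fingerprinting (my illustration; SCAM's actual algorithm is not reproduced here): hash overlapping word windows of registered content and flag crawled pages that share many fingerprints.

```python
import hashlib

def fingerprints(text, shingle_len=5):
    words = text.lower().split()
    return {
        hashlib.sha1(" ".join(words[i:i + shingle_len]).encode()).digest()[:8]
        for i in range(max(1, len(words) - shingle_len + 1))
    }

def overlap(registered, crawled):
    a, b = fingerprints(registered), fingerprints(crawled)
    return len(a & b) / max(1, len(a))

# Pages whose overlap with registered content exceeds a threshold would be
# reported to the content owner.
print(overlap("the quick brown fox jumps over the lazy dog",
              "a quick brown fox jumps over the lazy dog today"))   # 0.8
```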
15. Barbara Simons, ACM Intellectual Property in the Information Age: Will Laws and Technology Destroy Public Libraries? A few years ago Hollywood and the music industry discovered the Internet and realized, much to their horror, that the technology now exists to make arbitrary numbers of perfect copies of a digitized object. As a result, we have seen an explosion of legislative and treaty proposals. For example, the Digital Millennium Copyright Act, passed in 1998, attempts to outlaw devices and technologies that can be used to bypass copyright controls. This legislation has several bad features, among them the unintended side effect of making some legitimate computer security research illegal. It could even criminalize some techniques, such as reverse engineering, that were required to correct Y2K problems. Both the legislation that is passed and the manner in which technology is implemented will have a major impact on the rights and responsibilities of creators and users of intellectual property. How will copyright be affected? Will new laws and new technologies that protect intellectual property eliminate user rights of fair use and first sale? Are we moving from copyright protection of books and magazines on the net to contract law, and if so, what are the potential repercussions? The manner in which these questions are resolved will have a significant impact on our society.
16. Yacov Yacobi, Microsoft Research Passive Fingerprinting We improve on the Boneh-Shaw fingerprinting scheme in two ways: (i) We merge a Direct Sequence Spread Spectrum (DSSS) embedding layer with the first Boneh-Shaw layer (the so-called "$\Gamma$ code"), effectively increasing the protected object size by about four orders of magnitude. As a result we obtain more than an order of magnitude improvement in the size of the collusions that we can overcome. (ii) We replace the "marking assumption" with a more realistic assumption, allowing random jamming on the so-called "unseen" bits. Key Words: Watermarks, Fingerprints, Tracing Traitors, Anti-piracy, Intellectual Property Protection, Collusion-resistance.
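As background for the DSSS layer, here is a minimal spread-spectrum embedding sketch (my illustration, not the paper's construction): add a keyed pseudo-random $\pm 1$ chip sequence, scaled by a small strength, to the host signal, and detect the mark by correlation.

```python
import numpy as np

def chips(key, n):
    return np.random.default_rng(key).choice([-1.0, 1.0], size=n)

def embed(signal, key, strength=0.05):
    return signal + strength * chips(key, signal.size)

def detect(signal, key):
    # Near zero on unmarked content, near `strength` on marked content.
    return float(signal @ chips(key, signal.size)) / signal.size

host = np.random.default_rng(1).normal(size=100_000)    # stand-in for media samples
marked = embed(host, key=42)
print(detect(marked, 42), detect(host, 42))             # ~0.05 vs ~0.0
```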