Computer Science Department, Boston University
111 Cummington St., Boston, MA 02215
EXPERTISE: Systems and Information Security and Fault-Tolerance, Applied Cryptography, Evolving/Self-Healing Crypto-systems, Secure Multicast, Digital Signatures, Key Management, Public Key Infrastructures and Certificate Management, Content Protection, Distributed Computation
1999 - present Boston University, Computer Science Department, Boston, MA
Assistant Professor. Perform research (over 950 citations to date) and teach in the areas of Network Security, Applied Cryptography, and Secure Distributed Computing. Defined and supervised a number of graduate and undergraduate projects. Developed and taught a course in Network Security; also taught Cryptography, Algorithms, Topics in Internet Security, Intro to CS II, Combinatoric Structures, and Algebraic Algorithms. Program committee member and reviewer for various conferences and journals.
The Center for Reliable Information Systems and Cyber Security (RISCS).
Principal Investigator. NSF Trusted Computing grant: Systems Security Through Fault-Tolerant Cryptography.
2006 Massachusetts Institute of Technology, Cambridge, MA
Visiting Scientist (Spring). Cryptography and Information Security Group.
1995 - 1999 NDS Technologies, Jerusalem, Israel
Senior Researcher. As a security expert, participated in the design and development of smart-card-based Conditional Access systems for pay-TV, including interactive systems and electronic commerce. Responsible for the design and implementation of security features; provided consulting help to developers. Designed an ASIC generator of hard-to-analyze and hard-to-emulate functions. As Chief Architect of Content Protection, defined NDS strategy and led efforts in this area. Represented NDS and prepared proposals for the Copy Protection Technical Working Group (CPTWG), the Digital Audio-Visual Council (DAVIC), and MPEG-4. In DAVIC, served as Vice-Chair and led the Security and Content Protection work. Performed research in cryptography and computer security; developed new methods for asymmetric message authentication and secure multicast over the Internet.
1995 Technion, Computer Science Department, Haifa, Israel
Postdoctoral Fellow. Research on distributed fault-tolerant computations and graph algorithms.
1993 Boston University, Computer Science Department, Boston, MA
Research Assistant. Research in the areas of distributed, parallel and fault-tolerant computations, VLSI, and graph algorithms. Studied modern cryptology and interactive (zero-knowledge) proofs.
1989 Charles River Analytics, Cambridge, MA
Summer Scientist. Investigated innovative approaches to software engineering using new results in theoretical computer science (in particular, Prof. M. Blum’s theory of program checkers) as a part of NASA sponsored study of fault-tolerant computations.
1988 Prime Computer Inc., Natick, MA
Summer Consultant. Developed UNIX tools for conversion, parallelization, optimization and testing of large scientific application packages. Developed partial prototypes for system re-design to include some learning capabilities. Developed enhancement for an expert system for financial applications.
1986 - 1993 Boston University, Boston, MA
Instructor. Taught graduate and undergraduate courses in Algorithms, Data Structures, Theory of Computation, Theory of Computer Languages, and Computational Complexity.
1986 Wentworth Institute Of Technology, Boston, MA
Instructor. Taught undergraduate course in Expert Systems.
1986 Computervision Corporation
Software Engineer. Designed and implemented algorithms for 3-D geometric modeling. Developed new tools for CAD/CAM applications. Developed postprocessors for CAD/CAM NC packages.
EDUCATION: Boston University, Boston, MA
PhD in Computer Science, 1993
MS in Computer Science, 1987
Massachusetts Institute Of Technology, Cambridge, MA
BS in Physics, 1983
PERSONAL: US Citizen
REFERENCES: Available upon request
Self-Organizing Computing Media: Communication, Fault-Tolerance and Asynchrony in Parallel and Distributed Computation, Ph.D. thesis. Adviser: Prof. L. Levin.
G. Itkis and L. A. Levin.
Power of Fast VLSI Models Is Insensitive to Wires' Thinness,
IEEE Symposium on Foundations of Computer Science (FOCS).
Comparing VLSI models we show when the effect of wires' speed overtakes any effect of wires' width. We give the best possible (slightly super-linear) threshold at which all effects of wires' width vanish. These results can also be viewed in terms of routing.
G. Itkis and L. A. Levin.
Lean Self-Stabilizing Asynchronous Protocols,
IEEE Symposium on Foundations of Computer Science (FOCS), 1994.
We consider asynchronous general topology dynamic networks of identical nameless processors with worst-case transient faults. Our protocols require only constant space at each processor and, starting from any faulty configuration, self-stabilize any computation in (expected) time polynomial in the (unknown) network diameter. In particular, we present deterministic breadth-first forest and randomized leader election and breadth-first spanning tree self-stabilizing protocols, and give a robust characterization of the computational power of distributed networks.
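The self-stabilization property described above (convergence from an arbitrary faulty configuration) can be illustrated with a much simpler classic protocol than the constant-space general-topology ones of the paper: Dijkstra's K-state token ring, simulated here under a random asynchronous scheduler. This sketch is an illustration of the concept only, not the paper's protocols:

```python
import random

def step(states, K):
    """One asynchronous step of Dijkstra's K-state token ring:
    find the privileged (enabled) processors and fire one of them."""
    n = len(states)
    enabled = [0] if states[0] == states[-1] else []
    enabled += [i for i in range(1, n) if states[i] != states[i - 1]]
    i = random.choice(enabled)           # at least one privilege always exists
    if i == 0:
        states[0] = (states[0] + 1) % K
    else:
        states[i] = states[i - 1]
    return enabled

n, K = 5, 6                              # K >= n guarantees stabilization
random.seed(1)
states = [random.randrange(K) for _ in range(n)]   # arbitrary faulty start
for _ in range(200):
    step(states, K)
# Once stabilized, exactly one processor is privileged at any time.
assert len(step(states, K)) == 1
```

The run starts from a random (possibly illegal) configuration and, after enough steps under any scheduler, reaches and stays in configurations with a single circulating privilege.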
G. Itkis, C. Lin and J. Simon.
Deterministic, Constant Space, Self-Stabilizing Leader Election on Uniform Rings,
International Workshop on Distributed Algorithms
(WDAG), LNCS 972, 1995.
A deterministic self-stabilizing protocol for leader election (starting from any faulty configuration) is presented for uniform bidirectional rings with an (unknown) prime number of identical nameless processors of constant size (i.e., independent of the ring size). Randomization is known to be necessary for rings of non-prime size. Self-stabilizing leader election on unidirectional rings requires processors of logarithmic size.
S. Even, G. Itkis and S. Rajsbaum.
On Mixed Connectivity Certificates,
Theoretical Computer Science
203(2), pp:253-269, 1998.
Also appears in Proceedings of the Third European Symposium on Algorithms (ESA, keynote address paper), LNCS 979, pp:1-16, Springer-Verlag, 1995.
Vertex and edge connectivity are special cases of mixed connectivity, which treats edges and vertices from a specified set equally. A k-connectivity certificate is a spanning subgraph preserving the graph's connectivity up to k. We unify previous results on such certificates and extend them to handle mixed connectivity and arbitrary parallel edges. We also present a new distributed algorithm for finding such certificates, with better time and communication complexities than previously known (in fact, the latter is optimal).
G. Itkis, I. Newman and A. Schuster.
Broadcasting on a Budget in the Multi-Service Communication Model,
International Conference on High Performance Computing (HiPC), 1998.
In this paper we introduce the multi-service model of network communication. This model attempts to capture recent communication technology trends, such as some aspects of quality-of-service and their relation to the emerging technology of automatic pricing, e.g., for Internet services. The multi-service model differs from related models by taking into account communication and service activation time, thus restricting parallelism to better fit reality. We apply this model to a heterogeneous communication environment, which includes different communication media or connection providers with their respective pricing policies. We give some analysis and algorithms for optimal dissemination of information given a limited budget.
R. Canetti, J. Garay, G. Itkis, D. Micciancio, M. Naor, and B. Pinkas.
Multicast Security: A Taxonomy and Some Efficient Constructions, IEEE INFOCOM 1999.
This paper presents a taxonomy of multicast scenarios with the relevant security concerns, and addresses two major security problems of multicast: source authentication and key revocation. The approach of asymmetric MACs (proposed in the CRYPTO-96 rump-session presentation below) is further developed and applied to authenticating multicast sources. An improvement of the previous solution to the key revocation problem is also presented. This paper is also the basis for a recent Internet-Draft prepared for the Secure Multicast Group (SMuG) of the IRTF.
G. Itkis and L. Reyzin.
Forward-Secure Signatures with Optimal Signing and Verifying, Advances in Cryptology – CRYPTO 2001.
Ordinary digital signatures have an inherent weakness: if the secret key is leaked, then all signatures, even the ones generated before the leak, are no longer trustworthy. Forward-secure digital signatures were recently proposed to address this weakness: they ensure that past signatures remain secure even if the current secret key is leaked.
We propose the first forward-secure signature scheme for which both signing and verifying are as efficient as for one of the most efficient ordinary signature schemes (Guillou-Quisquater): each requiring just two modular exponentiations with a short exponent. The previously proposed forward-secure signature schemes had significant cost overhead for signing and verifying, compared with ordinary schemes. Our scheme requires only fractional increases to the sizes of keys and signatures, and no additional public storage.
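The forward-security principle above can be illustrated with a minimal key-evolution sketch: the secret key is passed through a one-way function at each period and the old key is erased, so a key stolen at period t cannot be rolled back to earlier periods. This is only the generic evolution skeleton (with a hypothetical initial key), not the Guillou-Quisquater-based scheme of the paper:

```python
import hashlib

def evolve(sk: bytes) -> bytes:
    """One-way key update: sk_{t+1} = H(sk_t); the old key is discarded."""
    return hashlib.sha256(sk).digest()

sk = b"initial secret key material"    # sk_0 (hypothetical)
keys = []
for t in range(5):
    keys.append(sk)                    # key in effect during period t
    sk = evolve(sk)                    # signer erases the old key afterwards

# An attacker who steals the key of period 3 can compute keys for periods >= 3,
# but recovering earlier keys would require inverting SHA-256.
assert evolve(keys[3]) == keys[4]
```

The public key stays fixed; only the secret side evolves, which is what keeps past signatures trustworthy after a leak.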
Gene Itkis and Leo Reyzin.
SiBIR: Signer-Base Intrusion-Resilient Signatures. Advances in Cryptology – CRYPTO 2002.
We propose a new notion of intrusion-resilient signature schemes, which generalizes and improves upon both forward-secure and key-insulated signature schemes.
Specifically, as in the prior notions, time is divided into predefined time periods (e.g., days); each signature includes the number of the time period in which it was generated; while the public key remains the same, the secret keys evolve with time. Also, as in key-insulated schemes, the user has two modules, signer and home base: the signer generates signatures on its own, and the base is needed only to help update the signer's key from one period to the next.
The main strength of intrusion-resilient schemes, as opposed to prior notions, is that they remain secure even after arbitrarily many compromises of both modules, as long as the compromises are not simultaneous. Moreover, even if the intruder does compromise both modules simultaneously, she will still be unable to generate any signatures for the previous time periods. We provide an efficient intrusion-resilient signature scheme, provably secure in the random oracle model based on the strong RSA assumption.
We also discuss how such schemes can eliminate the need for certificate revocation in the case of on-line authentication.
Intrusion-Resilient Signatures: Generic Constructions, or Defeating Strong
Adversary with Minimal Assumptions. Conference on Security in
Communication Networks '02 (SCN'02), Amalfi, 2002.
This paper provides the first generic implementation, called gSiBIR, of intrusion-resilient signature schemes: it can be based on any ordinary signature scheme used as a black box. gSiBIR is also the first Signer-Base Intrusion-Resilient (SiBIR) scheme that is secure against a fully-adaptive adversary and does not require a random oracle. Our construction does require one-way (and cryptographic hash) functions.
We also provide a mechanism extending gSiBIR (or other similar tree-based constructions) to avoid the limit on the total number of periods (required by the previous SiBIR and other schemes, including many forward-secure ones). With this mechanism, gSiBIR becomes a generic intrusion-resilient signature scheme with no limit on the number of periods. The mechanism is based on explicit use of prefixless (self-delimiting) encodings and can also be applied to the previous generic forward-secure signature constructions, yielding modest but noticeable improvements to those schemes.
Cryptographic Tamper Evidence. 10th ACM Conference on Computer
and Communications Security (CCS), 2003.
This paper proposes a new notion: cryptographic tamper evidence. A tamper-evident signature scheme includes a mechanism which can detect a forger's activity and produce a proof that the secret keys have been exposed. In some contexts, it might even allow the system to continue functioning safely after the adversary has stolen the keys. At the very least, it can signal the key exposure.
More formally, a tamper-evident signature scheme includes an additional procedure Div which, given two signatures, can detect whether they were both generated by the legitimate signer, or one by the legitimate signer and the other by a forger. The surprising property is that this is possible even after the forger has exposed all the secrets in the system. In this case, it might, of course, be impossible to tell which signature was generated by the legitimate signer and which by the forger. But at least the fact of the tampering will be made evident.
Gene Itkis and Peng Xie.
Generalized Key-Evolving Signature Schemes, or How to Foil an Armed
Adversary. International Conference on Applied Cryptography
and Network Security (ACNS), 2003.
Key exposures, known or inconspicuous, are a real security threat, and recovery mechanisms from such exposures are required. For digital signatures such a recovery should ideally, when possible, include invalidation of the signatures issued with the compromised keys. We present new signature schemes with such recovery capabilities.
We consider two models for key exposures: full and partial reveal. In the first, a key exposure reveals all the secrets currently existing in the system. This model is suitable for the pessimistic scenario of inconspicuous exposures. The partial-reveal model permits the signer to conceal some information under exposure: e.g., under coercive exposure the signer is able to reveal a "fake" secret key.
We propose a definition of generalized key-evolving signature scheme, which unifies security against the coercive and inconspicuous key exposures with forward-security (these two problems were previously considered separately).
The new models also help us address repudiation problems inherent in monotone signatures, and achieve performance improvements.
Nenad Dedic, Gene Itkis, Leonid Reyzin, and Scott Russell.
Upper and Lower Bounds on Black-Box Steganography. Theory of
Cryptography Conference, 2005.
We study the limitations of steganography when the sender is not using any properties of the underlying channel beyond its entropy and the ability to sample from it. On the negative side, we show that the number of samples the sender must obtain from the channel is exponential in the rate of the stegosystem. On the positive side, we present the first secret-key stegosystem that essentially matches this lower bound regardless of the entropy of the underlying channel. Furthermore, for high-entropy channels, we present the first secret-key stegosystem that matches this lower bound statelessly (i.e., without requiring synchronized state between the sender and receiver).
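The black-box regime studied here can be made concrete with a toy rejection-sampling stegosystem: the encoder repeatedly draws samples from the channel until one hashes (under a shared key) to the message bit. The channel, key, and helper names below are illustrative assumptions, not the paper's constructions:

```python
import hashlib, random, string

def channel_sample(rng):
    """Black-box channel: the encoder can only draw samples.
    Toy high-entropy channel: a random 8-letter 'word'."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(8))

def doc_bit(doc: str, key: bytes) -> int:
    """Keyed bit extracted from a document (shared secret key)."""
    return hashlib.sha256(key + doc.encode()).digest()[0] & 1

def embed(bits, key, rng, max_tries=64):
    """Rejection sampling: draw covertexts until one carries the desired bit."""
    out = []
    for b in bits:
        for _ in range(max_tries):
            d = channel_sample(rng)
            if doc_bit(d, key) == b:
                break
        out.append(d)
    return out

def extract(docs, key):
    return [doc_bit(d, key) for d in docs]

key = b"shared secret"
msg = [1, 0, 1, 1, 0]
stego = embed(msg, key, random.Random(7))
assert extract(stego, key) == msg
```

Each transmitted document is an honest channel sample, which is the source of security; the cost is the number of samples drawn per hidden bit, exactly the quantity the paper's bounds concern.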
Gene Itkis, Robert McNerney Jr., and Scott Russell.
Intrusion-Resilient Secure Channels,
Applied Cryptography and Network Security (ACNS), 2005 (a more complete version appears in the Cryptology ePrint Archive).
This paper defines a new secure communication primitive called an Intrusion-Resilient Channel (IRC) whose goal is to provide maximum damage containment and recovery capability from security compromises. We construct such channels using (as a black box) existing public key cryptosystems via an intuitive generic construction. The simplicity of the construction belies the significant technical challenges encountered in proving it secure.
We also outline a general strategy for increasing security of two-party protocols: given a protocol proven secure against adversaries with restricted access to the protocol messages, we use IRC to remove some of these restrictions on the adversaries. Once again, our intuitive approach turns out to be non-trivial to prove. We use this strategy to improve security of a previous intrusion-resilient signature scheme.
Forward Security: Adaptive Cryptography – Time Evolution.
Invited chapter for the Handbook of Information Security, John Wiley and Sons.
We survey the development of forward security and relate it to other concepts and trends in modern cryptography.
Ordinary digital signatures have an inherent weakness: if the secret key is leaked, then all signatures, even the ones generated before the leak, are no longer trustworthy. Forward-secure digital signatures were proposed to address this weakness: they ensure that past signatures remain secure even if the current secret key is leaked. Similarly, for ordinary encryption, an adversary who successfully exposes a secret key is typically able to expose even old messages sent long before the exposure. Forward-secure encryption ensures that past messages are protected even if the current secret key is exposed.
We discuss forward security as a special case of key-evolving cryptography.
John Byers, Mei Chin Cheng, Jeffrey Considine, Gene Itkis, Alex Yeung.
Securing Bulk Content Almost for Free.
Computer Communications, special issue on Internet Security, 29(3), pp:280-290, 2006.
Content providers often consider the costs of security to be greater than the losses they might incur without it; many view "casual piracy" as their main concern. Our goal is to provide a low cost defense against such attacks while maintaining rigorous security guarantees.
Our defense is integrated with and leverages fast forward error-correcting codes, such as Tornado codes, which are widely used to facilitate reliable delivery of rich content. We tune one such family of codes -- while preserving their original desirable properties -- to guarantee that none of the original content can be recovered whenever a key subset of encoded packets is missing. Ultimately we encrypt only these key codewords (only 4% of all transmissions), making the security overhead negligible.
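The principle of encrypting only a small key subset of the transmission can be sketched in a drastically simplified form (this is not the paper's Tornado-code tuning; the hash-based keystream and all names are illustrative): the bulk is masked with a stream derived from a small random seed, and only that seed is "encrypted" under the user's key, so the unencrypted bulk is useless without it.

```python
import hashlib, os

def keystream(seed: bytes, i: int, n: int) -> bytes:
    # Toy hash-based pseudorandom block (illustration only, not a real cipher).
    return hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def protect(blocks, user_key: bytes):
    seed = os.urandom(16)                        # the small "key packet"
    bulk = [xor(b, keystream(seed, i, len(b)))   # bulk sent without encryption
            for i, b in enumerate(blocks)]
    locked_seed = xor(seed, keystream(user_key, 0, len(seed)))  # only this is locked
    return bulk, locked_seed

def recover(bulk, locked_seed, user_key: bytes):
    seed = xor(locked_seed, keystream(user_key, 0, len(locked_seed)))
    return [xor(c, keystream(seed, i, len(c))) for i, c in enumerate(bulk)]

blocks = [b"content block %d" % i for i in range(8)]
bulk, locked = protect(blocks, b"user key")
assert recover(bulk, locked, b"user key") == blocks
```

The security overhead is the cost of protecting one small packet regardless of content size, mirroring the paper's point that encrypting only the key codewords makes the overhead negligible.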
Gene Itkis and Leonid A. Levin. Flat Holonomies on Automata Networks. Invited paper, International Symposium on Theoretical Aspects of Computer Science (STACS), pp:23-49, 2006.
We consider asynchronous dynamic networks of identical finite automata (of size independent of the network size). A useful data structure on such networks is a partial orientation of the edges. It needs to be straight, i.e., have null holonomy (the holonomy of a cycle is the difference between the numbers of its up and down edges), and centered, i.e., have a unique node with no down edges. Using such an orientation, any feasible computational task can be efficiently implemented with self-stabilization and synchronization. There are (interdependent) self-stabilizing asynchronous finite-automata protocols assuring straight and centered orientation. Such protocols may vary in assorted efficiency parameters, and it is desirable to have each replaceable with any alternative responsible for a simple limited task. We describe an efficient reduction of any computational task to any such set of protocols compliant with our interface conditions.
Miscellaneous project reports and white papers