[RZ21] 
Franchised Quantum Money
ASIACRYPT 2021
[PDF]
Quantum computers, which harness the power of quantum physics, offer the potential for new kinds of security. For instance, classical bits can be copied,
but quantum bits, in general, cannot. As a result, there is interest in creating uncounterfeitable quantum money, in which a set of qubits can be spent as
money but cannot be duplicated. However, existing constructions of quantum money are limited: the verification key, which is used to verify that a banknote
was honestly generated, can also be used to create counterfeit banknotes. Recent attempts have tried to allow public-key verification, where any untrusted
user, even a would-be counterfeiter, can verify the banknotes. However, despite many attempts, a secure construction of public-key quantum money has remained
elusive.
Here we introduce franchised quantum money, a new notion that is weaker than public key quantum money but brings us closer to realizing it.
Franchised quantum money allows any untrusted user to verify the banknotes, and every user gets a unique secret verification key. Furthermore, we give a
construction of franchised quantum money and prove security assuming the quantum hardness of the short integer solution (SIS) problem. This is the first
construction of quantum money that allows an untrusted user to verify the banknotes, and which has a proof of security based on widespread assumptions.
It is therefore an important step toward public key quantum money.
@inproceedings{RZ21, author = {Bhaskar Roberts and Mark Zhandry}, title = {Franchised Quantum Money}, booktitle = {Proceedings of ASIACRYPT 2021}, misc = {Full version available at \url{}}, year = {2021} }

[Zha21b] 
Redeeming Reset Indifferentiability and Post-Quantum Groups
ASIACRYPT 2021
[PDF]
[ePrint]
Indifferentiability is used to analyze the security of constructions of idealized objects, such as random oracles or ideal ciphers. Reset indifferentiability is
a strengthening of plain indifferentiability which is applicable in far more scenarios, but is often considered too strong due to significant impossibility results.
Our main results are:
• Under weak reset indifferentiability, ideal ciphers imply (fixed-size) random oracles, and random oracle domain shrinkage is possible. We thus show that reset
indifferentiability is more useful than previously thought.
• We lift our analysis to the quantum setting, showing that ideal ciphers imply random oracles under quantum indifferentiability.
• Despite Shor's algorithm, we observe that generic groups are still meaningful quantumly, showing that they are quantumly (reset) indifferentiable from ideal
ciphers; combined with the above, cryptographic groups yield post-quantum symmetric-key cryptography. In particular, we obtain a plausible post-quantum random oracle
that is a subset-product followed by two modular reductions.
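The stated shape of that candidate (a subset product, then two modular reductions) can be sketched as follows. This is only an illustrative reading of the stated form; `n`, `p`, `q`, and the random elements `G` are placeholder parameters, not the parameters analyzed in the paper.

```python
import random

# A sketch matching the stated shape only: a subset product followed by two
# modular reductions. The constants below are illustrative placeholders.
random.seed(1)
n, p, q = 16, (1 << 61) - 1, 1 << 16   # p: a Mersenne prime; q: output range
G = [random.randrange(2, p) for _ in range(n)]  # public random group elements

def subset_product_hash(x):
    # x is a list of n bits; multiply the selected g_i, reduce mod p, then mod q.
    acc = 1
    for xi, gi in zip(x, G):
        if xi:
            acc = (acc * gi) % p   # first modular reduction (group operation)
    return acc % q                 # second modular reduction (output truncation)

print(subset_product_hash([1, 0] * 8))
```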
@inproceedings{Zha21b, author = {Mark Zhandry}, title = {Redeeming Reset Indifferentiability and Post-Quantum Groups}, booktitle = {Proceedings of ASIACRYPT 2021}, misc = {Full version available at \url{https://eprint.iacr.org/2021/288}}, year = {2021} }

[GZ21] 
Disappearing Cryptography in the Bounded Storage Model
TCC 2021
[PDF]
[ePrint]
In this work, we study disappearing cryptography in the bounded storage model. Here, a component of the transmission, say a ciphertext, a digital signature, or
even a program, is streamed bit by bit. The stream is too large for anyone to store in its entirety, meaning the transmission effectively disappears once the stream
stops.
We first propose the notion of online obfuscation, capturing the goal of disappearing programs in the bounded storage model. We give a negative result for VBB security
in this model, but propose candidate constructions for a weaker security goal, namely VGB security. We then demonstrate the utility of VGB online obfuscation, showing
that it can be used to generate disappearing ciphertexts and signatures. All of our applications are impossible in the standard model of cryptography, regardless of
the computational assumptions used.
@inproceedings{GZ21, author = {Jiaxin Guan and Mark Zhandry}, title = {Disappearing Cryptography in the Bounded Storage Model}, booktitle = {Proceedings of TCC 2021}, misc = {Full version available at \url{https://eprint.iacr.org/2021/406}}, year = {2021} }

[CMSZ21] 
Post-Quantum Succinct Arguments
FOCS 2021
[PDF]
[ePrint]
We prove that Kilian's four-message succinct argument system is post-quantum secure in the standard model when instantiated with any probabilistically checkable proof
and any collapsing hash function (which in turn exist based on the post-quantum hardness of Learning with Errors).
At the heart of our proof is a new "measure-and-repair" quantum rewinding procedure that achieves asymptotically optimal knowledge error.
@inproceedings{CMSZ21, author = {Alessandro Chiesa and Fermi Ma and Nicholas Spooner and Mark Zhandry}, title = {Post-Quantum Succinct Arguments}, booktitle = {Proceedings of FOCS 2021}, misc = {Full version available at \url{https://eprint.iacr.org/2021/334}}, year = {2021} }

[YZ21] 
Classical vs Quantum Random Oracles
EUROCRYPT 2021
[PDF]
[ePrint]
In this paper, we study the relationship between the security of cryptographic schemes in the random oracle model (ROM) and in the quantum random oracle model (QROM).
First, we introduce a notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol to prove the capability to quantumly access a
random oracle to a classical verifier. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO.
We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on them, we construct digital signature and public key
encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that
separate the ROM and QROM under a standard cryptographic assumption.
On the other hand, we give lifting theorems from security in the ROM to that in the QROM for certain types of cryptographic schemes and security notions.
For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, Full-Domain-Hash signatures, etc.
We also discuss applications of our lifting theorems to quantum query complexity.
@inproceedings{YZ21, author = {Takashi Yamakawa and Mark Zhandry}, title = {Classical vs Quantum Random Oracles}, booktitle = {Proceedings of EUROCRYPT 2021}, misc = {Full version available at \url{https://eprint.iacr.org/2020/1270}}, year = {2021} }

[CLZ21] 
Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering
[PDF]
[ePrint]
We show polynomialtime quantum algorithms for the following problems:
• The short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime,
and the bound on the infinity norm is set to half of the modulus minus a constant.
• The extrapolated dihedral coset problem (EDCP) with certain parameters.
• The learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded
uniform distributions and Laplace distributions.
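To make the first variant concrete, here is a hedged sketch of a solution checker for infinity-norm SIS. The dimensions, the modulus, and the helper name `is_sis_solution` are illustrative inventions; the paper's regime requires a very wide matrix and a polynomially large prime modulus.

```python
# Checker for the infinity-norm SIS variant: given a public matrix A over Z_q,
# a valid solution x is a nonzero integer vector with A·x ≡ 0 (mod q) and
# ||x||_∞ ≤ q/2 - c. Toy dimensions only.
def is_sis_solution(A, x, q_mod, c):
    if all(v == 0 for v in x):
        return False                       # solution must be nonzero
    if max(abs(v) for v in x) > q_mod // 2 - c:
        return False                       # infinity-norm bound
    return all(sum(A[i][j] * x[j] for j in range(len(x))) % q_mod == 0
               for i in range(len(A)))     # A·x ≡ 0 (mod q)

q_mod, c = 13, 1
A = [[1, 2, 3, 7]]                 # a single equation mod 13
x = [3, 2, 2, 0]                   # 3 + 4 + 6 = 13 ≡ 0 (mod 13)
print(is_sis_solution(A, x, q_mod, c))  # True
```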
The SIS, EDCP, and LWE problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve
are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were
known for those variants.
Our algorithms for variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving
LWE given LWE-like quantum states. Our main contributions are introducing a filtering technique and solving LWE given LWE-like quantum states with
interesting parameters.
@misc{CLZ21, author = {Yilei Chen and Qipeng Liu and Mark Zhandry}, title = {Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering}, misc = {Full version available at \url{https://eprint.iacr.org/2021/1093}}, year = {2021} }

[ALL^{+}21] 
New Approaches for Quantum Copy-Protection
CRYPTO 2021
[PDF]
[ePrint]
Quantum copy-protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy-protection would be
immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal by giving the
following results:
• We show how to copy-protect any program that cannot be learned from its input/output behavior, relative to a classical oracle. This improves on
Aaronson [CCC '09], which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes,
we obtain a heuristic construction of copy-protection.
• We show, roughly, that any program which can be watermarked can be copy-detected, a weaker version of copy-protection that does not prevent copying
but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public key
quantum money. Our construction is general, applicable to many recent watermarking schemes.
@inproceedings{ALLZZ21, author = {Scott Aaronson and Qipeng Liu and Jiahui Liu and Mark Zhandry and Ruizhe Zhang}, title = {New Approaches for Quantum Copy-Protection}, booktitle = {Proceedings of CRYPTO 2021}, misc = {Full version available at \url{https://eprint.iacr.org/2020/1339}}, year = {2021} }

[CLLZ21] 
Hidden Cosets and Applications to Unclonable Cryptography
CRYPTO 2021
[PDF]
[ePrint]
In 2012, Aaronson and Christiano introduced the idea of hidden subspace states to build public-key quantum money [STOC '12]. Since then, this idea has
been applied to realize several other cryptographic primitives which enjoy some form of unclonability. In this work, we study a generalization of hidden
subspace states to hidden coset states. This notion was considered independently by Vidick and Zhang [Eurocrypt '21], in the context of proofs of quantum
knowledge from quantum money schemes. We explore unclonable properties of coset states and several applications:
• We show that, assuming indistinguishability obfuscation (iO), hidden coset states possess a certain direct product hardness property, which immediately
implies a tokenized signature scheme in the plain model. Previously, a tokenized signature scheme was known only relative to an oracle, from a work of Ben-David
and Sattath [QCrypt '17].
• Combining a tokenized signature scheme with extractable witness encryption, we give a construction of an unclonable decryption scheme in the plain model.
The latter primitive was recently proposed by Georgiou and Zhandry [ePrint '20], who gave a construction relative to a classical oracle.
• We conjecture that coset states satisfy a certain natural (information-theoretic) monogamy-of-entanglement property. Assuming this conjecture is true, we
remove the requirement for extractable witness encryption in our unclonable decryption construction, by relying instead on compute-and-compare obfuscation for
the class of unpredictable distributions. As potential evidence in support of the monogamy conjecture, we prove a weaker version of this monogamy property, which
we believe will still be of independent interest.
• Finally, we give a construction of a copy-protection scheme for pseudorandom functions (PRFs) in the plain model. Our scheme is secure either assuming iO,
OWFs, and extractable witness encryption, or assuming iO, OWFs, compute-and-compare obfuscation for the class of unpredictable distributions, and the conjectured
monogamy property mentioned above. This is the first example of a copy-protection scheme with provable security in the plain model for a class of functions that
is not evasive.
@inproceedings{CLLZ21, author = {Andrea Coladangelo and Qipeng Liu and Jiahui Liu and Mark Zhandry}, title = {Hidden Cosets and Applications to Unclonable Cryptography}, booktitle = {Proceedings of CRYPTO 2021}, misc = {Full version available at \url{https://eprint.iacr.org/2021/946}}, year = {2021} }

[Zha21a] 
White Box Traitor Tracing
CRYPTO 2021
[PDF]
[ePrint]
Traitor tracing aims to identify the source of leaked decryption keys. Since the "traitor" can try to hide their key within obfuscated code in order to evade
tracing, the tracing algorithm should work for general, potentially obfuscated, decoder programs. In the setting of such general decoder programs, prior
work uses black box tracing: the tracing algorithm ignores the implementation of the decoder, and instead traces just by making queries to the decoder
and observing the outputs.
We observe that, in some settings, such black box tracing leads to consistency and user privacy issues. On the other hand, these issues do not appear inherent
to white box tracing, where the tracing algorithm actually inspects the decoder implementation. We therefore develop new white box traitor tracing schemes
providing consistency and/or privacy. Our schemes can be instantiated under various assumptions ranging from public key encryption and NIZKs to indistinguishability
obfuscation, with different tradeoffs. To the best of our knowledge, ours is the first work to consider white box tracing in the general decoder setting.
@inproceedings{Zha21a, author = {Mark Zhandry}, title = {White Box Traitor Tracing}, booktitle = {Proceedings of CRYPTO 2021}, misc = {Full version available at \url{https://eprint.iacr.org/2021/891}}, year = {2021} }

[ZZ21] 
The Relationship Between Idealized Models Under Computationally Bounded Adversaries
By Mark Zhandry and Cong Zhang
[PDF]
[ePrint]
The random oracle, generic group, and generic bilinear map models (ROM, GGM, GBM, respectively) are fundamental heuristics used to justify new computational
assumptions and prove the security of efficient cryptosystems. While known to be invalid in some contrived settings, the heuristics generally seem reasonable
for realworld applications.
In this work, we ask: which heuristics are closer to reality? Or conversely, which heuristics are a larger leap? We answer this question through the
framework of computational indifferentiability, showing that the ROM is a strictly "milder" heuristic than the GGM, which in turn is strictly milder than the
GBM. While this may seem like the expected outcome, we explain why it does not follow from prior works and is not the a priori obvious conclusion. In order to
prove our results, we develop new ideas for proving computational indifferentiability separations.
@misc{ZZ21, author = {Mark Zhandry and Cong Zhang}, title = {The Relationship Between Idealized Models Under Computationally Bounded Adversaries}, misc = {Full version available at \url{https://eprint.iacr.org/2021/240}}, year = {2021} }

[KZ20] 
Towards Non-Interactive Witness Hiding
TCC 2020
[PDF]
[ePrint]
Witness hiding proofs require that the verifier cannot find a witness after seeing a proof. The exact round complexity needed for witness hiding
proofs has so far remained an open question. In this work, we provide compelling evidence that witness hiding proofs are achievable non-interactively
for wide classes of languages. We use non-interactive witness-indistinguishable proofs as the basis for all of our protocols. We give four schemes
in different settings under different assumptions:
• A universal non-interactive proof that is witness hiding as long as any proof system, possibly
an inefficient and/or non-uniform one, is witness hiding, has a known bound on verifier runtime, and has short proofs of soundness.
• A non-uniform non-interactive protocol, justified under a worst-case complexity assumption, that is witness hiding and efficient, but may not
have short proofs of soundness.
• A new security analysis of the two-message argument of Pass [Crypto 2003], showing witness hiding for any non-uniformly hard distribution.
We propose a heuristic approach to removing the first message, yielding a non-interactive argument.
• A witness hiding non-interactive proof system for languages with unique witnesses, assuming the non-existence of a weak form of witness encryption
for any language in NP ∩ coNP.
@inproceedings{KZ20, author = {Benjamin Kuykendall and Mark Zhandry}, title = {Towards Non-Interactive Witness Hiding}, booktitle = {Proceedings of TCC 2020}, misc = {Full version available at \url{https://eprint.iacr.org/2020/1205}}, year = {2020} }

[Zha20b] 
Schrödinger's Pirate: How To Trace a Quantum Decoder
TCC 2020
[PDF]
[ePrint]
[slides]
We explore the problem of traitor tracing where the pirate decoder can contain a quantum state. Our main results include:
• We show how to overcome numerous definitional challenges to give a meaningful notion of tracing for quantum decoders.
• We give negative results, demonstrating barriers to adapting classical tracing algorithms to the quantum decoder setting.
• On the other hand, we show how to trace quantum decoders in the setting of (public key) private linear broadcast encryption,
capturing a common approach to traitor tracing.
@inproceedings{Zha20b, author = {Mark Zhandry}, title = {Schrödinger's Pirate: How To Trace a Quantum Decoder}, booktitle = {Proceedings of TCC 2020}, misc = {Full version available at \url{https://eprint.iacr.org/2020/1191}}, year = {2020} }

[ZZ20] 
Indifferentiability for Public Key Cryptosystems
By Mark Zhandry and Cong Zhang
CRYPTO 2020
[PDF]
[ePrint]
We initiate the study of indifferentiability for public key encryption and other public key primitives. Our main results are definitions and
constructions of public key cryptosystems that are indifferentiable from ideal cryptosystems, in the random oracle model. Cryptosystems include:
• Public key encryption;
• Digital signatures;
• Non-interactive key agreement.
Our schemes are based on standard public key assumptions. By being indifferentiable from an ideal object, our schemes satisfy any security property
that can be represented as a single-stage game, and they can be composed to operate in higher-level protocols.
@inproceedings{ZZ20, author = {Mark Zhandry and Cong Zhang}, title = {Indifferentiability for Public Key Cryptosystems}, booktitle = {Proceedings of CRYPTO 2020}, misc = {Full version available at \url{https://eprint.iacr.org/2019/370}}, year = {2020} }

[Zha20a] 
New Techniques for Traitor Tracing: Size N^{1/3} and More from Pairings
CRYPTO 2020
[PDF]
[ePrint]
[slides]
The best existing pairing-based traitor tracing schemes have O(√N)-sized parameters, a bound that has stood since 2006. This intuitively seems
to be consistent with the fact that pairings allow for degree-2 computations, yielding a quadratic compression.
In this work, we show that this intuition is false by building a tracing scheme from pairings with O(∛N)-sized parameters. We additionally give
schemes with a variety of parameter-size tradeoffs, including a scheme with constant-size ciphertexts and public keys (but linear-sized secret keys).
All of our schemes make black-box use of the pairings. We obtain our schemes by developing a number of new traitor tracing techniques, giving the first
significant parameter improvements in pairing-based traitor tracing in over a decade.
@inproceedings{Zha20a, author = {Mark Zhandry}, title = {New Techniques for Traitor Tracing: Size N^{1/3} and More from Pairings}, booktitle = {Proceedings of CRYPTO 2020}, misc = {Full version available at \url{https://eprint.iacr.org/2020/954}}, year = {2020} }

[GZ20] 
Unclonable Decryption Keys
[PDF]
[ePrint]
We initiate the study of encryption schemes where the decryption keys are unclonable quantum objects, which we call single decryptor encryption.
We give a number of initial results in this area:
• We formalize the notion of single decryptor encryption.
• We show that secret-key single decryptor encryption is possible unconditionally, in the setting where a limited number of ciphertexts are
given. However, given an encryption oracle, we show that unconditional security is impossible.
• We show how to use a very recent notion of one-shot signatures, together with sufficiently powerful witness encryption, to achieve public
key single decryptor encryption.
• We demonstrate several extensions of our scheme, achieving a number of interesting properties that are not possible classically.
@misc{GZ20, author = {Marios Georgiou and Mark Zhandry}, title = {Unclonable Decryption Keys}, misc = {Full version available at \url{https://eprint.iacr.org/2020/877}}, year = {2020} }

[LSZ20] 
Quantum Immune One-Time Memories
[PDF]
[ePrint]
One-time memories (OTMs) are the hardware version of oblivious transfer, and are useful for constructing objects that are impossible with software alone,
such as one-time programs. In this work, we consider attacks on OTMs where a quantum adversary can leverage his physical access to the memory to mount
quantum "superposition attacks" against the memory. Such attacks result in significantly weakened OTMs. For example, in the application to one-time
programs, it may appear that such an adversary can always “quantumize” the classical protocol by running it on a superposition of inputs, and therefore
learn superpositions of outputs of the protocol.
Perhaps surprisingly, we show that this intuition is false: we construct one-time programs from quantum-accessible one-time memories where the view of
an adversary, despite making quantum queries, can be simulated by making only classical queries to the ideal functionality. At the heart of our work is
a method of immunizing one-time memories against superposition attacks.
@misc{LSZ20, author = {Qipeng Liu and Amit Sahai and Mark Zhandry}, title = {Quantum Immune One-Time Memories}, misc = {Full version available at \url{https://eprint.iacr.org/2020/871}}, year = {2020} }

[AGKZ20] 
One-shot Signatures and Applications to Hybrid Quantum/Classical Authentication
STOC 2020
[PDF]
[ePrint]
We define the notion of one-shot signatures, which are signatures where any secret key can be used to sign only a single message before self-destructing.
While such signatures are of course impossible classically, we construct one-shot signatures using quantum no-cloning. In particular, we show that such
signatures exist relative to a classical oracle, which we can then heuristically obfuscate using known indistinguishability obfuscation schemes.
We show that one-shot signatures have numerous applications for hybrid quantum/classical cryptographic tasks, where all communication is required to be classical,
but local quantum operations are allowed. Applications include one-time signature tokens, quantum money with classical communication, decentralized blockchain-less
cryptocurrency, signature schemes with unclonable secret keys, non-interactive certifiable min-entropy, and more. We thus position one-shot signatures as a powerful
new building block for novel quantum cryptographic protocols.
@inproceedings{AGKZ20, author = {Ryan Amos and Marios Georgiou and Aggelos Kiayias and Mark Zhandry}, title = {One-shot Signatures and Applications to Hybrid Quantum/Classical Authentication}, booktitle = {Proceedings of STOC 2020}, misc = {Full version available at \url{https://eprint.iacr.org/2020/107}}, year = {2020} }

[BIJ^{+}20] 
Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption
ITCS 2020
[PDF]
[ePrint]
An affine determinant program ADP: {0,1}^{n} → {0,1} is specified by a tuple (A,B_{1},…,B_{n}) of square matrices over F_{q}
and a function Eval: F_{q} → {0,1}, and is evaluated on x ∈ {0,1}^{n} by computing Eval(det(A + ∑_{i} x_{i}B_{i})).
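The evaluation rule above can be illustrated with a toy example. The 2×2 matrices, the tiny modulus, and the particular `Eval` function below are arbitrary choices for demonstration, not parameters from the construction.

```python
# Hypothetical toy ADP over F_q with 2x2 matrices; the real framework uses
# larger matrices and carefully chosen Eval functions.
q = 7  # a small prime modulus (illustrative only)

def det2(M):
    # Determinant of a 2x2 matrix over F_q.
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q

def adp_eval(A, Bs, x, eval_fn):
    # Evaluate Eval(det(A + sum_i x_i * B_i)) on input bits x.
    M = [[(A[r][c] + sum(xi * B[r][c] for xi, B in zip(x, Bs))) % q
          for c in range(2)] for r in range(2)]
    return eval_fn(det2(M))

A  = [[1, 2], [3, 4]]
B1 = [[0, 1], [1, 0]]
B2 = [[2, 0], [0, 2]]
is_nonzero = lambda d: int(d != 0)  # one possible Eval: F_q -> {0,1}

print(adp_eval(A, [B1, B2], [0, 1], is_nonzero))
```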
In this work, we suggest ADPs as a new framework for building general-purpose obfuscation and witness encryption. We provide evidence to suggest that constructions
following our ADP-based framework may one day yield secure, practically feasible obfuscation.
As a proof of concept, we give a candidate ADP-based construction of indistinguishability obfuscation for all circuits, along with a simple witness encryption candidate.
We provide cryptanalysis demonstrating that our schemes resist several potential attacks, and leave further cryptanalysis to future work. Lastly, we explore practically
feasible applications of our witness encryption candidate, such as public-key encryption with near-optimal key generation.
@inproceedings{BIJMSZ20, author = {James Bartusek and Yuval Ishai and Aayush Jain and Fermi Ma and Amit Sahai and Mark Zhandry}, title = {Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption}, booktitle = {Proceedings of ITCS 2020}, misc = {Full version available at \url{https://eprint.iacr.org/2020/889}}, year = {2020} }

[Zha19c] 
How to Record Quantum Queries, and Applications to Quantum Indifferentiability
CRYPTO 2019, QCrypt 2019 (Invited)
[PDF]
[ePrint]
[slides]
The quantum random oracle model (QROM) has become the standard model in which to prove the post-quantum security of random-oracle-based constructions. Unfortunately,
none of the known proof techniques allow the reduction to record information about the adversary's queries, a crucial feature of many classical ROM proofs, including all
proofs of indifferentiability for hash function domain extension. In this work, we give a new QROM proof technique that overcomes this "recording barrier". Our central
observation is that when viewing the adversary's query and the oracle itself in the Fourier domain, an oracle query switches from writing to the adversary's space to
writing to the oracle itself. This allows a reduction to simulate the oracle by simply recording information about the adversary's query in the Fourier domain.
We then use this new technique to show the indifferentiability of the Merkle-Damgård domain extender for hash functions. Given the threat posed by quantum computers and
the push toward quantum-resistant cryptosystems, our work represents an important tool for efficient post-quantum cryptosystems.
@inproceedings{Zha19c, author = {Mark Zhandry}, title = {How to Record Quantum Queries, and Applications to Quantum Indifferentiability}, booktitle = {Proceedings of QCrypt 2019}, misc = {Full version available at \url{https://eprint.iacr.org/2018/276}}, year = {2019} }

[LZ19b] 
Revisiting Post-Quantum Fiat-Shamir
CRYPTO 2019
[PDF]
[ePrint]
The Fiat-Shamir transformation is a useful approach to building non-interactive arguments (of knowledge) in the random oracle model.
Unfortunately, existing proof techniques are incapable of proving the security of Fiat-Shamir in the quantum setting. The problem stems
from (1) the difficulty of quantum rewinding, and (2) the inability of current techniques to adaptively program random oracles in the
quantum setting.
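As a reminder of what the transformation does in the classical setting, here is a minimal sketch of Fiat-Shamir applied to a Schnorr-style identification protocol. The toy parameters (p = 23, subgroup order q = 11, g = 2) are chosen purely for illustration; real instantiations use cryptographically sized groups or, in the post-quantum setting, lattices.

```python
import hashlib
import random

# Toy Schnorr identification compiled via Fiat-Shamir into a signature.
p, q, g = 23, 11, 2  # g = 2 generates a subgroup of order 11 mod 23

def H(a, m):
    # The random oracle: the challenge is a hash of the commitment and message,
    # replacing the verifier's random challenge.
    return int.from_bytes(hashlib.sha256(f"{a}|{m}".encode()).digest(), "big") % q

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def sign(sk, m):
    r = random.randrange(q)          # prover's ephemeral randomness
    a = pow(g, r, p)                 # commitment
    c = H(a, m)                      # Fiat-Shamir: challenge from hash
    z = (r + c * sk) % q             # response
    return a, z

def verify(pk, m, sig):
    a, z = sig
    c = H(a, m)                      # recompute the challenge
    return pow(g, z, p) == (a * pow(pk, c, p)) % p

sk, pk = keygen()
print(verify(pk, "hello", sign(sk, "hello")))  # True
```

Correctness follows from g^z = g^(r + c·sk) = a · pk^c (mod p); security classically rests on programming and observing the oracle's answers, which is exactly what becomes problematic with quantum queries.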
In this work, we show how to overcome the limitations above in many settings. In particular, we give mild conditions under which Fiat-Shamir
is secure in the quantum setting. As an application, we show that existing lattice signatures based on Fiat-Shamir are secure without
any modifications.
@inproceedings{LZ19b, author = {Qipeng Liu and Mark Zhandry}, title = {Revisiting Post-Quantum Fiat-Shamir}, booktitle = {Proceedings of CRYPTO 2019}, misc = {Full version available at \url{https://eprint.iacr.org/2019/262}}, year = {2019} }

[BMZ19] 
The Distinction Between Fixed and Random Generators in Group-Based Assumptions
By James Bartusek, Fermi Ma, and Mark Zhandry
CRYPTO 2019
[PDF]
[ePrint]
There is surprisingly little consensus on the precise role of the generator g in groupbased assumptions such as DDH. Some works
consider g to be a fixed part of the group description, while others take it to be random. We study this subtle distinction from
a number of angles.
• In the generic group model, we demonstrate the plausibility of groups in which random-generator DDH (resp. CDH) is hard
but fixed-generator DDH (resp. CDH) is easy. We observe that such groups have interesting cryptographic applications.
• We find that seemingly tight generic lower bounds for the Discrete Log and CDH problems with preprocessing (Corrigan-Gibbs
and Kogan, Eurocrypt 2018) are not tight in the sub-constant success probability regime if the generator is random. We resolve this
by proving tight lower bounds for the random-generator variants; our results formalize the intuition that using a random generator
will reduce the effectiveness of preprocessing attacks.
• We observe that DDH-like assumptions in which exponents are drawn from low-entropy distributions are particularly sensitive
to the fixed- vs. random-generator distinction. Most notably, we discover that the Strong Power DDH assumption of Komargodski and
Yogev (Eurocrypt 2018) used for non-malleable point obfuscation is in fact false precisely because it requires
a fixed generator. In response, we formulate an alternative fixed-generator assumption that suffices for a new construction of
non-malleable point obfuscation, and we prove the assumption holds in the generic group model. We also give a generic group proof
for the security of fixed-generator, low-entropy DDH (Canetti, Crypto 1997).
@inproceedings{BMZ19, author = {James Bartusek and Fermi Ma and Mark Zhandry}, title = {The Distinction Between Fixed and Random Generators in Group-Based Assumptions}, booktitle = {Proceedings of CRYPTO 2019}, misc = {Full version available at \url{https://eprint.iacr.org/2019/202}}, year = {2019} }

[Zha19b] 
Quantum Lightning Never Strikes the Same State Twice
EUROCRYPT 2019 (Best Paper Award)
[PDF]
[ePrint]
[slides]
Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work,
we investigate quantum lightning, a formalization of "collision-free quantum money" defined by Lutomirski et al. [ICS '10], where no-cloning holds even when the
adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing the following results:
• We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, ranging from generating random strings with a proof of
entropy to completely decentralized cryptocurrency without a blockchain, where transactions are instant and local.
• We give win-win results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of
security, or they yield quantum money or lightning. Given the difficulty in constructing public key quantum money, this gives some indication that natural schemes do attain
strong security guarantees.
• We construct quantum lightning under the assumed multi-collision resistance of random degree-2 systems of polynomials. Our construction is inspired by our win-win
result for hash functions, and yields the first plausible standard model instantiation of a non-collapsing collision resistant hash function. This improves on a
result of Unruh [Eurocrypt '16] that requires a quantum oracle.
• We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC '12] with indistinguishability obfuscation that is secure against quantum
computers yields a secure quantum money scheme. This construction can be seen as an instance of our win-win result for signatures, giving the first separation between two
security notions for signatures from the literature.
Thus, we provide the first constructions of public key quantum money from several cryptographic assumptions. Along the way, we develop several new techniques including a
new precise variant of the nocloning theorem.
@inproceedings{Zha19b, author = {Mark Zhandry}, title = {Quantum Lightning Never Strikes the Same State Twice}, booktitle = {Proceedings of EUROCRYPT 2019}, misc = {Full version available at \url{https://eprint.iacr.org/2017/1080}}, year = {2019} }

[BLMZ19] 
New Techniques for Obfuscating Conjunctions
EUROCRYPT 2019
[PDF]
[ePrint]
A conjunction is a function $f(x_1,\dots,x_n) = \bigwedge_{i \in S} l_i$ where $S \subseteq [n]$ and each $l_i$ is $x_i$ or $\neg x_i$.
Bishop et al. (CRYPTO 2018) recently proposed obfuscating conjunctions by embedding them in the error positions of a noisy Reed-Solomon
codeword and placing the codeword in a group exponent. They prove distributional virtual black box (VBB) security in the generic group
model for random conjunctions where $|S| \geq 0.226n$. While conjunction obfuscation is known from LWE, these constructions rely on
substantial technical machinery.
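As a concrete illustration of the conjunction definition above, here is a minimal evaluator (the helper name and representation are illustrative, not from the paper):

```python
def eval_conjunction(x, literals):
    """Evaluate f(x) = AND over i in S of l_i, where `literals` maps each
    index i in S to True if l_i = x_i, or False if l_i = not x_i."""
    return all(x[i] == sign for i, sign in literals.items())

# f(x_1,...,x_4) = x_1 AND (not x_3), i.e. S = {1, 3} (0-indexed here)
f = {0: True, 2: False}
assert eval_conjunction([True, False, False, True], f)      # satisfies f
assert not eval_conjunction([True, False, True, True], f)   # x_3 = 1 violates not x_3
```

The obfuscation task is to hide which indices are in S and the literal signs, while preserving this input/output behavior.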
In this work, we conduct an extensive study of simple conjunction obfuscation techniques.
• We abstract the Bishop et al. scheme to obtain an equivalent yet more efficient "dual" scheme that handles conjunctions over exponential-size
alphabets. We give a significantly simpler proof of generic group security, which we combine with a novel combinatorial argument to
obtain distributional VBB security for $|S|$ of any size.
• If we replace the Reed-Solomon code with a random binary linear code, we can prove security from standard LPN and avoid encoding
in a group. This addresses an open problem posed by Bishop et al.~to prove security of this simple approach in the standard model.
• We give a new construction that achieves information-theoretic distributional VBB security and weak functionality preservation
for $|S| \geq n - n^\delta$ and $\delta < 1$. Assuming discrete log and $\delta < 1/2$, we satisfy a stronger notion of functionality
preservation for computationally bounded adversaries while still achieving information theoretic security.
@inproceedings{BLMZ19, author = {James Bartusek and Tancrède Lepoint and Fermi Ma and Mark Zhandry}, title = {New Techniques for Obfuscating Conjunctions}, booktitle = {Proceedings of EUROCRYPT 2019}, misc = {Full version available at \url{https://eprint.iacr.org/2018/936}}, year = {2019} }

[Zha19a] 
On ELFs, Deterministic Encryption, and CorrelatedInput Security
EUROCRYPT 2019
[PDF]
[ePrint]
[slides]

[GZ19] 
Simple Schemes in the Bounded Storage Model
EUROCRYPT 2019
[PDF]
The bounded storage model promises unconditional security proofs against computationally unbounded adversaries, so long as the
adversary's space is bounded. In this work, we develop simple new constructions of two-party key agreement, bit commitment, and
oblivious transfer in this model. In addition to simplicity, our constructions have several advantages over prior work, including
an improved number of rounds and enhanced correctness. Our schemes are based on Raz's lower bound for learning parities.
@inproceedings{GZ19, author = {Jiaxin Guan and Mark Zhandry}, title = {Simple Schemes in the Bounded Storage Model}, booktitle = {Proceedings of EUROCRYPT 2019}, misc = {Full version available at \url{}}, year = {2019} }

[LZ19a] 
On Finding Quantum Multicollisions
EUROCRYPT 2019
[PDF]
[arXiv]
A k-collision for a compressing hash function H is a set of k distinct inputs that all map to the same output. In this work, we show that
for any constant k, Θ(N^{(1/2)(1-1/(2^k-1))}) quantum queries are both necessary and sufficient to achieve a k-collision with
constant probability. This improves on the best prior upper bound (Hosoyamada et al., ASIACRYPT 2017) and provides the first non-trivial
lower bound, completely resolving the problem.
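The exponent in the bound above can be sanity-checked with exact arithmetic; a small sketch (the helper name is illustrative):

```python
from fractions import Fraction

def multicollision_exponent(k):
    """Exponent e such that Θ(N^e) quantum queries find a k-collision:
    e = (1/2) * (1 - 1/(2^k - 1))."""
    return Fraction(1, 2) * (1 - Fraction(1, 2**k - 1))

assert multicollision_exponent(2) == Fraction(1, 3)  # the familiar N^{1/3} bound for ordinary collisions
assert multicollision_exponent(3) == Fraction(3, 7)
# As k grows, the exponent approaches 1/2, the cost of generic quantum search.
```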
@inproceedings{LZ19a, author = {Qipeng Liu and Mark Zhandry}, title = {On Finding Quantum Multicollisions}, booktitle = {Proceedings of EUROCRYPT 2019}, misc = {Full version available at \url{https://arxiv.org/abs/1811.05385}}, year = {2019} }

[CLO^{+}18] 
Parameter-Hiding Order Revealing Encryption
ASIACRYPT 2018
[PDF]
[ePrint]
Order-revealing encryption (ORE) is a popular primitive for outsourcing encrypted databases, as it allows for efficiently performing range queries over encrypted data.
Unfortunately, a series of works, starting with Naveed et al. (CCS 2015), have shown that when the adversary has a good estimate of the distribution of the data, ORE provides
little protection. In this work, we consider the case that the database entries are drawn identically and independently from a distribution of known shape, but for which the
mean and variance are not (and thus the attacks of Naveed et al. do not apply). We define a new notion of security for ORE, called parameter-hiding ORE, which maintains the
secrecy of these parameters. We give a construction of ORE satisfying our new definition from bilinear maps.
@inproceedings{CLOZZ18, author = {David Cash and Feng-Hao Liu and Adam O'Neill and Mark Zhandry and Cong Zhang}, title = {Parameter-Hiding Order Revealing Encryption}, booktitle = {Proceedings of ASIACRYPT 2018}, misc = {Full version available at \url{https://eprint.iacr.org/2018/698}}, year = {2018} }

[MZ18] 
New Multilinear Maps from CLT13 with Provable Security Against Zeroizing Attacks
TCC 2018
[PDF]
[ePrint]
We devise the first weak multilinear map model for CLT13 multilinear maps (Coron et al., CRYPTO 2013) that captures all known classical polynomialtime attacks on the maps.
We then show important applications of our model. First, we show that in our model, several existing obfuscation and order-revealing encryption schemes, when instantiated with
CLT13 maps, are secure against known attacks under a mild algebraic complexity assumption used in prior work. These are schemes that are actually being implemented for
experimentation. However, until our work, they had no rigorous justification for security.
Next, we turn to building constant degree multilinear maps on top of CLT13 for which there are no known attacks. Precisely, we prove that our scheme achieves the ideal security
notion for multilinear maps in our weak CLT13 model, under a much stronger variant of the algebraic complexity assumption used above. Our multilinear maps do not achieve the
full functionality of multilinear maps as envisioned by Boneh and Silverberg (Contemporary Mathematics, 2003), but do allow for rerandomization and for encoding arbitrary
plaintext elements.
@inproceedings{MZ18, author = {Fermi Ma and Mark Zhandry}, title = {New Multilinear Maps from CLT13 with Provable Security Against Zeroizing Attacks}, booktitle = {Proceedings of TCC 2018}, misc = {Full version available at \url{https://eprint.iacr.org/2017/946}}, year = {2018} }

[ZZ18] 
Impossibility of Order-Revealing Encryption in Idealized Models
By Mark Zhandry and Cong Zhang
TCC 2018
[PDF]
[ePrint]
An Order-Revealing Encryption (ORE) scheme gives a public procedure by which two ciphertexts can be compared to reveal the order of their underlying plaintexts. The ideal
security notion for ORE is that only the order is revealed — anything else, such as the distance between plaintexts, is hidden. The only known constructions of ORE
achieving such ideal security are based on cryptographic multilinear maps, and are currently too impractical for real-world applications. In this work, we give evidence
that building ORE from weaker tools may be hard. Indeed, we show black-box separations between ORE and most symmetric-key primitives, as well as public key encryption and
anything else implied by generic groups in a black-box way. Thus, any construction of ORE must either (1) achieve weaker notions of security, (2) be based on more complicated
cryptographic tools, or (3) require non-black-box techniques. This suggests that any ORE achieving ideal security will likely be somewhat inefficient.
Central to our proof is a proof of impossibility for something we call information-theoretic ORE, which has connections to tournament graphs and a theorem of Erdős.
This impossibility proof will be useful for proving other black-box separations for ORE.
@inproceedings{ZZ18, author = {Mark Zhandry and Cong Zhang}, title = {Impossibility of Order-Revealing Encryption in Idealized Models}, booktitle = {Proceedings of TCC 2018}, misc = {Full version available at \url{https://eprint.iacr.org/2017/1001}}, year = {2018} }

[BGMZ18] 
Preventing Zeroizing Attacks on GGH15
TCC 2018
[PDF]
[ePrint]
The GGH15 multilinear maps have served as the foundation for a number of cutting-edge cryptographic proposals. Unfortunately, many schemes built on GGH15 have been explicitly
broken by so-called "zeroizing attacks," which exploit leakage from honest zero-test queries. The precise settings in which zeroizing attacks are possible have remained unclear.
Most notably, none of the current indistinguishability obfuscation (iO) candidates from GGH15 have any formal security guarantees against zeroizing attacks.
In this work, we demonstrate that all known zeroizing attacks on GGH15 implicitly construct algebraic relations between the results of zero-testing and the encoded plaintext
elements. We then propose a "GGH15 zeroizing model" as a new general framework which greatly generalizes known attacks.
Our second contribution is to describe a new GGH15 variant, which we formally analyze in our GGH15 zeroizing model. We then construct a new iO candidate using our multilinear
map, which we prove secure in the GGH15 zeroizing model. This implies resistance to all known zeroizing strategies. The proof relies on the Branching Program Un-Annihilatability
(BPUA) Assumption of Garg et al. [TCC 16-B] (which is implied by PRFs in NC^1 secure against P/poly) and the complexity-theoretic p-Bounded Speedup Hypothesis of Miles et al.
[ePrint 14] (a strengthening of the Exponential Time Hypothesis).
@inproceedings{BGMZ18, author = {James Bartusek and Jiaxin Guan and Fermi Ma and Mark Zhandry}, title = {Preventing Zeroizing Attacks on GGH15}, booktitle = {Proceedings of TCC 2018}, misc = {Full version available at \url{https://eprint.iacr.org/2018/511}}, year = {2018} }

[BGK^{+}18] 
Multiparty Non-Interactive Key Exchange and More From Isogenies on Elliptic Curves
MATHCRYPT 2018
[PDF]
[ePrint]
We describe a framework for constructing an efficient non-interactive key exchange (NIKE) protocol for n parties for any n ≥ 2. Our approach is based on the problem of computing
isogenies between isogenous elliptic curves, which is believed to be difficult. We do not obtain a working protocol because of a missing step that is currently an open problem.
What we need to complete our protocol is an efficient algorithm that takes as input an abelian variety presented as a product of isogenous elliptic curves, and outputs an isomorphism
invariant of the abelian variety.
Our framework builds a cryptographic invariant map, which is a new primitive closely related to a cryptographic multilinear map, but whose range does not necessarily have a group
structure. Nevertheless, we show that a cryptographic invariant map can be used to build several cryptographic primitives, including NIKE, that were previously constructed from
multilinear maps and indistinguishability obfuscation.
@inproceedings{BGKLSSTZ18, author = {Dan Boneh and Darren Glass and Daniel Krashen and Kristin Lauter and Shahed Sharif and Alice Silverberg and Mehdi Tibouchi and Mark Zhandry}, title = {Multiparty Non-Interactive Key Exchange and More From Isogenies on Elliptic Curves}, booktitle = {Proceedings of MATHCRYPT 2018}, misc = {Full version available at \url{https://eprint.iacr.org/2018/665}}, year = {2018} }

[LZ17] 
Decomposable Obfuscation: A Framework for Building Applications of Obfuscation From Polynomial Hardness
TCC 2017
[PDF]
[ePrint]
There is some evidence that indistinguishability obfuscation (iO) requires either exponentially many assumptions or (sub)exponentially hard assumptions, and indeed, all
known ways of building obfuscation suffer from one of these two limitations. As such, any application built from iO suffers from these limitations as well. However, for most
applications, such limitations do not appear to be inherent to the application, just the approach using iO. Indeed, several recent works have shown how to base applications
of iO instead on functional encryption (FE), which can in turn be based on the polynomial hardness of just a few assumptions. However, these constructions are quite
complicated and recycle a lot of similar techniques.
In this work, we unify the results of previous works in the form of a weakened notion of obfuscation, called Decomposable Obfuscation. We show (1) how to build decomposable obfuscation from
functional encryption, and (2) how to build a variety of applications from decomposable obfuscation, including all of the applications already known from FE. The construction in (1)
hides most of the difficult techniques in the prior work, whereas the constructions in (2) are much closer to the comparatively simple constructions from iO. As such,
decomposable obfuscation represents a convenient new platform for obtaining more applications from polynomial hardness.
@inproceedings{LZ17, author = {Qipeng Liu and Mark Zhandry}, title = {Decomposable Obfuscation: A Framework for Building Applications of Obfuscation From Polynomial Hardness}, booktitle = {Proceedings of TCC 2017}, misc = {Full version available at \url{https://eprint.iacr.org/2017/209}}, year = {2017} }

[GYZ17] 
New Security Notions and Feasibility Results for Authentication of Quantum Data
QCrypt 2016, CRYPTO 2017
[PDF]
[arXiv]
We give a new class of security definitions for authentication in the quantum setting. Our definitions capture and strengthen several existing definitions,
including superposition attacks on classical authentication, as well as full authentication of quantum data. We argue that our definitions resolve some of
the shortcomings of existing definitions.
We then give several feasibility results for our strong definitions. As a consequence, we obtain several interesting results, including:
(1) the classical Carter-Wegman authentication scheme with 3-universal hashing is secure against superposition attacks, as well as adversaries with quantum
side information;
(2) quantum authentication where the entire key can be reused if verification is successful;
(3) conceptually simple constructions of quantum authentication; and
(4) a conceptually simple QKD protocol.
@inproceedings{GYZ17, author = {Sumegha Garg and Henry Yuen and Mark Zhandry}, title = {New Security Notions and Feasibility Results for Authentication of Quantum Data}, booktitle = {Proceedings of CRYPTO 2017}, misc = {Full version available at \url{https://arxiv.org/abs/1607.07759}}, year = {2017} }

[GPSZ17] 
Breaking the Sub-Exponential Barrier in Obfustopia
EUROCRYPT 2017
[PDF]
[ePrint]
Indistinguishability obfuscation (iO) has emerged as a surprisingly powerful notion. Almost all known cryptographic primitives can be constructed from general purpose iO and
other minimalistic assumptions such as one-way functions. The primary challenge in this direction of research is to develop novel techniques for using iO, since iO by itself
offers virtually no protection to secret information in the underlying programs. When dealing with complex situations, often these techniques have to consider an exponential
number of hybrids (usually one per input) in the security proof. This results in a sub-exponential loss in the security reduction. Unfortunately, this scenario is becoming more
and more common and appears to be a fundamental barrier to current techniques.
In this work, we explore the possibility of getting around this sub-exponential loss barrier
in constructions based on iO as well as the weaker notion of functional encryption (FE). Towards this goal, we achieve the following results:
• We construct trapdoor one-way permutations from polynomially-hard iO (and standard one-way permutations). This improves upon the recent result of Bitansky, Paneth, and
Wichs (TCC 2016), which requires iO of sub-exponential strength.
• We present a different construction of trapdoor one-way permutations based on standard, polynomially-secure, public-key functional encryption. This qualitatively improves
upon our first result since FE is a weaker primitive than iO — it can be based on polynomially-hard assumptions on multilinear maps, whereas iO inherently seems to require
assumptions of sub-exponential strength.
• We present a construction of universal samplers also based only on polynomially-secure public-key FE. Universal samplers, introduced in the work of Hofheinz, Jager, Khurana,
Sahai, Waters and Zhandry (ePrint 2014), are an appealing notion which allows a single trusted setup for any protocol. As an application of this result, we construct a non-interactive
multiparty key exchange (NIKE) protocol for an unbounded number of users without a trusted setup. Prior to this work, such constructions were only known from indistinguishability
obfuscation.
In obtaining our results, we build upon and significantly extend the techniques of Garg, Pandey, and Srinivasan (ePrint 2015), introduced in the context of reducing PPAD-hardness
to polynomially-secure iO and FE.
@inproceedings{GPSZ17, author = {Sanjam Garg and Omkant Pandey and Akshayaram Srinivasan and Mark Zhandry}, title = {Breaking the Sub-Exponential Barrier in Obfustopia}, booktitle = {Proceedings of EUROCRYPT 2017}, misc = {Full version available at \url{https://eprint.iacr.org/2016/102}}, year = {2017} }

[MZ17] 
Encryptor Combiners: A Unified Approach to Multiparty NIKE, (H)IBE, and Broadcast Encryption
[PDF]
[ePrint]
We define the concept of an encryptor combiner. Roughly, such a combiner takes as input n public keys for a public key encryption scheme, and produces a new combined
public key. Anyone knowing a secret key for one of the input public keys can learn the secret key for the combined public key, but an outsider who just knows the input
public keys (who can therefore compute the combined public key for himself) cannot decrypt ciphertexts from the combined public key. We actually think of public keys
more generally as encryption procedures, which can correspond to, say, encrypting to a particular identity under an IBE scheme or encrypting to a set of attributes
under an ABE scheme.
We show that encryptor combiners satisfying certain natural properties can give natural constructions of multiparty non-interactive key exchange, low-overhead
broadcast encryption, and hierarchical identity-based encryption. We then show how to construct two different encryptor combiners. Our first is built from universal
samplers (which can in turn be built from indistinguishability obfuscation) and is sufficient for each application above, in some cases improving on existing
obfuscation-based constructions. Our second is built from lattices, and is sufficient for hierarchical identity-based encryption. Thus, encryptor combiners
serve as a new abstraction that (1) is a useful tool for designing cryptosystems, (2) unifies constructing hierarchical IBE from vastly different assumptions,
and (3) provides a target for instantiating obfuscation applications from better tools.
@misc{MZ17, author = {Fermi Ma and Mark Zhandry}, title = {Encryptor Combiners: A Unified Approach to Multiparty NIKE, (H)IBE, and Broadcast Encryption}, misc = {Full version available at \url{https://eprint.iacr.org/2017/152}}, year = {2017} }

[HJK^{+}16] 
How to Generate and use Universal Samplers
ASIACRYPT 2016
[PDF]
[ePrint]
The random oracle is an idealization that allows one to model a hash function as an oracle that outputs a uniformly random
string given an input. We introduce the notion of a universal sampler scheme as a method for sampling securely from
arbitrary distributions.
We first motivate such a notion by describing several applications, including generating
the trusted parameters for many schemes from just a single trusted setup. We further demonstrate the versatility of universal
samplers by showing how they give rise to applications such as identity-based encryption and multiparty key exchange.
We give a solution in the random oracle model based on indistinguishability obfuscation. At the heart of our construction and
proof is a new technique we call "delayed backdoor programming".
@inproceedings{HJKSWZ16, author = {Dennis Hofheinz and Tibor Jager and Dakshita Khurana and Amit Sahai and Brent Waters and Mark Zhandry}, title = {How to Generate and use Universal Samplers}, booktitle = {Proceedings of ASIACRYPT 2016}, misc = {Full version available at \url{http://eprint.iacr.org/2014/507}}, year = {2016} }

[Zha16c] 
A Note on Quantum-Secure PRPs
[PDF]
[ePrint]
We show how to construct pseudorandom permutations (PRPs) that remain secure even if the adversary can query the permutation on a quantum superposition of inputs.
Such PRPs are called quantum-secure. Our construction combines a quantum-secure pseudorandom function together with constructions of classical
format-preserving encryption. By combining known results, we obtain the first quantum-secure PRP in this model whose security relies only on the existence of one-way
functions. Previously, to the best of the author's knowledge, quantum security of PRPs had to be assumed, and there were no prior security reductions to simpler
primitives, let alone one-way functions.
@misc{Zha16c, author = {Mark Zhandry}, title = {A Note on Quantum-Secure PRPs}, misc = {Full version available at \url{https://eprint.iacr.org/2016/1076}}, year = {2016} }

[KMUZ16] 
Strong Hardness of Privacy from Weak Traitor Tracing
TCC 2016B
[PDF]
[ePrint]
Despite much study, the computational complexity of differential privacy remains poorly understood. In this paper we consider the computational complexity of
accurately answering a family Q of statistical queries over a data universe X under differential privacy. A statistical query
on a dataset D∈X^{n} asks "what fraction of the elements of D satisfy a given predicate p on X?" Dwork et al. (STOC'09) and Boneh and Zhandry
(CRYPTO'14) showed that if both Q and X are of polynomial size, then there is an efficient differentially private algorithm that accurately answers all the
queries, and if both Q and X are of exponential size, then under a plausible assumption, no efficient algorithm exists.
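For reference, a single statistical query as defined above is simple to state in code (a toy sketch; the helper name is illustrative, and this naive answerer is of course not differentially private):

```python
def statistical_query(D, p):
    """Answer 'what fraction of the elements of dataset D satisfy predicate p?'"""
    return sum(1 for x in D if p(x)) / len(D)

D = [3, 8, 1, 6]                         # a tiny dataset D in X^n with n = 4
assert statistical_query(D, lambda x: x > 2) == 0.75
```

The hardness results concern answering an entire family Q of such queries accurately while preserving differential privacy.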
We show that, under the same assumption, if either the number of queries or the data universe is of exponential size, then there is no
differentially private algorithm that answers all the queries. Specifically, we prove that if oneway functions and indistinguishability obfuscation exist, then:
• For every n, there is a family Q of O(n^{7}) queries on a data universe X of size 2^{d} such that no poly(n,d) time differentially private
algorithm takes a dataset D∈X^{n} and outputs accurate answers to every query in Q.
• For every n, there is a family Q of 2^{d} queries on a data universe X of size O(n^{7}) such that no poly(n,d) time differentially private
algorithm takes a dataset D∈X^{n} and outputs accurate answers to every query in Q.
In both cases, the result is nearly quantitatively tight, since there is an efficient differentially private algorithm that answers Ω(n^{2}) queries
on an exponential size data universe, and one that answers exponentially many queries on a data universe of size Ω(n^{2}).
Our proofs build on the connection between hardness results in differential privacy and traitor-tracing schemes (Dwork et al., STOC'09; Ullman, STOC'13).
We prove our hardness result for a polynomial size query set (resp., data universe) by showing that it follows from the existence of a special type of
traitor-tracing scheme with very short ciphertexts (resp., secret keys), but very weak security guarantees, and then constructing such a scheme.
@inproceedings{KMUZ16, author = {Lucas Kowalczyk and Tal Malkin and Jonathan Ullman and Mark Zhandry}, title = {Strong Hardness of Privacy from Weak Traitor Tracing}, booktitle = {Proceedings of TCC 2016B}, misc = {Full version available at \url{https://eprint.iacr.org/2016/721}}, year = {2016} }

[GMM^{+}16] 
Secure Obfuscation in a Weak Multilinear Map Model
TCC 2016B
[PDF]
[ePrint]
All known candidate indistinguishability obfuscation (iO) schemes rely on candidate multilinear maps. Until recently, the strongest proofs of security
available for iO candidates were in a generic model that only allows "honest" use of the multilinear map. Most notably, in this model the zero-test procedure
only reveals whether an encoded element is 0, and nothing more.
However, this model is inadequate: there have been several attacks on multilinear maps
that exploit extra information revealed by the zero-test procedure. In particular, Miles, Sahai and Zhandry [Crypto'16] recently gave a polynomial-time attack
on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi [Eurocrypt'13], and also proposed a new "weak multilinear map
model" that captures all known polynomial-time attacks on GGH13.
In this work, we give a new iO candidate which can be seen as a small modification or
generalization of the original candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters [FOCS'13]. We prove its security in the weak multilinear map model,
thus giving the first iO candidate that is provably secure against all known polynomial-time attacks on GGH13. The proof of security relies on a new assumption
about the hardness of computing annihilating polynomials, and we show that this assumption is implied by the existence of pseudorandom functions in NC^1.
@inproceedings{GMMSSZ16, author = {Sanjam Garg and Eric Miles and Pratyay Mukherjee and Amit Sahai and Akshayaram Srinivasan and Mark Zhandry}, title = {Secure Obfuscation in a Weak Multilinear Map Model}, booktitle = {Proceedings of TCC 2016B}, misc = {Full version available at \url{https://eprint.iacr.org/2016/817}}, year = {2016} }
Merged version of [BMS16] and [MSZ16]

[Zha16b] 
The Magic of ELFs
CRYPTO 2016 (Best Young Researcher Award)
[PDF]
[ePrint]
[slides]
We introduce the notion of an Extremely Lossy Function (ELF). An ELF is a family of functions with an image size that is tunable anywhere from injective to having a
polynomial-sized image. Moreover, for any efficient adversary, for a sufficiently large polynomial r (necessarily chosen to be larger than the running time of the adversary),
the adversary cannot distinguish the injective case from the case of image size r.
We develop a handful of techniques for using ELFs, and show that such extreme lossiness is useful for instantiating random oracles in several settings. In particular, we show
how to use ELFs to build secure point function obfuscation with auxiliary input, as well as polynomially many hardcore bits for any one-way function. Such applications were
previously known from strong knowledge assumptions — for example, polynomially many hardcore bits were only known from differing-inputs obfuscation, a notion whose
plausibility has been seriously challenged. We also use ELFs to build a simple hash function with output intractability, a new notion we define that may be useful for
generating common reference strings.
Next, we give a construction of ELFs relying on the exponential hardness of the decisional Diffie-Hellman problem, which is plausible in elliptic curve groups. Combining
with the applications above, our work gives several practical constructions relying on qualitatively different — and arguably better — assumptions than prior works.
@inproceedings{Zha16b, author = {Mark Zhandry}, title = {The Magic of ELFs}, booktitle = {Proceedings of CRYPTO 2016}, misc = {Full version available at \url{https://eprint.iacr.org/2016/114}}, year = {2016} }

[MSZ16b] 
Annihilation Attacks for Multilinear Maps: Cryptanalysis of Indistinguishability Obfuscation over GGH13
CRYPTO 2016
[PDF]
[ePrint]

[MSZ16a] 
Secure Obfuscation in a Weak Multilinear Map Model: A Simple Construction Secure Against All Known Attacks
[PDF]
[ePrint]
All known candidate indistinguishability obfuscation (iO) schemes rely on candidate multilinear maps. Until recently, the strongest proofs of security available
for iO candidates were in a generic model that only allows "honest" use of the multilinear map. Most notably, in this model the zero-test procedure only reveals
whether an encoded element is 0, and nothing more.
However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In
particular, the authors [Crypto'16] recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry,
and Halevi [Eurocrypt'13], and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13.
Subsequent to those attacks, Garg, Mukherjee, and Srinivasan [ePrint'16] gave a beautiful new candidate iO construction, using a new variant of the GGH13
multilinear map candidate, and proved its security in the weak multilinear map model assuming an explicit PRF in NC^1.
In this work, we give a simpler candidate iO construction, which can be seen as a small modification or generalization of the original iO candidate of Garg,
Gentry, Halevi, Raykova, Sahai, and Waters [FOCS'13], and we prove its security in the weak multilinear map model. Our work has a number of benefits over
that of GMS16.
• Our construction and analysis are simpler. In particular, the proof of our security theorem is 4 pages, versus 15 pages in GMS16.
• We do not require any change to the original GGH13 multilinear map candidate.
• We prove the security of our candidate under a more general assumption. One way that our assumption can be true is if there exists a PRF in NC^1.
• GMS16 required an explicit PRF in NC^1 to be "hardwired" into their obfuscation candidate. In contrast, our scheme does not require any such hardwiring.
In fact, roughly speaking, our obfuscation candidate will depend only on the minimal size of such a PRF, and not on any other details of the PRF.
@misc{MSZ16a, author = {Eric Miles and Amit Sahai and Mark Zhandry}, title = {Secure Obfuscation in a Weak Multilinear Map Model: A Simple Construction Secure Against All Known Attacks}, misc = {Full version available at \url{https://eprint.iacr.org/2016/588}}, year = {2016} }

[BMSZ16] 
Post-Zeroizing Obfuscation: New Mathematical Tools, and the Case of Evasive Circuits
EUROCRYPT 2016
[PDF]
[ePrint]
[slides]
Recent devastating attacks by Cheon et al.~[Eurocrypt'15] and others have highlighted significant gaps in our intuition about security in candidate
multilinear map schemes, and in candidate obfuscators that use them. The new attacks, and some that were previously known, are typically called
"zeroizing" attacks because they all crucially rely on the ability of the adversary to create encodings of 0.
In this work, we initiate the study of post-zeroizing obfuscation, and we present a construction for the special case of evasive functions. We
show that our obfuscator survives all known attacks on the underlying multilinear maps, by proving that no encodings of 0 can be created by
a generic-model adversary. Previous obfuscators (for both evasive and general functions) were either analyzed in a less-conservative "pre-zeroizing"
model that does not capture recent attacks, or were proved secure relative to assumptions that are now known to be false.
To prove security, we introduce a new technique for analyzing polynomials over multilinear map encodings. This technique shows that the types of
encodings an adversary can create are much more restricted than was previously known, and is a crucial step toward achieving post-zeroizing security.
We also believe the technique is of independent interest, as it yields efficiency improvements for existing schemes.
@inproceedings{BMSZ16, author = {Saikrishna Badrinarayanan and Eric Miles and Amit Sahai and Mark Zhandry}, title = {Post-Zeroizing Obfuscation: New Mathematical Tools, and the Case of Evasive Circuits}, booktitle = {Proceedings of EUROCRYPT 2016}, misc = {Full version available at \url{http://eprint.iacr.org/2015/167}}, year = {2016} }

[NWZ16] 
Anonymous Traitor Tracing: How to Embed Arbitrary Information in a Key
EUROCRYPT 2016
[PDF]
[ePrint]

[KZ16] 
Cutting-Edge Cryptography Through the Lens of Secret Sharing
TCC 2016-A
[PDF]
[ePrint]
Secret sharing is a mechanism by which a trusted dealer holding a secret "splits" the secret into many "shares" and distributes the shares to a collection
of parties. Associated with the sharing is a monotone access structure that specifies which parties are "qualified" and which are not: any qualified subset
of parties can (efficiently) reconstruct the secret, but no unqualified subset can learn anything about the secret. In the most general form of secret sharing,
the access structure can be any monotone NP language.
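The "any t out of n parties" threshold structure is the classic special case of the above; as background (not a construction from this paper), it is realized by Shamir-style sharing, sketched minimally below over a prime field. All names are illustrative.

```python
import random

P = 2**61 - 1  # prime modulus; all arithmetic is in the field GF(P)

def share(secret, t, n):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    # A random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the point (i, f(i)) for i = 1..n.
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

Fewer than t shares reveal nothing about the secret, which is exactly the unqualified-subset guarantee described above for this simple access structure.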
In this work, we consider two very natural extensions of secret sharing. In the first, which we
call distributed secret sharing, there is no trusted dealer at all, and instead the role of the dealer is distributed amongst the parties themselves. Distributed
secret sharing can be thought of as combining the features of multiparty non-interactive key exchange and standard secret sharing, and may be useful in settings
where the secret is so sensitive that no one individual dealer can be trusted with the secret. Our second notion is called functional secret sharing, which incorporates
some of the features of functional encryption into secret sharing by providing more fine-grained access to the secret. Qualified subsets of parties do not learn the
secret, but instead learn some function applied to the secret, with each set of parties potentially learning a different function.
Our main result is that both
of the extensions above are equivalent to several recent cutting-edge primitives. In particular, general-purpose distributed secret sharing is equivalent to witness
PRFs, and general-purpose functional secret sharing is equivalent to indistinguishability obfuscation. Thus, our work shows that it is possible to view some of the
recent developments in cryptography through a secret sharing lens, yielding new insights about both these cutting-edge primitives and secret sharing.
@inproceedings{KZ16, author = {Ilan Komargodski and Mark Zhandry}, title = {Cutting-Edge Cryptography Through the Lens of Secret Sharing}, booktitle = {Proceedings of TCC 2016-A}, misc = {Full version available at \url{http://eprint.iacr.org/2015/735}}, year = {2016} }

[GGHZ16] 
Functional Encryption without Obfuscation
TCC 2016-A
[PDF]
[ePrint]
[slides]
Previously known functional encryption (FE) schemes for general circuits relied on indistinguishability obfuscation, which in
turn either relies on an exponential number of assumptions (basically, one per circuit), or a polynomial set of assumptions, but
with an exponential loss in the security reduction. Additionally, these schemes are proved in an unrealistic selective security
model, where the adversary is forced to specify its target before seeing the public parameters. For these constructions, full
security can be obtained but at the cost of an exponential loss in the security reduction.
In this work, we overcome the above limitations and realize a fully secure functional encryption scheme without using indistinguishability
obfuscation. Specifically, the security of our scheme relies only on the polynomial hardness of simple assumptions on multilinear maps.
@inproceedings{GGHZ16, author = {Sanjam Garg and Craig Gentry and Shai Halevi and Mark Zhandry}, title = {Functional Encryption without Obfuscation}, booktitle = {Proceedings of TCC 2016-A}, misc = {Full version available at \url{http://eprint.iacr.org/2014/666}}, year = {2016} }

[Zha16a] 
How to Avoid Obfuscation Using Witness PRFs
TCC 2016-A
[PDF]
[ePrint]
We propose a new cryptographic primitive called witness pseudorandom functions (witness PRFs). Witness PRFs are related to
witness encryption, but appear strictly stronger: we show that witness PRFs can be used for applications such as multiparty key
exchange without trusted setup, polynomially many hardcore bits for any one-way function, and several others that were previously
only possible using obfuscation. Current candidate obfuscators are far from practical and typically rely on unnatural hardness
assumptions about multilinear maps. We give a construction of witness PRFs from multilinear maps that is simpler and much more
efficient than current obfuscation candidates, thus bringing several applications of obfuscation closer to practice. Our construction
relies on new but very natural hardness assumptions about the underlying maps that appear to be resistant to a recent line of attacks.
@inproceedings{Zha16a, author = {Mark Zhandry}, title = {How to Avoid Obfuscation Using Witness PRFs}, booktitle = {Proceedings of TCC 2016-A}, misc = {Full version available at \url{http://eprint.iacr.org/2014/301}}, year = {2016} }

[BZ16] 
Order-Revealing Encryption and the Hardness of Private Learning
TCC 2016-A
[PDF]
[arXiv]
An order-revealing encryption scheme gives a public procedure by which two ciphertexts can be compared to reveal the ordering of their underlying
plaintexts. We show how to use order-revealing encryption to separate computationally efficient PAC learning from efficient (ε,δ)-differentially
private PAC learning. That is, we construct a concept class that is efficiently PAC learnable, but for which every efficient learner fails to be differentially
private. This answers a question of Kasiviswanathan et al. (FOCS '08, SIAM J. Comput. '11).
To prove our result, we give a generic transformation from an order-revealing encryption scheme into one with strongly correct comparison, which enables the
consistent comparison of ciphertexts that are not obtained as the valid encryption of any message. We believe this construction may be of independent interest.
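To make the primitive itself concrete (this is background, not the paper's transformation), here is a minimal prefix-based sketch in the spirit of known small-domain order-revealing constructions: each ciphertext component masks one plaintext bit, modulo 3, with a PRF of the preceding bit prefix, so a public comparison reveals the order and nothing obviously more. HMAC-SHA256 stands in for the PRF, and all names are illustrative.

```python
import hmac, hashlib

def _prf(key, prefix):
    # PRF stand-in: HMAC-SHA256 of the bit prefix, reduced mod 3.
    return hmac.new(key, prefix.encode(), hashlib.sha256).digest()[0] % 3

def encrypt(key, m, nbits=16):
    bits = format(m, f"0{nbits}b")
    # Component i = PRF(prefix of length i) + bit i, mod 3.
    return [(_prf(key, bits[:i]) + int(bits[i])) % 3 for i in range(nbits)]

def compare(c1, c2):
    """Public comparison: -1 if m1 < m2, 0 if equal, 1 if m1 > m2."""
    for u, v in zip(c1, c2):
        if u != v:
            # At the first differing component the prefixes (hence masks) agree,
            # so the difference mod 3 reveals which plaintext bit is larger.
            return 1 if (u - v) % 3 == 1 else -1
    return 0

key = b"demo-key"
assert compare(encrypt(key, 5), encrypt(key, 9)) == -1
```

Note that `compare` needs no key, which is exactly the "public procedure" property in the abstract above.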
@inproceedings{BZ16, author = {Mark Bun and Mark Zhandry}, title = {Order-Revealing Encryption and the Hardness of Private Learning}, booktitle = {Proceedings of TCC 2016-A}, misc = {Full version available at \url{http://arxiv.org/abs/1505.00388}}, year = {2016} }

[Zha15c] 
Quantum Oracle Classification: The Case of Group Structure
[PDF]
[arXiv]
The Quantum Oracle Classification (QOC) problem is to classify a function, given only quantum black box access, into one of several classes without necessarily determining
the entire function. Generally, QOC captures a very wide range of problems in quantum query complexity. However, relatively little is known about many of these problems.
In this work, we analyze a subclass of the QOC problems where there is a group structure. That is, suppose the range of the unknown function A is a commutative group G, which
induces a commutative group law over the entire function space. Then we consider the case where A is drawn uniformly at random from some subgroup A of the function space.
Moreover, there is a homomorphism f on A, and the goal is to determine f(A). This class of problems is very general, and covers several interesting cases, such as oracle evaluation;
polynomial interpolation, evaluation, and extrapolation; and parity. These problems are important in the study of message authentication codes in the quantum setting, and may have
other applications.
We exactly characterize the quantum query complexity of every instance of QOC with group structure in terms of a particular counting problem. That is,
we provide an algorithm for this general class of problems whose success probability is determined by the solution to the counting problem, and prove its exact optimality.
Unfortunately, solving this counting problem in general is a nontrivial task, and we resort to analyzing special cases. Our bounds unify some existing results, such as the
existing oracle evaluation and parity bounds. In the case of polynomial interpolation and evaluation, our bounds give new results for secret sharing and information theoretic
message authentication codes in the quantum setting.
@misc{Zha15c, author = {Mark Zhandry}, title = {Quantum Oracle Classification: The Case of Group Structure}, misc = {Full version available at \url{http://arxiv.org/abs/1510.08352}}, year = {2015} }

[Zha15b] 
Secure Identity-Based Encryption in the Quantum Random Oracle Model
CRYPTO 2012, International Journal of Quantum Information
[PDF]
[ePrint]
[slides]
We give the first proof of security for an identity-based encryption scheme in the quantum random
oracle model. This is the first proof of security for any scheme in this model that requires no
additional assumptions. Our techniques are quite general and we use them to obtain security proofs for
two random oracle hierarchical identity-based encryption schemes and a random oracle signature scheme,
all of which have previously resisted quantum security proofs, even using additional assumptions. We also
explain how to remove the extra assumptions from prior quantum random oracle model proofs. We accomplish
these results by developing new tools for arguing that quantum algorithms cannot distinguish between two
oracle distributions. Using a particular class of oracle distributions, so-called semi-constant
distributions, we argue that the aforementioned cryptosystems are secure against quantum adversaries.
@article{Zha15b, author = {Mark Zhandry}, title = {Secure Identity-Based Encryption in the Quantum Random Oracle Model}, journal = {International Journal of Quantum Information}, volume = {13}, number = {4}, misc = {Full version available at \url{http://eprint.iacr.org/2012/076}}, year = {2015} }

[Zha15a] 
A Note on the Quantum Collision and Set Equality Problems
Quantum Information and Computation
[PDF]
[arXiv]
The results showing a quantum query complexity of Θ(N^{1/3}) for the collision problem do not apply to
random functions. The issues are twofold. First, the Ω(N^{1/3}) lower bound only applies when the range
is no larger than the domain, which precludes many of the cryptographically interesting applications. Second, most of
the results in the literature only apply to r-to-1 functions, which are quite different from random functions.
Understanding the collision problem for random functions is of great importance to cryptography, and we seek to fill the
gaps of knowledge for this problem. To that end, we prove that, as expected, a quantum query complexity of
Θ(N^{1/3}) holds for all interesting domain and range sizes. Our proofs are simple, and combine existing
techniques with several novel tricks to obtain the desired results.
Using our techniques, we also give an optimal Ω(N^{1/3}) lower bound for the set equality problem. This new lower
bound can be used to improve the relationship between classical randomized query complexity and quantum query complexity for
so-called permutation-symmetric functions.
@article{Zha15a, author = {Mark Zhandry}, title = {A Note on the Quantum Collision and Set Equality Problems}, journal = {Quantum Information and Computation}, volume = {15}, number = {7\& 8}, misc = {Full version available at \url{http://arxiv.org/abs/1312.1027}}, year = {2015} }

[BLR^{+}15] 
Semantically Secure Order-Revealing Encryption: Multi-Input Functional Encryption Without Obfuscation
EUROCRYPT 2015
[PDF]
[ePrint]

[SZ14] 
Obfuscating Low-Rank Matrix Branching Programs
[PDF]
[ePrint]
In this work, we seek to extend the capabilities of the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai,
and Waters (FOCS 2013), and all subsequent works constructing generalpurpose obfuscators. This core obfuscator builds upon
approximate multilinear maps, and applies to matrix branching programs. All previous works, however, limited the applicability of
such core obfuscators to matrix branching programs where each matrix was of full rank. As we illustrate by example, this
limitation is quite problematic, and intuitively limits the core obfuscator to obfuscating matrix branching programs that cannot
"forget." At a technical level, this limitation arises because all previous works rely on Kilian's statistical
simulation theorem, which is false when applied to matrices not of full rank.
In our work, we build the first core
obfuscator that can apply to matrix branching programs where matrices can be of arbitrary rank. We prove security of our
obfuscator in the generic multilinear model, demonstrating a new proof technique that bypasses Kilian's statistical simulation
theorem. Furthermore, our obfuscator achieves two other notable advances over previous work:
• Our construction allows for non-square matrices of arbitrary dimensions. We also show that this flexibility yields
concrete efficiency gains.
• Our construction allows for a single obfuscation to yield multiple bits of output. All previous work yielded only one bit
of output.
Our work leads to significant efficiency gains for obfuscation. Furthermore, our work can be applied to achieve efficiency gains
even in applications not directly using obfuscation.
@misc{SZ14, author = {Amit Sahai and Mark Zhandry}, title = {Obfuscating Low-Rank Matrix Branching Programs}, misc = {Full version available at \url{http://eprint.iacr.org/2014/773}}, year = {2014} }
Subsumed by [BMSZ16]

[Zha14] 
Adaptively Secure Broadcast Encryption with Small System Parameters
[PDF]
[ePrint]
We build the first public-key broadcast encryption systems that simultaneously achieve adaptive security against an arbitrary number
of colluders, have small system parameters, and have security proofs that do not rely on knowledge assumptions or complexity
leveraging. Our schemes are built from either composite order multilinear maps or obfuscation and enjoy a ciphertext overhead,
private key size, and public key size that are all polylogarithmic in the total number of users. Previous broadcast schemes with
similar parameters are either proven secure in a weaker static model, or rely on non-falsifiable knowledge assumptions.
@misc{Zha14, author = {Mark Zhandry}, title = {Adaptively Secure Broadcast Encryption with Small System Parameters}, misc = {Full version available at \url{http://eprint.iacr.org/2014/757}}, year = {2014} }

[BZ14] 
Multiparty Key Exchange, Efficient Traitor Tracing, and More from Indistinguishability Obfuscation
CRYPTO 2014
[PDF]
[ePrint]
[slides]
In this work, we show how to use indistinguishability obfuscation (iO) to build multiparty key exchange,
efficient broadcast encryption, and efficient traitor tracing. Our schemes enjoy several interesting
properties that have not been achievable before:
• Our multiparty non-interactive key exchange protocol does not require a trusted setup. Moreover,
the size of the published value from each user is independent of the total number of users.
• Our broadcast encryption schemes support distributed setup, where users choose their own
secret keys rather than being given secret keys by a trusted entity. The broadcast ciphertext size is
independent of the number of users.
• Our traitor tracing system is fully collusion resistant with short ciphertexts, secret keys,
and public key. Ciphertext size is logarithmic in the number of users and secret key size is independent
of the number of users. Our public key size is polylogarithmic in the number of users. The recent
functional encryption system of Garg, Gentry, Halevi, Raykova, Sahai, and Waters also leads to a traitor
tracing scheme with similar ciphertext and secret key size, but the construction in this paper is simpler
and more direct. These constructions resolve an open problem relating to differential privacy.
• Generalizing our traitor tracing system gives a private broadcast encryption scheme (where broadcast
ciphertexts reveal minimal information about the recipient set) with optimal size ciphertext.
Several of our proofs of security introduce new tools for proving security using indistinguishability obfuscation.
@inproceedings{BZ14, author = {Dan Boneh and Mark Zhandry}, title = {Multiparty Key Exchange, Efficient Traitor Tracing, and More from Indistinguishability Obfuscation}, booktitle = {Proceedings of CRYPTO 2014}, misc = {Full version available at \url{http://eprint.iacr.org/2013/642}}, year = {2014} }

[BWZ14] 
Low Overhead Broadcast Encryption from Multilinear Maps
CRYPTO 2014
[PDF]
[ePrint]
[slides]
We use multilinear maps to provide a solution to the longstanding problem of publickey broadcast encryption where all
parameters in the system are small. In our constructions, ciphertext overhead, private key size, and public key size are
all polylogarithmic in the total number of users. The systems are fully collusion-resistant against any number of colluders.
All our systems are based on an O(log N)-way multilinear map to support a broadcast system for N users. We present three
constructions based on different types of multilinear maps and providing different security guarantees. Our systems naturally
give identity-based broadcast systems with short parameters.
@inproceedings{BWZ14, author = {Dan Boneh and Brent Waters and Mark Zhandry}, title = {Low Overhead Broadcast Encryption from Multilinear Maps}, booktitle = {Proceedings of CRYPTO 2014}, misc = {Full version available at \url{http://eprint.iacr.org/2014/195}}, year = {2014} }

[GGHZ14] 
Fully Secure Attribute Based Encryption from Multilinear Maps
[PDF]
[ePrint]
We construct the first fully secure attribute based encryption (ABE) scheme that can handle access control policies
expressible as polynomialsize circuits. Previous ABE schemes for general circuits were proved secure only in an unrealistic
selective security model, where the adversary is forced to specify its target before seeing the public parameters, and full
security could be obtained only by complexity leveraging, where the reduction succeeds only if it correctly guesses the adversary's
target string x^*, incurring a 2^{|x^*|} loss factor in the tightness of the reduction.
At a very high level, our basic ABE scheme is reminiscent of Yao's garbled circuits, with 4 gadgets per gate of the circuit, but
where the decrypter in our scheme puts together the appropriate subset of gate gadgets like puzzle pieces by using a cryptographic
multilinear map to multiply the pieces together. We use a novel twist of Waters' dual encryption methodology to prove the full
security of our scheme. Most importantly, we show how to preserve the delicate informationtheoretic argument at the heart of Waters'
dual system by enfolding it in an informationtheoretic argument similar to that used in Yao's garbled circuits.
@misc{GGHZ14, author = {Sanjam Garg and Craig Gentry and Shai Halevi and Mark Zhandry}, title = {Fully Secure Attribute Based Encryption from Multilinear Maps}, misc = {Full version available at \url{http://eprint.iacr.org/2014/622}}, year = {2014} }

[ABG^{+}13] 
Differing-Inputs Obfuscation and Applications
[PDF]
[ePrint]

[BZ13b] 
Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World
CRYPTO 2013
[PDF]
[ePrint]
[slides]
We initiate the study of quantum-secure digital signatures and quantum chosen ciphertext
security. In the case of signatures, we enhance the standard chosen message query model by allowing the
adversary to issue quantum chosen message queries: given a superposition of messages, the adversary
receives a superposition of signatures on those messages. Similarly, for encryption, we allow the adversary
to issue quantum chosen ciphertext queries: given a superposition of ciphertexts, the adversary
receives a superposition of their decryptions. These adversaries model a natural post-quantum environment
where end-users sign messages and decrypt ciphertexts on a personal quantum computer.
We construct
classical systems that remain secure when exposed to such quantum queries. For signatures we construct two
compilers that convert classically secure signatures into signatures secure in the quantum setting and apply
these compilers to existing post-quantum signatures. We also show that standard constructions such as Lamport
one-time signatures and Merkle signatures remain secure under quantum chosen message attacks, thus giving
signatures whose quantum security is based on generic assumptions. For encryption, we define security under
quantum chosen ciphertext attacks and present both public-key and symmetric-key constructions.
@inproceedings{BZ13b, author = {Dan Boneh and Mark Zhandry}, title = {Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World}, booktitle = {Proceedings of CRYPTO 2013}, misc = {Full version available at \url{http://eprint.iacr.org/2013/088}}, year = {2013} }

[BZ13a] 
Quantum-Secure Message Authentication Codes
EUROCRYPT 2013
[PDF]
[ePrint]
[slides]
We construct the first Message Authentication Codes (MACs) that are existentially unforgeable against
a quantum chosen message attack. These chosen message attacks model a quantum adversary's
ability to obtain the MAC on a superposition of messages of its choice. We begin by showing that a
quantum-secure PRF is sufficient for constructing a quantum-secure MAC, a fact that is considerably harder
to prove than its classical analogue. Next, we show that a variant of Carter-Wegman MACs can be proven to
be quantum secure. Unlike in the classical setting, we present an attack showing that a pairwise independent
hash family is insufficient to construct a quantum-secure one-time MAC, but we prove that a four-wise
independent family is sufficient for one-time security.
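For reference, a four-wise independent family of the kind the one-time result requires can be obtained from random degree-3 polynomials over a prime field: four evaluation points of a random cubic are jointly uniform. A minimal classical sketch (names are illustrative, and real MACs must also handle arbitrary-length messages):

```python
import random

P = 2**61 - 1  # prime modulus; messages and tags are elements of GF(P)

def keygen():
    # Four random coefficients c0..c3 of a degree-3 polynomial: this family
    # is 4-wise independent over GF(P).
    return [random.randrange(P) for _ in range(4)]

def mac(key, m):
    """Tag = c3*m^3 + c2*m^2 + c1*m + c0 mod P, via Horner's rule."""
    tag = 0
    for c in reversed(key):  # process c3 first
        tag = (tag * m + c) % P
    return tag

key = keygen()
assert mac(key, 1234) == mac(key, 1234)  # verification recomputes the tag
```

A pairwise independent family would correspond to a random degree-1 polynomial; the paper's point is that in the quantum query setting this lower degree is provably not enough for one-time security.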
@inproceedings{BZ13a, author = {Dan Boneh and Mark Zhandry}, title = {Quantum-Secure Message Authentication Codes}, booktitle = {Proceedings of EUROCRYPT 2013}, misc = {Full version available at \url{http://eprint.iacr.org/2012/606}}, year = {2013} }

[Zha12] 
How to Construct Quantum Random Functions
FOCS 2012
[PDF]
[ePrint]
[slides]
In the presence of a quantum adversary, there are two possible definitions of security for a
pseudorandom function. The first, which we call standard-security, allows the adversary to be
quantum, but requires queries to the function to be classical. The second, quantum-security, allows
the adversary to query the function on a quantum superposition of inputs, thereby giving the adversary
a superposition of the values of the function at many inputs at once. Existing proof techniques for
proving the security of pseudorandom functions fail when the adversary can make quantum queries. We
give the first quantum-security proofs for pseudorandom functions by showing that some classical
constructions of pseudorandom functions are quantum-secure. Namely, we show that the standard
constructions of pseudorandom functions from pseudorandom generators or pseudorandom synthesizers are
secure, even when the adversary can make quantum queries. We also show that a direct construction from
lattices is quantum-secure. To prove security, we develop new tools to prove the indistinguishability of
distributions under quantum queries.
In light of these positive results, one might hope that all
standard-secure pseudorandom functions are quantum-secure. To the contrary, we show a separation: under
the assumption that standard-secure pseudorandom functions exist, there are pseudorandom functions secure
against quantum adversaries making classical queries, but insecure once the adversary can make quantum queries.
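The standard construction from pseudorandom generators mentioned above is the tree-based one of Goldreich, Goldwasser, and Micali: the key seeds the root, and each input bit selects the left or right half of a length-doubling PRG's output. A minimal sketch, with SHA-256 standing in for the PRG purely for illustration:

```python
import hashlib

def prg(seed):
    """Length-doubling PRG stand-in: 32-byte seed -> two 32-byte halves.
    (SHA-256 with domain separation is an illustration, not a proven PRG.)"""
    return (hashlib.sha256(b"L" + seed).digest(),
            hashlib.sha256(b"R" + seed).digest())

def ggm_prf(key, x_bits):
    """GGM PRF: walk the binary tree, taking the PRG half chosen by each bit."""
    seed = key
    for b in x_bits:
        left, right = prg(seed)
        seed = right if b else left
    return seed

key = b"\x00" * 32
assert ggm_prf(key, [0, 1, 1]) != ggm_prf(key, [0, 1, 0])
```

The paper's contribution is not this construction but its security proof against quantum queries, where an adversary can evaluate the whole tree in superposition and the classical hybrid argument breaks down.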
@inproceedings{Zha12, author = {Mark Zhandry}, title = {How to Construct Quantum Random Functions}, booktitle = {Proceedings of FOCS 2012}, misc = {Full version available at \url{http://eprint.iacr.org/2012/182}}, year = {2012} }

[BDF^{+}11] 
Random Oracles in a Quantum World
ASIACRYPT 2011
[PDF]
[ePrint]
[slides]
The interest in post-quantum cryptography — classical systems that remain secure in the presence of
a quantum adversary — has generated elegant proposals for new cryptosystems. Some of these systems are
set in the random oracle model and are proven secure relative to adversaries that have classical access to
the random oracle. We argue that to prove post-quantum security one needs to prove security in the
quantum-accessible random oracle model where the adversary can query the random oracle with quantum
state.
We begin by separating the classical and quantum-accessible random oracle models by presenting
a scheme that is secure when the adversary is given classical access to the random oracle, but is insecure
when the adversary can make quantum oracle queries. We then set out to develop generic conditions under which
a classical random oracle proof implies security in the quantum-accessible random oracle model. We
introduce the concept of a history-free reduction, which is a category of classical random oracle
reductions that basically determine oracle answers independently of the history of previous queries, and we
prove that such reductions imply security in the quantum model. We then show that certain post-quantum
proposals, including ones based on lattices, can be proven secure using history-free reductions and are
therefore post-quantum secure. We conclude with a rich set of open problems in this area.
@inproceedings{BDFLSZ11, author = {Dan Boneh and Özgür Dagdelen and Marc Fischlin and Anja Lehmann and Christian Schaffner and Mark Zhandry}, title = {Random Oracles in a Quantum World}, booktitle = {Proceedings of ASIACRYPT 2011}, misc = {Full version available at \url{http://eprint.iacr.org/2010/428/}}, year = {2011} }
