December 08

Add to Calendar 2017-12-08 13:00:00 2017-12-08 14:30:00 America/New_York Peter Byerley Rindal: Fast and Secure Private Set Intersection Abstract: Private Set Intersection (PSI) refers to the multi-party problem of identifying the common elements between two or more sets while hiding all other items. For example, identifying mutual friends without revealing all of the contacts to the other party. PSI has numerous applications including social networking, voter registration, web advertising, malware detection and many more. Over the last half decade, significant progress has been made towards practical two-party PSI protocols. This talk will present two of these improvements: 1) A maliciously secure PSI protocol (CCS'17) capable of intersecting two sets of one million items in 12 seconds. 2) The first PSI protocol (CCS'17) built from fully homomorphic encryption which achieves sublinear communication and fast running times. The latter approach takes just 3 seconds and 12 MB to intersect 5 thousand elements with 16 million elements and requires minimal computation by the party with the smaller set. Due to the asymmetry in set sizes, this protocol is extremely well suited for many mobile applications. Recently, Signal, the popular secure messaging app, has expressed interest in PSI techniques to automatically match users' contacts against Signal's user base without Signal learning the user's contacts. Our PSI protocol based on fully homomorphic encryption offers the first practical solution without the need for strong trusted hardware assumptions. G575
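For intuition about what PSI computes, here is a sketch of the classic Diffie-Hellman-based semi-honest PSI, which predates the protocols in this talk. This is a toy illustration only (not the malicious-secure or FHE-based constructions described above): the modulus, hash-to-group map, and function names are chosen for readability and give no real security.

```python
import hashlib
import secrets

# Toy prime modulus (Mersenne prime 2^127 - 1) -- illustration only, NOT secure.
P = (1 << 127) - 1

def h(item: str) -> int:
    """Hash an item to a nonzero element mod P (idealized random oracle)."""
    d = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return d % (P - 1) + 1

def psi(set_a, set_b):
    """Semi-honest DH-based PSI: Alice learns only the intersection."""
    a = secrets.randbelow(P - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(P - 2) + 1   # Bob's secret exponent
    # Round 1: Alice -> Bob: H(x)^a, in a fixed order she remembers.
    xs = list(set_a)
    msg1 = [pow(h(x), a, P) for x in xs]
    # Round 2: Bob -> Alice: (H(x)^a)^b in the same order, plus {H(y)^b} as a set.
    msg2 = [pow(t, b, P) for t in msg1]
    msg3 = {pow(h(y), b, P) for y in set_b}
    # Alice raises Bob's masked points to a; a match means H(x)^{ab} on both sides.
    bob_ab = {pow(t, a, P) for t in msg3}
    return {x for x, t in zip(xs, msg2) if t in bob_ab}

print(psi({"alice@x.com", "bob@y.com"}, {"bob@y.com", "carol@z.com"}))
```

Correctness follows from commutativity of exponentiation: `(H(x)^a)^b == (H(x)^b)^a mod P`, so common elements collide while all other values look random.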
Add to Calendar 2017-12-08 10:30:00 2017-12-08 12:00:00 America/New_York Yilei Chen: Fiat-Shamir and Correlation Intractability from Strong KDM-Secure Encryption Abstract: We construct simple functions that suffice for realizing the Fiat-Shamir paradigm when applied to any interactive proof. More generally, our functions are correlation intractable w.r.t. all relations. Our construction relies on the existence of encryption schemes where polytime key-dependent-message attacks succeed only with exponentially small probability. We then provide parameter settings where the ElGamal and Regev encryption schemes plausibly enjoy such a level of security. Our analysis closely follows that of Kalai, Rothblum and Rothblum (CRYPTO '17), who obtain similar results based on strong variants of program obfuscation. Thus, our main contribution is removing the use of obfuscation from their construction. We also extend the notion of correlation intractability so as to capture the properties required from hash functions used for proof-of-work (as in Bitcoin). We discuss the applicability of our constructions and analyses in that regime. Joint work with Ran Canetti, Leonid Reyzin, and Ron D. Rothblum. Hewlett, G882
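As a reminder of what the Fiat-Shamir paradigm does, here is a toy non-interactive Schnorr-style proof of knowledge of a discrete log, where the verifier's random challenge is replaced by a hash of the transcript. This is a minimal sketch with toy parameters (the modulus, generator, and helper names are illustrative); the talk's subject is precisely which hash functions make this transform sound for general interactive proofs.

```python
import hashlib
import secrets

P = (1 << 127) - 1   # toy prime modulus (illustration only, NOT secure)
G = 3                # toy choice of group element

def fs_hash(*vals) -> int:
    """The Fiat-Shamir challenge: hash the transcript instead of asking a verifier."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)

def prove(x):
    """Non-interactive proof of knowledge of x such that y = G^x mod P."""
    y = pow(G, x, P)
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)              # prover's commitment
    c = fs_hash(G, y, t)          # challenge derived by hashing the transcript
    s = (r + c * x) % (P - 1)     # response
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = fs_hash(G, y, t)          # verifier recomputes the same challenge
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, pi = prove(secrets.randbelow(P - 1))
print(verify(y, pi))  # True
```

Verification holds because `G^s = G^{r + c*x} = t * y^c mod P`; security of the transform rests on the hash behaving like a correlation-intractable function.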

December 01

Add to Calendar 2017-12-01 10:30:00 2017-12-01 12:00:00 America/New_York Alex Lombardi: Anonymous IBE, Leakage Resilience and Circular Security from New Assumptions Abstract: In anonymous identity-based encryption (IBE), ciphertexts not only hide their corresponding messages, but also their target identity. We construct an anonymous IBE scheme based on the Computational Diffie-Hellman (CDH) assumption in general groups (and thus, as a special case, based on the hardness of factoring Blum integers). Our approach extends and refines the recent tree-based approach of Cho et al. (CRYPTO’17) and Döttling and Garg (CRYPTO’17). Whereas the tools underlying their approach do not seem to provide any form of anonymity, we introduce two new building blocks which we utilize for achieving anonymity: blind garbled circuits (which we construct based on any one-way function), and blind batch encryption (which we construct based on CDH). In this talk, we will define and explore the new building block “(blind) batch encryption”, including a proof that batch encryption implies a public-key encryption scheme that is both resilient to leakage of a (1-o(1))-fraction of its secret key, and KDM secure (or circular secure) with respect to all linear functions of its secret key. We will then discuss how to obtain (anonymous) IBE from (blind) batch encryption and the implications for IBE constructions and beyond. When combined with constructions of our base primitives from concrete assumptions, we obtain the following new results: 1) Anonymous IBE from the CDH assumption; 2) IBE from the Learning Parity with Noise (LPN) assumption, albeit with very small noise rate; 3) Highly leakage resilient and KDM secure public key encryption from CDH and (low noise) LPN. Joint work with Zvika Brakerski, Gil Segev, and Vinod Vaikuntanathan. Hewlett, G882

November 17

Add to Calendar 2017-11-17 10:30:00 2017-11-17 12:00:00 America/New_York Ran Cohen: Round-Preserving Parallel Composition of Probabilistic-Termination Cryptographic Protocols Abstract: Round complexity is an important benchmark for multiparty computation (MPC) protocols. For several important MPC tasks, such as broadcast, (tight) lower bounds on the round complexity are known. However, some of these lower bounds can be circumvented when the termination round of every party is not a priori known, and simultaneous termination is not guaranteed. Protocols with this property are called probabilistic-termination (PT) protocols. Running PT protocols in parallel affects the round complexity of the resulting protocol in somewhat unexpected ways. For instance, an execution of $m$ protocols with constant expected round complexity might take $O(\log m)$ rounds to complete. In a seminal work, Ben-Or and El-Yaniv (Distributed Computing '03) developed a technique for parallel execution of arbitrarily many broadcast protocols, while preserving expected round complexity. More recently, Cohen et al. (CRYPTO '16) devised a framework for universal composition of PT protocols, and provided the first composable parallel-broadcast protocol with a simulation-based proof. These constructions crucially rely on the fact that broadcast is "privacy free," and do not generalize to arbitrary protocols in a straightforward way. This raises the question of whether it is possible to execute arbitrary PT protocols in parallel, without increasing the round complexity. In this work we tackle this question and provide both feasibility and infeasibility results. We construct a round-preserving protocol compiler, tolerating any dishonest minority of actively corrupted parties, that compiles arbitrary protocols into a protocol realizing their parallel composition, while having black-box access to the underlying \emph{protocols}. 
Furthermore, we prove that the same cannot be achieved, using known techniques, given only black-box access to the \emph{functionalities} realized by the protocols, unless merely security against semi-honest corruptions is required. This is joint work with Sandro Coretti, Juan Garay, and Vassilis Zikas. Hewlett G882
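The $O(\log m)$ blow-up mentioned in the abstract is easy to see empirically: if each protocol terminates in a geometrically distributed number of rounds (constant expectation), the maximum over $m$ parallel executions grows logarithmically in $m$. A small Monte Carlo sketch (function names are illustrative, not from the talk):

```python
import random
import statistics

def rounds_one_protocol(p=0.5, rng=random):
    """A PT protocol that terminates each round with probability p.
    Round count is Geometric(p), with constant expectation 1/p = 2."""
    r = 1
    while rng.random() >= p:
        r += 1
    return r

def parallel_rounds(m, trials=2000, seed=0):
    """Average number of rounds until ALL of m parallel executions finish."""
    rng = random.Random(seed)
    return statistics.mean(
        max(rounds_one_protocol(rng=rng) for _ in range(m)) for _ in range(trials)
    )

# Each protocol alone takes 2 rounds on average, but the parallel
# composition grows roughly like log2(m).
for m in (1, 10, 100, 1000):
    print(m, round(parallel_rounds(m), 2))
```

The printed averages grow by roughly a constant per tenfold increase in m, matching the $O(\log m)$ behavior that the talk's compiler is designed to avoid.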

October 27

Add to Calendar 2017-10-27 10:30:00 2017-10-27 12:00:00 America/New_York Rafael Pass: Explorations into Algorithmic Fairness Abstract: Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today’s classification tasks. In particular, following the work of Dwork et al (ITCS'12), a subject of much debate is that of finding suitable definitions of fairness in an algorithmic context. In this work in progress, we explore such algorithmic notions of fairness. We first introduce new definitions that formalize two of the most notable recent definitions of fairness in classification. Roughly speaking, *fair treatment* formalizes the intuition that “equal” individuals (i.e. individuals having the same class) from different groups (e.g. races) are similarly treated, whereas *fair predictivity* formalizes the intuition that the classifier’s accuracy is similar among the different groups. Our first main result---which strengthens earlier results by Chouldechova (FATML’16) and Kleinberg et al. (ITCS’17)---shows that these two notions of fairness are largely incompatible. Consequently, we need to give up on achieving one of them, and we focus our attention on achieving fair treatment. Our second main result shows how to take any (possibly unfair) classifier C over a finite outcome space, and transform it---by just perturbing the output of C---according to some distribution learned by just having black-box access to samples of labeled, and previously classified, data, into a "near-optimal" classifier C' satisfying fair treatment. Our approach relies on, and generalizes, an LP-based characterization of fair treatment due to Hardt et al (NIPS'16). Joint work with Andrew Morgan. Hewlett, G882
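The two notions can be made concrete with a small checker: fair treatment (in the equalized-odds sense of Hardt et al.) asks that per-group true/false positive rates match, while fair predictivity asks that predictive accuracy match. A hypothetical helper (the function name and encoding are mine, not from the talk):

```python
from collections import defaultdict

def group_rates(labels, preds, groups):
    """Per-group (TPR, FPR). Fair treatment asks these to be (roughly) equal
    across groups; fair predictivity instead asks for equal predictive accuracy.
    The talk's first result: with unequal base rates, both cannot hold at once."""
    stats = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for y, yhat, g in zip(labels, preds, groups):
        key = ("tp" if yhat else "fn") if y else ("fp" if yhat else "tn")
        stats[g][key] += 1
    out = {}
    for g, s in stats.items():
        tpr = s["tp"] / (s["tp"] + s["fn"]) if s["tp"] + s["fn"] else None
        fpr = s["fp"] / (s["fp"] + s["tn"]) if s["fp"] + s["tn"] else None
        out[g] = (tpr, fpr)
    return out

print(group_rates([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 1], ["A", "A", "A", "B", "B", "B"]))
# → {'A': (0.5, 0.0), 'B': (1.0, 1.0)}
```

In this toy example group B has both a higher true positive rate and a higher false positive rate than group A, so the classifier violates fair treatment.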

October 13

Add to Calendar 2017-10-13 10:30:00 2017-10-13 12:00:00 America/New_York Watermarking Cryptographic Functionalities from Standard Lattice Assumptions Abstract: A software watermarking scheme allows one to embed a "mark" into a program without significantly altering the behavior of the program. Moreover, it should be difficult to remove the watermark without destroying the functionality of the program. Recently, Cohen et al. (STOC 2016) and Boneh et al. (PKC 2017) showed how to watermark cryptographic functions such as PRFs against arbitrary removal strategies using indistinguishability obfuscation. A natural question is whether we can build watermarking schemes from standard assumptions. In this talk, I will present a watermarkable family of PRFs that are secure against arbitrary removal strategies from only standard lattice assumptions. Along the way, I will introduce a new intermediate primitive called a translucent PRF and show that it can be used to construct a watermarkable family of PRFs. Finally, I will present our construction of translucent PRFs using only standard lattice assumptions. This is joint work with David J. Wu. 32-G882

October 06

Add to Calendar 2017-10-06 10:30:00 2017-10-06 12:00:00 America/New_York A New Approach to Round-Optimal Secure Multiparty Computation Abstract: We present a new approach towards constructing round-optimal secure multiparty computation (MPC) protocols against malicious adversaries without trusted setup assumptions. Our approach builds on ideas previously developed in the context of covert multiparty computation [Chandran et al., FOCS’07] even though we do not seek covert security. Using our new approach, we obtain the following results: 1) A five round MPC protocol based on the Decisional Diffie-Hellman (DDH) assumption. 2) A four round MPC protocol based on one-way permutations and sub-exponentially secure DDH. This result is optimal in the number of rounds. Previously, no four-round MPC protocol for general functions was known and five-round protocols were only known based on indistinguishability obfuscation (and some additional assumptions) [Garg et al., EUROCRYPT’16]. Joint work with Arka Rai Choudhuri (JHU) and Abhishek Jain (JHU) 32-G882

September 29

Add to Calendar 2017-09-29 10:30:00 2017-09-29 12:00:00 America/New_York Towards Breaking the Exponential Barrier for General Secret Sharing Abstract: A non-monotone secret-sharing scheme for a Boolean (access) function F: {0,1}^n to {0,1} is a randomized algorithm that, on input a secret, outputs n pairs of shares s_{1,0},s_{1,1},...,s_{n,0},s_{n,1} such that for any x_1,...,x_n, the n shares s_{1,x_1},...,s_{n,x_n} determine the secret if F(x_1,...,x_n)=1 and reveal nothing about the secret otherwise. Standard monotone secret-sharing corresponds to the special case where F is monotone and s_{1,0} = ... = s_{n,0} = \bot. The best secret sharing schemes for general functions (in both the monotone and non-monotone case) have shares of size \Omega(2^n). It has long been conjectured that one cannot do much better, namely, that there are access functions for which any secret sharing scheme necessarily has shares of size 2^{\Omega(n)}. In this work, we show non-monotone secret sharing schemes for every access function F with shares of size 2^{O(\sqrt{n} \log n)}, disproving the conjecture for non-monotone secret sharing schemes. Our technique uses a new notion of decomposable matching vector (DMV) families as well as new constructions of DMV families. Matching vector families are combinatorial objects that have previously been used in circuit complexity, in the construction of Ramsey graphs, and in private information retrieval (PIR) protocols. Along the way, we also construct the first two-party and multi-party conditional disclosure of secrets (CDS) protocols for general functions with communication complexity 2^{O(\sqrt{n} \log n)}. Joint work with Vinod Vaikuntanathan and Hoeteck Wee. 32-G882
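The definition is easy to instantiate for a simple access function. For F = AND of all n bits, XOR-sharing the secret into the ones-shares and using fresh randomness for the zero-shares meets the definition. This is a toy sketch of the definition only (function names are illustrative); it says nothing about the talk's general 2^{O(sqrt(n) log n)} construction.

```python
import secrets

def share_and(secret: int, n: int, nbits: int = 32):
    """Toy non-monotone scheme for F(x) = x_1 AND ... AND x_n:
    the ones-shares s_{i,1} form an XOR-sharing of the secret, while the
    zero-shares s_{i,0} are independent randomness. Selecting any s_{i,0}
    replaces one XOR summand with fresh noise, hiding the secret."""
    ones = [secrets.randbits(nbits) for _ in range(n - 1)]
    last = secret
    for v in ones:
        last ^= v
    ones.append(last)                  # XOR of all ones-shares == secret
    zeros = [secrets.randbits(nbits) for _ in range(n)]
    return list(zip(zeros, ones))      # shares[i] = (s_{i,0}, s_{i,1})

def reconstruct(shares, x):
    """XOR together the selected shares s_{i, x_i}."""
    acc = 0
    for (s0, s1), xi in zip(shares, x):
        acc ^= s1 if xi else s0
    return acc

shares = share_and(0xDEADBEEF, 4)
print(hex(reconstruct(shares, [1, 1, 1, 1])))  # 0xdeadbeef
```

If any x_i = 0, the selected shares include fresh randomness and the XOR is uniform, so nothing about the secret leaks, exactly as the definition requires.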

September 22

Add to Calendar 2017-09-22 10:30:00 2017-09-22 12:00:00 America/New_York From Laconic Zero Knowledge to Public Key Cryptography Abstract: Since its inception, public-key encryption (PKE) has been a fundamental cryptographic primitive. Despite its significance, constructions of PKE are only known from a handful of specific intractability assumptions. In this work, we construct a PKE scheme from a natural complexity-theoretic assumption. Specifically, we construct PKE assuming the hardness of a language in NP that has an honest-verifier statistical zero-knowledge (SZK) argument-system in which the honest prover is efficient and laconic. That is, messages that the prover sends are efficiently computable (given the NP witness) and the prover's total communication is short (i.e., of sufficiently sub-logarithmic length). This gives us a common framework capturing assumptions known to imply PKE. Joint work with Itay Berman, Ron Rothblum and Prashant Nalini Vasudevan. 32-G882

September 15

Add to Calendar 2017-09-15 10:30:00 2017-09-15 12:00:00 America/New_York Adi Akavia: Secure search in the cloud: homomorphic encryption meets coresets Abstract: Secure Search is the problem of retrieving from a database table (or any unsorted array) the records matching specified attributes, as in SQL SELECT queries, but where the database and the query are encrypted. Secure search has been the leading example for practical applications of Fully Homomorphic Encryption (FHE) already in Gentry's seminal work in 2009; however, to the best of our knowledge all state-of-the-art secure search algorithms to date are realized by a polynomial of degree $\Omega(m)$ for $m$ the number of records, which is typically too slow in practice even for moderate size $m$. In this work we present the first algorithm for secure search that is realized by a polynomial of degree polynomial in $\log m$. We implemented our algorithm in an open-source library based on the HElib implementation of the BGV FHE scheme, and ran experiments on Amazon's EC2 cloud. Our experiments show that we can retrieve the first match in a database of millions of entries in less than an hour; moreover, for a bounded number of matches (e.g., up to 40 matches) we can retrieve all matches at a rate of billions of entries per machine per minute. We achieve our results via a novel paradigm that forges a link between cryptography and modern data reduction techniques known as coresets and sketches that turn very large databases into very small ones. The key idea is that even if we don't know how to efficiently answer a query securely on the cloud, it may suffice to compute only its corresponding coreset. Since the coreset is small, the client can quickly decode the desired answer after decrypting the coreset. As a central tool we design a novel sketch that returns the first strictly-positive entry in a (not necessarily sparse) array of non-negative reals; this sketch may be of independent interest. 
Joint work with Dan Feldman and Hayim Shaul. Hewlett G882

July 28

Add to Calendar 2017-07-28 10:30:00 2017-07-28 12:00:00 America/New_York Why We Rewind Abstract: Why do cryptographers who work on protocols keep talking about rewinding? Are they all stuck in the past? Why do we need to rewind anyway, and what does this have to do with obfuscation? In this talk, we'll discuss the mysteries of rewinding. We will also discuss how new techniques, such as super-polynomial strong simulation, can help us use rewinding to avoid rewinding. Along the way, we'll see why this is useful for reducing interaction and achieving composable security. The new works discussed in this talk are joint with Dakshita Khurana, and with Saikrishna Badrinarayan, Vipul Goyal, Abhishek Jain, and Dakshita Khurana. 32-G882

June 16

Add to Calendar 2017-06-16 10:30:00 2017-06-16 12:00:00 America/New_York Dakshita Khurana: How to Achieve Non-malleability in One or Two Rounds, or, A Knowledge Extraction Technique for Two Round Protocols Abstract: Knowledge extraction is an important technique central to several cryptographic protocols. Usually, protocols requiring knowledge extraction based on standard assumptions require at least three rounds of interaction. In this talk, I will describe a new black-box knowledge extraction technique for two round protocols that relies only on sub-exponential DDH, QR, or N-th residuosity. We use this extraction technique to obtain new protocols in the realm of non-malleable commitments and zero knowledge that were believed to be impossible so far. - We obtain the first constructions of two-message non-malleable commitments satisfying the strong definition of non-malleability with respect to commitment. - We also obtain one-round non-malleable commitments with respect to opening, which we use to obtain simple constructions of two round multi-party coin-tossing with simultaneous messages. - We also construct two-message zero-knowledge with strong super-polynomial simulation. In particular, our protocol has a uniform simulator that runs in time less than the quality of zero-knowledge. Joint work with Amit Sahai. Eprint: https://eprint.iacr.org/2017/291.pdf Hewlett, G882

May 26

Add to Calendar 2017-05-26 10:30:00 2017-05-26 12:00:00 America/New_York On the Quantitative Hardness of CVP Abstract: For odd integers p >= 1 (and p=\infty), we show that the Closest Vector Problem in the \ell_p norm (CVP_p) over rank n lattices cannot be solved in 2^{(1-\eps)n} time for any constant \eps > 0 unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to “almost all” values of p>=1, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of CVP_2 (i.e., CVP in the Euclidean norm), for which a 2^{n+o(n)}-time algorithm is known. In particular, our result applies for any p = p(n) \neq 2 that approaches 2 as n goes to infinity. Based on joint work with Huck Bennett and Sasha Golovnev. 32-G882
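For intuition, CVP_p itself is simple to state and to solve by brute force over a small coefficient box; the point of the talk is that, under SETH, something close to this exponential cost is likely inherent. An illustrative enumerator (names and the search bound are mine):

```python
from itertools import product

def lp_norm(v, p):
    """The ell_p norm of a vector."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def cvp_bruteforce(basis, target, p=2, bound=3):
    """Exhaustive CVP: try all integer coefficient vectors in [-bound, bound]^n
    and return the lattice vector closest to `target` in the ell_p norm.
    Runs in (2*bound + 1)^n time -- fine for tiny n, hopeless in general."""
    n = len(basis)
    dim = len(target)
    best, best_d = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(dim)]
        d = lp_norm([vi - ti for vi, ti in zip(v, target)], p)
        if d < best_d:
            best, best_d = v, d
    return best, best_d

v, d = cvp_bruteforce([[2, 0], [0, 3]], [1.2, 4.1], p=2)
print(v)  # [2, 3] -- the closest point of the lattice 2Z x 3Z
```

The enumeration only works for lattices whose closest vector has small coefficients; the hardness result above says that for general lattices no algorithm does substantially better than 2^n in the ell_p norms considered.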

April 28

Add to Calendar 2017-04-28 10:30:00 2017-04-28 12:00:00 America/New_York Jonathan Ullman: Tight Lower Bounds for Differentially Private Selection Abstract: A pervasive task in the differential privacy literature is to select the k items of "highest quality" out of a set of d items, where the quality of each item depends on a sensitive dataset that must be protected. Variants of this task arise naturally in fundamental problems like feature selection and hypothesis testing, and also as subroutines for many sophisticated differentially private algorithms. The standard approaches to these tasks---repeated use of the exponential mechanism or the sparse vector technique---approximately solve this problem given a dataset of n = O(sqrt(k)log(d)) samples. We provide a tight lower bound for some very simple variants of the private selection problem. Our lower bound shows that a sample of size n = Omega(sqrt(k)log(d)) is required even to achieve a very minimal accuracy guarantee. Our results are based on an extension of the fingerprinting method to sparse selection problems. Previously, the fingerprinting method has been used to provide tight lower bounds for answering an entire set of d queries, but often only some much smaller set of k queries are relevant. Our extension allows us to prove lower bounds that depend on both the number of relevant queries and the total number of queries. Joint work with Thomas Steinke. Hewlett, G882
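The exponential mechanism mentioned above is short to write down: sample item i with probability proportional to exp(eps * q_i / (2 * sensitivity)), and apply it k times for top-k selection. A minimal sketch (function names are mine, not from the talk):

```python
import math
import random

def exponential_mechanism(qualities, eps, sensitivity=1.0, rng=random):
    """Differentially private selection: pick index i with probability
    proportional to exp(eps * qualities[i] / (2 * sensitivity))."""
    weights = [math.exp(eps * q / (2.0 * sensitivity)) for q in qualities]
    total = sum(weights)
    r = rng.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(qualities) - 1   # guard against floating-point drift

rng = random.Random(42)
qualities = [0.0, 0.0, 10.0, 0.0]   # item 2 has much higher quality
picks = [exponential_mechanism(qualities, eps=1.0, rng=rng) for _ in range(1000)]
print(picks.count(2) / 1000)  # close to 1: the high-quality item wins w.h.p.
```

The talk's lower bound says that, information-theoretically, no private mechanism (this one included) can select the top k of d items accurately from fewer than Omega(sqrt(k)log(d)) samples.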

April 21

Add to Calendar 2017-04-21 10:30:00 2017-04-21 12:00:00 America/New_York Sunoo Park: New Constructions of Non-Malleable Commitments and Applications Abstract: Non-malleable commitments (NMC) are fundamental cryptographic primitives. Since their conception by Dolev, Dwork and Naor (STOC 91), numerous applications of NMC have been discovered including round-efficient MPC. Recently Goyal, Pandey and Richelson (STOC 16) constructed round-optimal non-malleable commitments using a connection to split-state non-malleable codes. However, this connection was not general; it relied on specific properties of the inner-product code of Aggarwal et al. (STOC 14). As such, the resulting non-malleable commitments suffer from high communication complexity and are not secure against an adversary who launches a concurrent attack. Both drawbacks significantly limit the utility of the construction of Goyal et al. Recently there have been exciting new constructions of two-source non-malleable extractors, objects which are known to imply split-state non-malleable codes. Chattopadhyay, Goyal and Li (STOC 16) and Li (ECCC 16) construct two-source non-malleable extractors which provide significant improvements to the concurrent security and rate of the codes of Aggarwal et al. In this work we revisit the construction of Goyal et al. and make three contributions: * We identify a natural property of a strong two-source non-malleable extractor called efficient conditional preimage sampling which makes it sufficient for round-optimal non-malleable commitments. Thus we make the compiler of Goyal et al. more modular, so new advances in non-malleable extractors will lead to advances in non-malleable commitments. * We prove that the recent non-malleable extractor of Li supports efficient conditional preimage sampling and so get a new construction of non-malleable commitments. The basic version of this construction, which is non-malleable against a synchronizing adversary, has only three rounds of interaction and has quasi-optimal communication complexity. This improves drastically upon all previous schemes for non-malleable commitment. * We prove also that the non-malleable extractor of Chattopadhyay et al. supports efficient conditional preimage sampling, and so get another new construction of round-optimal non-malleable commitment. This time our scheme satisfies a weaker notion of non-malleability (known as non-malleability wrt replacement) in the much more demanding bounded concurrency setting. We then show that this weaker notion suffices for the main application of NMC to round-efficient MPC by giving a (four round) round-optimal multi-party coin flipping protocol. Using the compiler of Garg, Mukherjee, Pandey and Polychroniadou (EUROCRYPT 16) we obtain several round-efficient protocols for general MPC based on various assumptions. Previous round-optimal coin flipping protocols relied on the non-standard assumption that adaptive one-way functions exist. Joint work with Vipul Goyal, Ashutosh Kumar, Silas Richelson, and Akshayaram Srinivasan. Hewlett, G882

April 14

Add to Calendar 2017-04-14 10:30:00 2017-04-14 12:00:00 America/New_York Nir Bitansky: Verifiable Random Functions from Non-Interactive Witness-Indistinguishable Proofs Abstract: Verifiable random functions (VRFs), introduced by Micali, Rabin, and Vadhan (FOCS 99), are pseudorandom functions where the owner of the seed, in addition to computing the function's value at any point, can also generate a non-interactive proof that the value is correct, without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, considerable efforts have been directed towards the construction of such VRFs. While these efforts have resulted in a variety of algebraic constructions (from the RSA problem or bilinear maps), the relation between VRFs and other general primitives is still not well understood. We present new constructions of VRFs from general primitives, the main one being non-interactive witness-indistinguishable proofs (NIWIs). This includes: - A selectively-secure VRF assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively-secure assuming subexponential hardness of the underlying primitives. - An adaptively-secure VRF assuming (polynomially-hard) NIWIs, noninteractive commitments, and {\em (single-key) constrained pseudorandom functions} for a restricted class of constraints. The above primitives can be instantiated under various standard assumptions, which yields corresponding VRF instantiations, under different assumptions than were known so far. One notable example is a non-uniform construction of VRFs from subexponentially-hard trapdoor permutations, or more generally, from verifiable pseudorandom generators. This partially answers an open question by Dwork and Naor (FOCS '00). The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of indistinguishability obfuscation. Hewlett, G882

April 07

Add to Calendar 2017-04-07 10:30:00 2017-04-07 12:00:00 America/New_York Aloni Cohen: Cryptography with Updates Abstract: Starting with the work of Bellare, Goldreich and Goldwasser, a rich line of work has studied the design of updatable cryptographic primitives. For example, in an updatable signature scheme, it is possible to efficiently transform a signature over a message into a signature over a related message without recomputing a fresh signature. In this work, we continue this line of research, and perform a systematic study of updatable cryptography. We take a unified approach towards adding updatability features to recently studied cryptographic objects such as attribute-based encryption, functional encryption, witness encryption, indistinguishability obfuscation, and many others that support non-interactive computation over inputs. We, in fact, go further and extend our approach to classical protocols such as zero-knowledge proofs and secure multiparty computation. To accomplish this goal, we introduce a new notion of {\em updatable randomized encodings} that extends the standard notion of randomized encodings to incorporate updatability features. We show that updatable randomized encodings can be used to generically transform cryptographic primitives to their updatable counterparts. We provide various definitions and constructions of updatable randomized encodings based on varying assumptions, ranging from one-way functions to compact functional encryption. Joint work with Prabhanjan Ananth and Abhishek Jain. Hewlett G882

March 24

Add to Calendar 2017-03-24 10:30:00 2017-03-24 12:00:00 America/New_York From Algebraic Complexity to Zero Knowledge Protocols Abstract: We present new techniques for achieving unconditional zero knowledge within models that combine probabilistic checking and interaction. Our techniques establish novel connections to algebraic complexity, and enable us to obtain natural zero-knowledge analogues of classical PCP and IP protocols. Our constructions require only simple and cheap modifications to the verifier of the original (non-zero-knowledge) protocol. 32-G882

March 17

Add to Calendar 2017-03-17 10:30:00 2017-03-17 12:00:00 America/New_York Itay Berman: Zero-Knowledge Proofs of Proximity Abstract: Interactive proofs of proximity (Ergun, Kumar and Rubinfeld, Information & Computation, 2004 and Rothblum, Vadhan and Wigderson, STOC 2013), or IPPs, are interactive proofs in which the verifier runs in time sub-linear in the input's length. Since the verifier cannot even read the entire input, following the property testing literature, the requirement is that she accepts inputs that are in the language and rejects ones that are far from the language. However, these proofs could (and in many cases, do) betray considerable global information about the input to the verifier. In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier while ensuring that she learns nothing more than a few locations of the input (and the fact that the input is "close" to the language). Our main focus is the setting of statistical zero-knowledge where we show that the following hold unconditionally (where N denotes the input size): - Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): We show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester. - Statistical ZKPPs can be sub-exponentially less efficient than IPPs: We show a property which has an IPP with a polylog(N) time verifier, but cannot have an SZKPP with even an N^(o(1)) time verifier. - Statistical ZKPPs for some graph-based properties such as promise versions of expansion and bipartiteness. Lastly, we also consider the computational setting where we show that: - Assuming the existence of one-way functions, every language computable either in (logspace uniform) NC or in SC has a computational zero-knowledge proof of proximity with a (roughly) sqrt(N) time verifier. 
- Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) verifier. Joint work with Ron D. Rothblum and Vinod Vaikuntanathan. Hewlett, G882