Blockwise p-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners

Abstract. Austrin et al. [1] studied the notion of bitwise p-tampering attacks on randomized algorithms, in which an efficient 'virus' gets to control each bit of the randomness with independent probability p in an online way. The work of [1] showed how to break certain 'privacy primitives' (e.g., encryption, commitments, etc.) through bitwise p-tampering, by giving a bitwise p-tampering biasing attack that increases the average E[f(U_n)] of any efficient function f : {0, 1}^n → [−1, +1] by Ω(p · Var[f(U_n)]). In this work, we revisit and extend the bitwise tampering model of [1] to the blockwise setting, where blocks of the randomness become tamperable with independent probability p. Our main result is an efficient blockwise p-tampering attack that biases the average E[f(X)] of any efficient function f mapping arbitrary X to [−1, +1] by Ω(p · Var[f(X)]), regardless of how X is partitioned into individually tamperable blocks X = (X_1, . . . , X_n). Relying on previous works of [1, 19, 36], our main biasing attack immediately implies efficient attacks against the privacy primitives as well as seedless multi-source extractors, in a model where the attacker gets to tamper with each block (or source) of the randomness with independent probability p. Further, we show how to increase the classification error of deterministic learners in the so-called 'targeted poisoning' attack model under Valiant's adversarial noise. In this model, an attacker has a 'target' test example d in mind and wishes to increase the error of classifying d while she gets to tamper with each training example with independent probability p in an online way.

1 Introduction

In this work, we study tampering attacks that efficiently manipulate the randomness of randomized algorithms with adversarial goals in mind. Tampering attacks could naturally be studied in the context of cryptographic algorithms that (wish to) access perfectly uniform and untampered randomness for the sake of

S. Mahloujifar — Supported by University of Virginia's SEAS Research Innovation Award. M. Mahmoody — Supported by NSF CAREER award CCF-1350939 and University of Virginia's SEAS Research Innovation Award.
© International Association for Cryptologic Research 2017. Y. Kalai and L. Reyzin (Eds.): TCC 2017, Part II, LNCS 10678, pp. 245–279, 2017. https://doi.org/10.1007/978-3-319-70503-3_8


achieving security. However, the scope of such attacks goes beyond the context of cryptography and could be studied more broadly for any class of algorithms that depend on some form of untampered random input and try to achieve specific goals (e.g., learning algorithms using untampered training data to generate a hypothesis). Here, we are interested in understanding the power and limitations of such tampering attacks over the randomness, when the adversary can tamper with, or even control, ≈ p fraction of the randomness. The most relevant to our study here is the work of Austrin et al. [1], which introduced the notion of bitwise p-tampering attacks on the randomness of cryptographic primitives.
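To make the bitwise special case concrete, the following sketch simulates a simple greedy p-tampering attacker in the spirit of the attacks discussed above. It is an illustrative toy, not the construction of [1] or of this paper: for each bit, with independent probability p, the 'virus' fixes the bit to whichever value yields the higher sampled estimate of the conditional mean of f. All function names and parameters below are our own choices for illustration.

```python
import random


def greedy_p_tamper(f, n, p, rng, samples=150):
    """Produce one n-bit string under a greedy bitwise p-tampering attack.

    Each bit is tamperable with independent probability p; when a bit is
    tamperable, the attacker fixes it to the value whose estimated
    conditional mean of f (estimated by sampling uniform completions of
    the remaining bits) is larger.  This is a sketch of the greedy idea
    only, not the paper's exact attack.
    """
    prefix = []
    for i in range(n):
        if rng.random() < p:  # this bit falls under the virus's control
            best_bit, best_est = 0, float("-inf")
            for b in (0, 1):
                est = sum(
                    f(prefix + [b] + [rng.randint(0, 1) for _ in range(n - i - 1)])
                    for _ in range(samples)
                ) / samples
                if est > best_est:
                    best_bit, best_est = b, est
            prefix.append(best_bit)
        else:  # honest bit: a fresh unbiased coin
            prefix.append(rng.randint(0, 1))
    return prefix


def majority(bits):
    """The +/-1 majority function, a standard test case with E[f(U_n)] = 0."""
    return 1 if 2 * sum(bits) > len(bits) else -1


if __name__ == "__main__":
    rng = random.Random(0)
    n, p, trials = 9, 0.4, 150
    baseline = sum(
        majority([rng.randint(0, 1) for _ in range(n)]) for _ in range(trials)
    ) / trials
    biased = sum(
        majority(greedy_p_tamper(majority, n, p, rng)) for _ in range(trials)
    ) / trials
    print(f"E[f] untampered ~ {baseline:+.2f}, tampered ~ {biased:+.2f}")
```

For a monotone function such as majority the greedy choice effectively forces each controlled bit to 1, so the empirical mean of f visibly shifts toward +1, consistent with an Ω(p · Var[f(U_n)]) bias.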