  • Print publication year: 2000
  • Online publication date: October 2013

9 - Pseudorandomness

Summary

As we saw in Chapter 4, the difficulty of sampling geometric spaces directly reflects their discrepancy. In the presence of unbounded VC-dimension, we have no combinatorial structure to hang on to, and naive randomization is often the preferred route. The trouble is that the underlying probability spaces are usually of exponential size, and straightforward derandomization is intractable. This chapter shows that, by sampling sparse low-discrepancy subsets of the probability spaces, we can often considerably reduce the amount of randomness needed. The connection is intuitively obvious: A low-discrepancy subset should be mostly indistinguishable from the whole set. So, by sampling from it we should be able to fool the casual observer into thinking that we are actually sampling from the whole set. Of course, “casual observer” is our euphemism for “polynomially bounded algorithm.” Thus, if this chapter needed a wordy subtitle, it could be: How designers of probabilistic algorithms can limit the amount of randomness they need through the judicious use of the discrepancy method.

Suppose that we wish to find a random sample S of size s in a universe with n elements. For concreteness, the universe can be thought of as {0, …, n − 1}. The quality of the random sample is measured by its discrepancy relative to any subset. In other words, imagine that we fix a certain F ⊆ {0, …, n − 1}.
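To make this measure concrete, here is a minimal sketch (not from the text) of one standard way to quantify the discrepancy of a sample S relative to a fixed subset F: the gap between F's density inside S and F's density in the whole universe. The function name and the worked example are illustrative assumptions, not the book's notation.

```python
def sample_discrepancy(S, F, n):
    """Relative discrepancy of sample S with respect to subset F of {0, ..., n-1}:
    the absolute gap between F's density inside S and F's density in the universe,
    i.e. | |S ∩ F| / |S|  -  |F| / n |.  A good sample keeps this small for every F."""
    S, F = set(S), set(F)
    return abs(len(S & F) / len(S) - len(F) / n)

# Illustrative example: universe {0, ..., 9}, F = the even numbers (density 1/2).
n = 10
F = range(0, n, 2)
balanced = sample_discrepancy({0, 1, 2, 3}, F, n)  # half of S is even -> 0.0
skewed = sample_discrepancy({0, 2, 4, 6}, F, n)    # all of S is even -> 0.5
```

A sample of low discrepancy keeps this quantity small simultaneously for every subset F under consideration, which is what lets it stand in for the full universe.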

The Discrepancy Method
  • Online ISBN: 9780511626371
  • Book DOI: https://doi.org/10.1017/CBO9780511626371