The celebrated “hardness vs. randomness” paradigm lets us derandomize algorithms under the assumption that certain computational problems are hard to solve. Classical applications of this paradigm led to many exciting results in complexity theory, and in recent years we have been able to overcome several barriers of the original approach, using a host of new techniques.
In this talk I will survey a small selection of recent results in hardness vs. randomness. The common theme will be a close look at reconstructive pseudorandom generators, studying the complexity of reconstruction and the possibility of deterministic reconstructions.
The talk will be high-level, aiming to assume no prior knowledge.