Room P3.10, Mathematics Building
Nicolas Resch, University of Amsterdam
List-decodable/-recoverable codes in the zero-rate regime
A classical result of Plotkin (IRE Transactions on Information Theory, 1960) establishes that over the binary alphabet, codes of positive rate cannot have relative minimum distance greater than 1/2, and thus can uniquely correct at most a 1/4 fraction of bit-flip errors. It is additionally known that any code correcting a (1/4+ε)-fraction of errors (for small ε>0) can have size at most O(1/ε), and that this bound is tight.
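To make the O(1/ε) scaling concrete, here is a small numerical sketch (my own illustration, not part of the abstract) using the standard form of the Plotkin bound for even distance d > n/2, namely |C| ≤ 2⌊d/(2d−n)⌋; setting d = (1/2+ε)n makes the right-hand side roughly 1/(2ε).

```python
def plotkin_bound(n: int, d: int) -> int:
    """Plotkin's upper bound on the size of a binary code of length n
    and (even) minimum distance d > n/2: |C| <= 2 * floor(d / (2d - n))."""
    assert 2 * d > n
    return 2 * (d // (2 * d - n))

# With d = (1/2 + eps) * n, the bound grows like 1/(2*eps):
for eps_inv in (4, 8, 16):          # eps = 1/4, 1/8, 1/16
    n = 4 * eps_inv                 # chosen so that d below is an even integer
    d = n // 2 + n // eps_inv       # d = (1/2 + eps) * n
    print(eps_inv, plotkin_bound(n, d))   # bound doubles as eps halves
```

Each halving of ε doubles the bound, matching the tightness claim above.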
If one moves to list-decoding binary codes with list size L — that is, the decoder may output up to L guesses for the transmitted message, and decoding succeeds as long as one of the guesses is correct — Blinovsky (Problems of Information Transmission, 1986) computed an analogous threshold p_L for each L. His argument additionally establishes that any (p_L+ε, L)-list-decodable code has size at most O_{ε,L}(1) — a constant, but with a (massive) dependence on ε. Later, Alon, Bukh and Polyanskiy (IEEE Transactions on Information Theory, 2018) showed that for odd L such codes have size O_L(1/ε) (as in Plotkin's bound), whereas already for L = 2 there exist (p_2+ε, 2)-list-decodable codes of size Ω(1/ε^{3/2}).
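The definition of (p, L)-list-decodability used above can be checked by brute force on toy codes; the following sketch (my own, for illustration only) does so for the length-4 binary repetition code.

```python
from itertools import product

def hamming(x, y):
    """Hamming distance between two tuples of equal length."""
    return sum(a != b for a, b in zip(x, y))

def list_decode(code, received, p):
    """All codewords within Hamming distance p*n of the received word."""
    n = len(received)
    return [c for c in code if hamming(c, received) <= p * n]

def is_list_decodable(code, p, L):
    """A code is (p, L)-list-decodable if every received word in {0,1}^n
    has at most L codewords within radius p*n.  Checked by brute force."""
    n = len(code[0])
    return all(len(list_decode(code, y, p)) <= L
               for y in product((0, 1), repeat=n))

# The length-4 repetition code {0000, 1111} has minimum distance 4, so it is
# uniquely decodable (L = 1) up to radius 1, but at radius 2 the word 0011
# is equidistant from both codewords, forcing list size 2.
rep4 = [(0, 0, 0, 0), (1, 1, 1, 1)]
```

Here `is_list_decodable(rep4, 0.25, 1)` holds, while at p = 0.5 one needs L = 2.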
In this talk, we will generalize all of these results to arbitrary (constant) alphabet sizes q > 2. A crucial tool in the proof is the notion of Schur convexity, which in certain cases allows one to show that the optimum of a function over a space of probability distributions is attained at the uniform distribution.
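As a standard illustration of this phenomenon (my own sketch, not the talk's argument): Shannon entropy is Schur-concave, meaning that if y majorizes x then H(y) ≤ H(x); since every distribution on a fixed support majorizes the uniform one, entropy is maximized at the uniform distribution.

```python
import math

def entropy(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def majorizes(y, x):
    """True if y majorizes x: sorted-descending prefix sums of y dominate
    those of x (both vectors sum to 1).  A tiny tolerance absorbs
    floating-point rounding in the prefix sums."""
    ys, xs = sorted(y, reverse=True), sorted(x, reverse=True)
    return all(sum(ys[:k]) >= sum(xs[:k]) - 1e-12
               for k in range(1, len(y) + 1))

uniform = [1 / 4] * 4
for p in ([0.7, 0.1, 0.1, 0.1], [0.4, 0.3, 0.2, 0.1]):
    assert majorizes(p, uniform)            # every p majorizes uniform...
    assert entropy(p) <= entropy(uniform)   # ...so Schur-concavity caps H(p)
```

The same majorization argument, applied to the q-ary distributions arising in the proofs, is where Schur convexity enters.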