2000 seminars


Room P3.10, Mathematics Building

João Ribeiro, Carnegie Mellon University

The mother of all leakages: How to simulate noisy leakages via bounded leakage (almost) for free

The ubiquity of real-world side-channel attacks has led to the rise of leakage-resilient cryptography — cryptographic schemes which remain secure even when some side information is leaked from supposedly secret system components, such as private keys. Most works in leakage-resilient cryptography consider the Bounded Leakage Model, where one assumes that the side information leaked is bounded in length. However, leakage length is not a reliable estimate of leakage quality. For example, temperature or computation time measurements may require many more bits to be described than the private key under attack, but this does not necessarily mean that they fully reveal the key. Motivated by this, some works have considered the more general Noisy Leakage Model instead, where it is only required that the leakage is a sufficiently "noisy" version of the secret information, for various measures of "noise".
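
As a toy illustration of why leakage length is a poor proxy for leakage quality (my example, not the paper's): a single short bounded-leakage query hands the adversary genuine key material, while a long trace of noisy Hamming-weight readings takes far more bits to write down yet reveals little about the key itself.

    import random
    import secrets

    # Toy illustration (not from the paper): bounded vs. noisy leakage
    # on a 128-bit key. The bounded query is short but all signal; the
    # noisy trace is long but mostly noise.

    KEY_BITS = 128
    key = secrets.randbits(KEY_BITS)

    def bounded_leakage(k, t=8):
        """Bounded Leakage Model: the adversary learns an arbitrary
        t-bit function of the key, e.g. its low t bits."""
        return k & ((1 << t) - 1)

    def noisy_leakage(k, samples=1000, sigma=4.0):
        """Noisy Leakage Model (toy): many noisy readings of the key's
        Hamming weight, as a power or timing side channel might give."""
        hw = bin(k).count("1")
        return [hw + random.gauss(0.0, sigma) for _ in range(samples)]

    print(bounded_leakage(key))     # 8 bits, every one of them about the key
    print(len(noisy_leakage(key)))  # 1000 floats: long, yet far from the key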

Given that bounded leakage is a (very special) sub-case of noisy leakage, in this talk we will be interested in the following question:

What does bounded leakage-resilience tell us about noisy leakage-resilience?

Surprisingly, we show that for common models of noisy leakage it is possible to simulate one query of noisy leakage using one query of bounded leakage, with small error, in the information-theoretic setting. In particular, this implies that cryptographic schemes secure against bounded leakage are also secure against noisy leakage with almost no loss in security. To complement this, we show that our reductions are nearly optimal.

Based on joint work with Gianluca Brian, Antonio Faonio, Maciej Obremski, Mark Simkin, Maciej Skórski, and Daniele Venturi. Available at https://eprint.iacr.org/2020/1246.


Room P3.10, Mathematics Building

Miguel Couceiro, Université de Lorraine

Galois theory of analogical classifiers

Analogical proportions are statements of the form "a is to b as c is to d" (denoted a:b::c:d), where a, b, c, d are tuples of attribute values describing items. Analogical inference relies on the idea that if four items a, b, c, d are in analogical proportion for most of the attributes describing them, then the proportion may also hold for the remaining attributes. Similarly, if class labels are known for a, b, c and unknown for d, then one may infer the label of d as the solution of an analogical proportion equation.

Theoretically, it is quite challenging to characterize the situations where such an analogical inference principle (AIP) can be soundly applied. In the case of Boolean attributes, a first step toward explaining the analogical mechanism was to characterize the set of classifiers for which AIP is sound (i.e., for which no error occurs). At IJCAI 2017, we took the minimal model of analogy (i.e., containing only patterns of the form x : x :: y : y and x : y :: x : y) and showed that these analogical classifiers coincide with the set of affine Boolean functions. Moreover, when the function is close to being affine, we showed at IJCAI 2018 that the prediction accuracy remains high. These results were then extended to nominal domains at SUM 2020.
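
To make both the inference mechanism and the affine characterization concrete, here is a small self-contained sketch (illustrative code, not the authors'): the six valid patterns of the minimal Boolean model, equation solving, and a brute-force check that AIP is sound for an affine function such as XOR but not for AND.

    from itertools import product

    # The six valid patterns of the minimal model: x:x::y:y and x:y::x:y,
    # constants included.
    PATTERNS = {(0, 0, 0, 0), (1, 1, 1, 1), (0, 0, 1, 1),
                (1, 1, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0)}

    def in_proportion(a, b, c, d):
        """a:b::c:d holds iff it holds on every attribute."""
        return all(q in PATTERNS for q in zip(a, b, c, d))

    def solve(x, y, z):
        """The unique w with x:y::z:w, or None when the equation is
        unsolvable (exactly for (x, y, z) = (0, 1, 1) and (1, 0, 0))."""
        w = x ^ y ^ z
        return w if (x, y, z, w) in PATTERNS else None

    def aip_sound(f, n):
        """AIP is sound for f : {0,1}^n -> {0,1} if, on every valid
        proportion a:b::c:d, a solvable label equation predicts f(d)."""
        cube = list(product((0, 1), repeat=n))
        for a, b, c, d in product(cube, repeat=4):
            if in_proportion(a, b, c, d):
                y = solve(f(a), f(b), f(c))
                if y is not None and y != f(d):
                    return False
        return True

    print(aip_sound(lambda t: t[0] ^ t[1], 2))  # True: XOR is affine
    print(aip_sound(lambda t: t[0] & t[1], 2))  # False: AND is not affine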

More generally, the notion of analogy preservation gives rise to a Galois connection between classifiers and the models of analogy that they preserve. In this talk, we will explore this polarity and establish a Galois theory of analogical classifiers. We will also derive several consequences and, if time allows, discuss recent applications to NLP-related tasks.
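
Schematically (my rendering of the standard construction, not necessarily the talk's notation): the relation "classifier $f$ preserves analogy $a$" induces the pair of maps

    $\mathrm{AP}(\mathcal{A}) = \{\, f : f \text{ preserves every } a \in \mathcal{A} \,\}$,
    $\mathrm{Inv}(\mathcal{F}) = \{\, a : a \text{ is preserved by every } f \in \mathcal{F} \,\}$,

which form a Galois connection: $\mathcal{F} \subseteq \mathrm{AP}(\mathcal{A})$ if and only if $\mathcal{A} \subseteq \mathrm{Inv}(\mathcal{F})$.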

Most of the results presented are part of ongoing research developed with Erkko Lehtonen, Esteban Marquer, Pierre-Alexandre Murena, Nicolas Hug, Henri Prade, and Gilles Richard.


Room P5.18, Mathematics Building

Diogo Poças, Faculdade de Ciências da Universidade de Lisboa

Existence and Complexity of Approximate Equilibria in Weighted Congestion Games

We study the existence of approximate pure Nash equilibria (PNE) in weighted atomic congestion games with polynomial cost functions of maximum degree $d$. Previously it was known that $d$-approximate equilibria always exist, while nonexistence was established only for small constants, namely for 1.153-PNE. We improve significantly upon this gap, proving that such games in general do not have $\sqrt{d}$-approximate PNE, which provides the first super-constant lower bound.
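
To make the central object concrete, here is a minimal brute-force sketch (mine, not the paper's) of the $\alpha$-approximate PNE condition: a profile is an $\alpha$-PNE if no player can decrease her own cost by more than a factor $\alpha$ through a unilateral deviation.

    from itertools import product

    # Illustrative sketch: alpha-approximate pure Nash equilibria in a tiny
    # weighted congestion game with resource cost c_e(x) = x**d, where x is
    # the total weight of the players using resource e.

    def player_cost(profile, i, weights, d):
        """Cost of player i: sum over her chosen resources of (load)**d."""
        total = 0.0
        for e in profile[i]:
            load = sum(w for s, w in zip(profile, weights) if e in s)
            total += load ** d
        return total

    def stretch(profile, strategies, weights, d):
        """Least alpha for which profile is an alpha-approximate PNE: the
        worst ratio (current cost)/(deviation cost) over all deviations."""
        alpha = 1.0
        for i, options in enumerate(strategies):
            cur = player_cost(profile, i, weights, d)
            for s in options:
                dev = list(profile)
                dev[i] = s
                alpha = max(alpha, cur / player_cost(dev, i, weights, d))
        return alpha

    # Two players with weights 1 and 2; each picks one of two identical
    # resources; quadratic costs (d = 2).
    weights, d = [1.0, 2.0], 2
    strategies = [[{"e1"}, {"e2"}], [{"e1"}, {"e2"}]]
    best = min(product(*strategies),
               key=lambda p: stretch(p, strategies, weights, d))
    print(best, stretch(best, strategies, weights, d))  # separated: exact PNE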

Furthermore, we provide a black-box, gap-introducing method that combines such nonexistence results with a specific circuit gadget in order to derive NP-completeness for the decision version of the problem. In particular, deploying this technique, we show that deciding whether a weighted congestion game has a $\sqrt{d}$-PNE is NP-complete. Previous hardness results were known only for the special case of exact equilibria and arbitrary cost functions. The circuit gadget is of independent interest and also allows us to prove hardness for a variety of problems related to the complexity of PNE in congestion games.


Room P3.10, Mathematics Building

Luca Viganò, King's College London, UK

Formal Methods for Socio-Technical Security (Formal and Automated Analysis of Security Ceremonies)

Software engineers and analysts traditionally focus on cyber systems as technical systems, built only from software processes, communication protocols, crypto algorithms, etc. They often neglect to consider, or choose not to consider, the human user as a component of the system’s security, since they lack the expertise to fully understand human factors and how these affect security. However, humans should not be designed out of the security loop. Instead, we must treat security assurance as a true socio-technical problem rather than a merely technical one, and consider cyber systems as socio-technical systems with people at their heart. The main goal of this talk is to advocate the use of formal methods to establish the security of socio-technical systems, and to discuss some of the most promising approaches, including those that I have helped develop. I will also discuss my recent work on “Cybersecurity Show and Tell”, namely how different kinds of artworks can be used to explain cybersecurity, and how telling (i.e., explaining notions in a formal, technical way) can be paired with showing through visual storytelling or other forms of storytelling.


Room P5.18, Mathematics Building

Charles Bédard, Università della Svizzera italiana, Switzerland

Emergence, Sophistication and Epistemology

A quantitative and objective notion of emergence is proposed, using algorithmic information theory as the basis for an objective framework in which a bit string encodes observational data. A plurality of drops in the Kolmogorov structure function of such a string is seen as the hallmark of emergence. The definition yields some theoretical results and extends the notions of coarse-graining and boundary conditions.
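
For reference, the Kolmogorov structure function of a string $x$ is the standard object from algorithmic statistics (my rendering, in the usual notation):

    $h_x(\alpha) = \min\{\, \log_2 |S| : x \in S,\ K(S) \le \alpha \,\}$,

the log-size of the smallest finite set $S$ containing $x$ that can be described in at most $\alpha$ bits. Roughly, a "drop" is a value of $\alpha$ at which a small increase in allowed model complexity buys a much larger decrease in $h_x$.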

Sophistication and logical depth are two quantitative measures of the non-trivial organization of an object. Although apparently different, these measures have been proven equivalent (by Prof. André Souto and collaborators) once logical depth is renormalized by the busy beaver function. However, if the measures are relativized to auxiliary information y, it turns out that the ability of y to solve the halting problem introduces a distortion between Soph(x|y) and Depth(x|y).
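
For orientation, one standard way to state the two measures (the exact variants used in the cited work may differ) is, for constants $c$ and $b$:

    $\mathrm{Soph}_c(x) = \min\{\, K(S) : x \in S,\ K(S) + \log_2 |S| \le K(x) + c \,\}$,
    $\mathrm{Depth}_b(x) = \min\{\, \mathrm{time}(p) : U(p) = x,\ |p| \le K(x) + b \,\}$,

and renormalizing by the busy beaver function roughly means reporting, instead of $\mathrm{time}(p)$ itself, the least $n$ with $\mathrm{time}(p) \le \mathrm{BB}(n)$.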

Finally, I share my latest thoughts about epistemology (the philosophy of knowledge), more precisely, about the problem of making it rigorous. I explain the gap between Bayesian and Popperian epistemology and speculate on some possible resolutions through algorithmic information theory. Note that this problem is not 'merely' philosophical – it is the key problem that needs to be solved to have proper thinking machines.


Room P3.10, Mathematics Building

Jean-Yves Béziau, UFRJ, Brazil

Classical Propositional Logic without Atoms

Classical propositional logic (like other propositional logics) is generally presented with atomic formulas, owing to a philosophical idea of Wittgenstein (1921) that Bertrand Russell baptized “Logical Atomism” (1924). This philosophical view is controversial, and from a mathematical point of view it is possible to construct propositional logic without atoms. This is what we will show here in the case of classical propositional logic. Surprisingly enough, this has not yet been studied in detail. On the one hand, Gödel very succinctly discussed the idea, in an informal way (1929/30). On the other hand, Suszko developed what he called “abstract logic”, a general theory of propositional logics without the atomicity assumption, but did not study particular cases in detail.

We will here present a precise mathematical definition of classical propositional logic without atoms, give a semantics and a sequent calculus for it, and prove the completeness theorem using a very general abstract version of this theorem (Béziau 2001).
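
Schematically, the bivaluation semantics at stake has the following shape (this is the generic form; see Béziau 2001 for the exact formulation): given a set $L$ of propositions and a set $V$ of bivaluations $v : L \to \{0, 1\}$, define

    $\Gamma \vDash \varphi$ iff every $v \in V$ with $v[\Gamma] \subseteq \{1\}$ satisfies $v(\varphi) = 1$,

and completeness states that this relation coincides with derivability in the sequent calculus.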

References

  • J.-Y. Béziau, “Sequents and bivaluations”, Logique et Analyse, 44 (2001), pp. 373–394.
  • K. Gödel, “Eine Eigenschaft der Realisierungen des Aussagenkalküls”, Ergebnisse eines mathematischen Kolloquiums, 2 (1929/30), pp. 20–21.
  • B. Russell, “Logical Atomism”, in J. H. Muirhead (ed.), Contemporary British Philosophers, London: Allen and Unwin, 1924, pp. 356–383.
  • R. Suszko (with S. L. Bloom and D. J. Brown), “A note on abstract logics”, Bulletin de l’Académie Polonaise des Sciences, 18 (1970), pp. 109–110.
  • L. Wittgenstein, “Logisch-Philosophische Abhandlung”, Annalen der Naturphilosophie, 14 (1921).


Room P5.18, Mathematics Building

M. Hamed Mohammady, Université Libre de Bruxelles

Quantum measurements constrained by the third law of thermodynamics

In the quantum regime, the third law of thermodynamics implies the unattainability of pure states. In this talk, I will first introduce an operational formulation of the third law as a constraint on the class of realisable channels, such that the availability of a channel not so constrained is both necessary and sufficient for the attainability of pure states. Subsequently, I will show how the third law, thus construed, limits measurements of general observables, or POVMs. In particular, while the third law categorically precludes ideal or repeatable measurements, non-disturbing measurements can still be achieved if the measured observable is sufficiently "unsharp".
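
As a concrete instance of the kind of observable in play (a standard textbook example, not taken from the talk): the smeared version of a sharp qubit observable is a POVM whose effects are positive and sum to the identity but are not projections unless the sharpness parameter equals 1.

    import numpy as np

    # Minimal sketch: the canonical unsharp smearing of the qubit
    # observable sigma_z, with effects E(+/-) = (I +/- lam * sigma_z) / 2
    # for a sharpness parameter 0 < lam <= 1.

    I2 = np.eye(2)
    sigma_z = np.diag([1.0, -1.0])

    def smeared_povm(lam):
        return [(I2 + lam * sigma_z) / 2, (I2 - lam * sigma_z) / 2]

    for lam in (0.6, 1.0):
        effects = smeared_povm(lam)
        assert np.allclose(sum(effects), I2)  # effects sum to the identity
        assert all(np.linalg.eigvalsh(E).min() >= -1e-12 for E in effects)
        sharp = all(np.allclose(E @ E, E) for E in effects)  # projections?
        print(f"lam={lam}: sharp={sharp}")  # sharp only in the lam = 1 limit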


Room P5.18, Mathematics Building

Chrysoula Vlachou, Instituto de Telecomunicações

Quantum weak coin flipping with arbitrary levels of security: A geometric and an algebraic analytical solution

Weak coin flipping (WCF) is a fundamental cryptographic primitive for two-party secure computation, where two distrustful parties need to remotely establish a shared random bit whilst having opposite preferred outcomes. It is the strongest known primitive with arbitrarily close to perfect security in the quantum case, while classically its security is completely compromised (unless one makes further assumptions, such as computational hardness).
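
For concreteness (standard terminology rather than anything specific to this talk), the security of a WCF protocol is quantified by its bias $\varepsilon$: writing $P^*_A$ and $P^*_B$ for the probabilities with which a cheating Alice or Bob can force their respective preferred outcomes, the protocol has bias $\varepsilon$ when

    $P^*_A \le \tfrac{1}{2} + \varepsilon$ and $P^*_B \le \tfrac{1}{2} + \varepsilon$,

so "arbitrarily close to perfect security" means protocols with $\varepsilon$ arbitrarily small.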

The existence of almost perfectly secure quantum WCF protocols was established in the seminal work of C. Mochon in 2007; however, his proof was partially non-constructive, which set back the proposal of explicit protocols. More recently, Arora, Roland and Weis designed an algorithm that numerically constructs unitary matrices corresponding to WCF protocols with arbitrarily small bias [STOC’19, pp. 205–216].

In this talk, we present new techniques which yield a fully analytical construction of WCF protocols with different levels of security including nearly perfect security, thus achieving a solution that has been missing for more than a decade. In particular, we study the quantum WCF problem from both an algebraic and a geometric perspective and propose two different solutions, respectively. Furthermore, our new techniques lead to a simplified proof of existence of WCF protocols by circumventing the non-constructive part of Mochon’s proof.