A common characteristic of ad-hoc networks is the presence of emergent properties. Intuitively, emergent properties are features that cannot be provided by individual network nodes themselves but instead result from interaction and collaboration among network nodes. In this talk, we present the salient characteristics of these properties and discuss their security implications. Several examples of emergent properties in sensor and ad-hoc networks are discussed, including key connectivity, trust establishment, and node replica detection. We conclude with a common theme of current research on the security of emergent properties, namely a new threat model whereby the adversary may adaptively compromise nodes of a network. We contrast this theme with past research that limits the adversary to "man-in-the-middle" attacks and relies exclusively on end-to-end security solutions.
When designing and verifying security protocols, a certain level of idealization is needed to permit a manageable mathematical treatment. Accordingly, two views of cryptography have been developed over the years, by two mostly separate communities. In the first view, cryptographic protocols are expressed algebraically, within simple languages. This view is suitable for formal reasoning and automated tools, but it is arguably too abstract. In the second view, cryptographic primitives are concrete algorithms that operate on bitstrings. This view involves probabilities and bounds on computing power; it is harder to handle formally, especially when dealing with large protocols. Getting the best of both views is appealing, and is the subject of active research. In this work, we develop a first sound and complete implementation of a distributed process calculus. Our calculus provides secure primitives for messaging and authentication, but no explicit cryptography. It supports simple reasoning, based on labelled transitions and observational equivalence. We precisely describe its implementation in a computational setting, using abstract machines and standard cryptographic assumptions. We show that high-level reasoning accounts for all low-level adversaries. We also discuss the resulting constraints on high-level language design. Joint work with Cédric Fournet.
$(r,m)$-colorings of knots are the solutions, over $\mathbb{Z}_r$, of the system of linear equations obtained by associating to each crossing of the knot diagram under study the equation \[m \times \text{under-arc} + (1-m) \times \text{over-arc} = \text{emerging under-arc}. \] The number of such colorings is an invariant of the knot. We elaborate on this and work out calculations for the so-called Pretzel knots. To Bernardo, on his 4th birthday.
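As a worked instance of this scheme (a standard illustration, not specific to the talk), consider the usual three-crossing diagram of the trefoil with arcs $x_1, x_2, x_3$, each passing over exactly one crossing. The crossing equations read
\[ m x_1 + (1-m) x_3 \equiv x_2, \qquad m x_2 + (1-m) x_1 \equiv x_3, \qquad m x_3 + (1-m) x_2 \equiv x_1 \pmod{r}. \]
For $r=3$ and $m=-1$ (the classical Fox $3$-coloring), $(x_1,x_2,x_3)=(0,1,2)$ is a non-constant solution; the trefoil admits $9$ solutions while the unknot admits only the $3$ constant ones, so the count already separates the two knots.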
Following the approach of cryptofibring to combining semantic presentations of logical systems, we propose herein a new logic that features both classical and intuitionistic implications. We explore some of the properties of this logic and prove its completeness with respect to a Kripke-style semantics, where classical implication has a curious interpretation. Joint work with J. Ramos.
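For background (the standard intuitionistic clause, not the talk's new semantics): in a Kripke model with accessibility $\leq$, intuitionistic implication is evaluated by quantifying over future worlds,
\[ w \Vdash \varphi \Rightarrow \psi \quad\text{iff}\quad \text{for all } w' \geq w,\ w' \Vdash \varphi \text{ implies } w' \Vdash \psi, \]
whereas a classical implication cannot simply be read off truth-functionally at $w$ without endangering persistence of truth along $\leq$; the "curious interpretation" mentioned above is how the proposed semantics deals with this tension.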
This talk discusses a variation of D. Gabbay's fibred semantics called plain fibring, introduced with the aim of combining logics given by matrix semantics. It is proved that any logic obtained by plain fibring is a conservative extension of the original logics. A simpler version of plain fibring of matrix logics, called direct union, is also proposed. This technique is applied to the study of the class of fuzzy logics defined by $t$-norms.
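For orientation (standard background, not specific to the talk): a $t$-norm is a commutative, associative, monotone operation on $[0,1]$ with unit $1$, used to interpret fuzzy conjunction. The three fundamental continuous examples are
\[ \max(0,\, x+y-1), \qquad \min(x,y), \qquad x \cdot y, \]
the Łukasiewicz, Gödel and product $t$-norms, each determining a well-known fuzzy logic of the class studied here.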
I start by presenting the problem we dealt with, namely named entity recognition (NER) and why it is relevant; then I describe what we did to evaluate the systems that do NER: the HAREM contest. In this connection, I will stress what was new compared to other evaluation contests and put the emphasis on methodological questions related to natural language processing (NLP). Several of the evaluation choices will be discussed and motivated. In the talk, I also intend to give a brief overview of Linguateca and of the evaluation contest paradigm in NLP, and end with a discussion of remaining challenges (in NER and in evaluation contests in general). HAREM stands for "HAREM - Avaliação de Reconhecimento de Entidades Mencionadas" (evaluation of named entity recognition).
The real Turing machine is a computational model proposed by Blum, Shub and Smale to model computations on real numbers. It makes it possible to understand the complexity of problems over the real numbers in terms of the number of algebraic operations required to solve them, independently of machine representations. Complexity classes such as P and NP, with complete problems, can be defined in this model. We will present several results showing that complexity classes can be characterized algebraically in this model. These results, which hold over any logical structure, extend known characterizations in classical complexity, in the spirit of the first such characterization of polynomial time obtained in 1992 by Bellantoni and Cook. Joint work with F. Cucker, P. Jacobé de Naurois, J. Y. Marion (PhD thesis of P. Jacobé de Naurois).
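To fix ideas, the 1992 Bellantoni-Cook characterization referred to above splits each function's arguments into normal and safe positions, written $f(\bar x;\bar a)$, and characterizes polynomial time as the least class containing basic functions and closed under safe composition and predicative recursion on notation:
\[ f(0,\bar x;\bar a) = g(\bar x;\bar a), \qquad f(2y+i,\bar x;\bar a) = h_i\bigl(y,\bar x;\bar a,\, f(y,\bar x;\bar a)\bigr), \quad i \in \{0,1\}, \]
where the recursive value may only occur in a safe position, so it can never drive another recursion. The results presented lift this style of machine-independent characterization to the BSS setting.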
Web services rest on an important series of industry standards for adding semantics to web-based and XML-based communication, in particular among enterprises. Like the entire series, the security standards and proposals are highly modular. Combinations of several standards are put together for testing as interoperability scenarios, and these scenarios are likely to evolve into industry best practices. In the terminology of security research, the interoperability scenarios correspond to security protocols. Hence, it is desirable to analyze them for security. In this work, we analyze the security of the new Secure WS-ReliableMessaging Scenario, the first scenario to combine security elements with elements of another quality-of-service standard. We do this both symbolically and cryptographically. The results of both analyses are positive. Joint work with Michael Backes, Sebastian Moedersheim, Birgit Pfitzmann.
The exogenous semantic approach for enriching a given base logic was introduced by P. Mateus and A. Sernadas in order to obtain an exogenous quantum logic that extends classical propositional logic. Within this enrichment, two intermediate steps can be distinguished: globalization and probabilization. In this work we focus on presenting globalization as a mechanism for enriching a given base logic, using the key idea of taking, as global models, sets of models of the base logic. We then study preservation results, such as completeness and algebraizability. Joint work with Carlos Caleiro.
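Schematically (notation indicative only): if $\mathsf{Mod}$ is the class of models of the base logic, globalization takes as models nonempty sets $U \subseteq \mathsf{Mod}$ and lifts satisfaction of a base formula $\varphi$ by
\[ U \Vdash \varphi \quad\text{iff}\quad m \Vdash \varphi \ \text{for every } m \in U, \]
with the new global connectives then interpreted classically over these sets; probabilization further equips such sets of models with a probability measure.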
Large vocabulary continuous speech recognition is a very hard task, which is why state-of-the-art systems use multiple sources of linguistic knowledge to better solve the problem. Those knowledge sources refer to various modelling levels, such as the acoustic, phonetic, lexical and syntactic levels. Due to the diversity of knowledge sources that must be integrated, algorithms for recognizing large vocabulary continuous speech are traditionally very complex, and integrating a new knowledge source can be problematic, since each one adds to the complexity of the recognition algorithm. The INESC-ID speech recognition system addresses this complexity through the use of Weighted Finite-State Transducers (WFSTs). WFSTs are finite-state machines whose transitions carry weights; they represent weighted relations between symbol sequences and thus provide a unifying formalism for speech recognition. In this talk, the INESC-ID WFST approach to automatic speech recognition will be presented. A state-of-the-art system for transcribing broadcast news in European Portuguese will be demonstrated. It will also be shown how WFST techniques are currently being applied to speech-to-speech machine translation.
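For context, the standard WFST recipe (due to Mohri, Pereira and Riley; whether the INESC-ID pipeline follows it exactly in every detail is not claimed here) builds a single decoding transducer by composing the knowledge sources and then optimizing the result, schematically
\[ N = \min\bigl(\det(H \circ C \circ L \circ G)\bigr), \]
where $H$ models HMM topology, $C$ context dependency, $L$ the pronunciation lexicon and $G$ the language model; the decoder then searches this one machine instead of coordinating four separate knowledge sources.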
A number of problems associated with spatial structure in the design of networks of protected areas are non-trivial, and their solutions are not readily approximated intuitively. One such issue is that of identifying networks capturing the regional species assemblage while achieving some level of adjacency or connectivity between areas. These spatial issues fit within the framework of graph theory. I will address three different problems concerning connectivity in the design of protected areas for conservation. This work has been carried out with Diogo Alagador, Kevin Gaston and Leonor Pinto.
The Church-Fitch paradox shows that "If there is some true proposition which nobody knows (or has known or will know) to be true, then there is a true proposition which nobody can know to be true." (Fitch, F. A Logical Analysis of Some Value Concepts. Journal of Symbolic Logic, 1963.) Indeed, the paradox shows that the verification principle "All true propositions can be known" implies the collapse principle "All true propositions are known." In my talk, I argue that the correct setting to state the paradox is composed syntactically of a fusion of modal languages and logics and semantically of a product of Kripke models. I also argue that if we replace "possibility" and "knowledge" by the hybrid modality of "knowability", obtained by introducing into the language of the paradox a new operator $\boxminus$, then we can state the verification principle without it leading to the collapse principle.
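The derivation behind the collapse is short enough to sketch. Writing the verification principle as $\varphi \to \Diamond K\varphi$ and instantiating $\varphi$ with $p \wedge \neg Kp$: the assumption $K(p \wedge \neg Kp)$ yields $Kp \wedge K\neg Kp$ by distribution of $K$ over conjunction, hence $Kp \wedge \neg Kp$ by factivity, a contradiction; so $\neg K(p \wedge \neg Kp)$ is a theorem, and by necessitation $\neg\Diamond K(p \wedge \neg Kp)$. The verification principle then forces $\neg(p \wedge \neg Kp)$, i.e. $p \to Kp$: the collapse principle.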
Under a standard hardness assumption we exactly characterize the worst-case running time of languages that are in average polynomial-time over all polynomial-time samplable distributions. More precisely, we show that if exponential time is not in subexponential space, then the following are equivalent for any algorithm $A$: (i) for all $p$-samplable distributions $\mu$, $A$ runs in time polynomial on $\mu$-average; (ii) for all polynomials $p$, the running time of $A$ is bounded by $2^{O(K^p(x)-K(x)+\log|x|)}$ for "all" inputs $x$. To prove this result we explore the time-bounded Kolmogorov distribution, $m^t(x)=2^{-K^t(x)}$, where $K^t(x)$ is the Kolmogorov complexity (smallest program size to generate $x$) with programs limited to run in time $t(|x|)$, and show that, under the hardness assumption, the time-bounded Kolmogorov distribution is a universal samplable distribution.
Campagnolo, Costa, and Moore found connections between discrete complexity classes and analog classes defined on the reals. Building on these ideas, Bournez and Hainry found function-algebra descriptions of classes of functions defined via real analysis. We develop a general collection of tools which allow us to compare different classes of functions via a notion of "approximation". We apply these general tools to recover some previous results and to prove new ones. Joint work with M. Campagnolo.
Emil Post has shown that classical propositional logic (CPL) is functionally complete, i.e., every connective definable by a two-valued truth-function is definable in CPL. Moreover, he has shown that CPL is maximal in the sense that it cannot be non-trivially strengthened. Some philosophers, like Quine, have used such results to support the claim that CPL is the best logic in the world. However, many connectives, truth-functional or not, cannot be defined in CPL, and CPL can be extended in many different ways. For example, the modal logic S5 is a strict conservative extension of CPL. One can wonder whether there are logics which are absolutely maximal, in the sense that they cannot be extended in any non-trivial way. In this talk I will define the concept of absolute maximality using concepts from the theory of combination of logics and universal logic, and show, using a technique inspired by the Lindenbaum lemma, that any logic can be extended into an absolutely maximal one.
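Post's expressiveness result is easy to illustrate: the Sheffer stroke $p \mid q := \neg(p \wedge q)$ alone generates all classical connectives, e.g.
\[ \neg p \equiv p \mid p, \qquad p \wedge q \equiv (p \mid q) \mid (p \mid q), \qquad p \vee q \equiv (p \mid p) \mid (q \mid q), \]
and conversely every two-valued truth-function is expressible from the standard connectives; what CPL cannot define are non-truth-functional operators such as the $\Box$ of S5, which is where the extensions discussed in the talk come in.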
We address the issue of confidentiality in a language-based security approach. We study the design of flexible information flow policies that generalize non-interference: the *non-disclosure* policies. Such policies can be statically enforced on concurrent languages by means of refined type and effect systems. Non-disclosure controls information flow in programs that include declassification, which we express by means of a *flow declaration* construct that implements a local information flow policy, as sketched below. We generalize our approach to tackle the largely unexplored topic of controlling information flow in a global computing setting. New forms of security leaks introduced by code mobility are revealed.
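Schematically (the concrete syntax here is indicative, not necessarily that of the talk), a flow declaration locally extends the permitted flows while a piece of code runs:
\[ \mathsf{flow}\ A \prec B\ \mathsf{in}\ M, \]
meaning that inside $M$, and only there, information at confidentiality level $A$ may flow to level $B$; the type and effect system checks $M$ against this extended policy, which is how controlled declassification escapes strict non-interference.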
We consider the problem of inferring the structure of a network from co-occurrence data: observations that indicate which nodes occur in a signaling pathway but do not directly reveal node order within the pathway. This problem is motivated by network inference problems arising in computational biology and communication systems, in which it is difficult or impossible to obtain precise time ordering information. Without order information, every permutation of the activated nodes leads to a different feasible solution, resulting in combinatorial explosion of the feasible set. However, physical principles underlying most networked systems suggest that not all feasible solutions are equally likely. Intuitively, nodes which co-occur more frequently are probably more closely connected. Building on this intuition, we model path co-activations as randomly shuffled samples of a random walk on the network. We derive a computationally efficient network inference algorithm and, via novel concentration inequalities for importance sampling estimators, prove that a polynomial complexity Monte Carlo version of the algorithm converges with high probability. This is joint work with Michael Rabbat and Robert Nowak, from the University of Wisconsin, Madison, WI, USA.
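To make the intuition concrete, here is a toy sketch in Python (my illustration of the shuffled-random-walk idea, not the authors' algorithm): each unordered co-occurrence set is treated as a shuffled path of a first-order Markov chain, candidate orderings are importance-sampled under the current transition estimate, and expected transition counts are re-normalized.

    import random
    import numpy as np

    def order_weight(order, P):
        # Likelihood of one candidate ordering under the current chain estimate.
        return np.prod([P[a, b] for a, b in zip(order, order[1:])])

    def infer_transitions(cooccurrences, n, iters=50, samples=200, seed=0):
        # cooccurrences: list of sets of node indices (0..n-1) seen on one path.
        rng = random.Random(seed)
        P = np.full((n, n), 1.0 / n)          # uniform initial transition guess
        for _ in range(iters):
            counts = np.full((n, n), 1e-6)    # smoothed expected counts
            for path in cooccurrences:
                nodes = list(path)
                # Proposal: uniform random orderings; the importance weight of
                # each sampled ordering is its likelihood under the model.
                perms = [rng.sample(nodes, len(nodes)) for _ in range(samples)]
                w = np.array([order_weight(p, P) for p in perms])
                if w.sum() == 0.0:
                    continue
                w /= w.sum()
                for weight, p in zip(w, perms):
                    for a, b in zip(p, p[1:]):
                        counts[a, b] += weight
            P = counts / counts.sum(axis=1, keepdims=True)
        return P

Nodes that co-occur often end up with large estimated transition probabilities, matching the intuition above; the contribution described in the abstract includes making such Monte Carlo estimates provably accurate with polynomial effort.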
We discuss the formulation of convex hulls of incidence vectors of strings in a language as systems of linear inequalities over the rationals. In particular, for a graph $(V,E)$, existing linear formulations for the minimum cut problem require $\Theta(|V||E|)$ variables and constraints and can be interpreted as a composition of $|V|-1$ polyhedra for minimum $s$-$t$ cuts in much the same way as early algorithmic approaches to finding globally minimum cuts relied on $|V|-1$ calls to a minimum $s$-$t$ cut algorithm. We present the first formulation to beat this bound, one that uses $O(|V|^2)$ variables and $O(|V|^3)$ constraints. Applications of our result include strong relaxations for the traveling salesman problem with fewer variables than previously known and a polynomial time verifiable certificate of size $n$ for the NP-complete problem of $l_1$-embeddability of a rational metric on an $n$-set (as opposed to one of size $n^2$ known previously).
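For orientation, the $s$-$t$ building block being composed is the classical cut LP (standard background, not the new formulation): with a potential $p_v$ per vertex and a variable $d_{uv}$ per edge,
\[ \min \sum_{(u,v)\in E} c_{uv}\, d_{uv} \quad\text{s.t.}\quad d_{uv} \ge p_u - p_v \ \ \forall (u,v)\in E, \qquad p_s - p_t \ge 1, \qquad d \ge 0, \]
whose optimum equals the minimum $s$-$t$ cut capacity by LP duality with maximum flow; the formulation described above avoids stitching together $|V|-1$ such blocks.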
In game-based cryptographic proofs, the security of cryptographic algorithms is reduced to problems that are assumed to be hard (e.g., the decisional Diffie-Hellman problem, DDH). This requires reasoning about and transforming probabilistic programs. In this presentation a probabilistic Hoare-style logic is given, and it is shown how this logic can be used to formalize cryptographic proofs. The technique is illustrated by proving the semantic security of ElGamal encryption. Joint work with Ricardo Corin.
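To recall the running example (standard textbook material): over a cyclic group $G = \langle g \rangle$ of prime order $q$, ElGamal keeps a secret key $x$ with public key $h = g^x$ and encrypts $m \in G$ as
\[ E(m) = (g^y,\ m \cdot h^y), \qquad y \text{ chosen uniformly in } \mathbb{Z}_q, \]
and its semantic security is equivalent to the hardness of DDH in $G$; the Hoare-style logic is used to carry out the game transformations of this reduction formally.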
Algebraic fibring, that is, combining logic systems by means of colimits in appropriate categories of logics, was introduced by A. Sernadas and his collaborators in 1999. The preservation by fibring of metaproperties of logics, such as soundness, completeness and interpolation, is one of the basic concerns in the subject of combining logics. In this talk we address the preservation by fibring of the different degrees of algebraization of logic systems stated in the so-called Leibniz hierarchy (cf. Font, Jansana and Pigozzi, 2003). Thus, we prove that several categories of logic systems in the Leibniz hierarchy have constrained and unconstrained fibring: protoalgebraic logics, equivalential logics and (Blok & Pigozzi) algebraizable logics. Finally, the question of combining algebraizable logics from the semantical point of view is also analyzed. We prove that the category of matrix semantics and the category of equivalent algebraic semantics have constrained and unconstrained fibring. This is joint work with Victor Fernández (National University of San Juan, Argentina).
A new notion of deductive system is proposed in order to allow the development of a truly universal theory of fibring. After a brief review of the notions of 2-category, multicategory and polycategory, a deductive system is defined as a 3-multipolymulticategory. Objects are sorts, multimorphisms are language constructors, 2-polycells are judgments, and 3-multicells are derivations. The universality of the proposed concept is discussed. Fibring is introduced as a cocartesian lifting. Along the way, some examples are provided. Finally, open problems are discussed. Joint ongoing work with Marcelo Coniglio, Cristina Sernadas and Luca Viganò.
Based on the exogenous semantics approach to enriching logics, previously used to develop the exogenous propositional quantum logic, a novel exogenous quantum first-order logic (EQFOL) is presented. Predicates are used to represent families of quantum bits. A quantum interpretation structure includes a superposition of classical valuations according to the principles of the exogenous semantics approach. But, a pointed variation is needed to cope with a modality introduced for stating properties of omitting symbols. An axiomatization is proposed. Some examples are provided. Finally, the envisaged completeness result is discussed. Joint work with Jaime Ramos, Paulo Mateus and Mingsheng Ying.
We introduce a simple programming language for iteration-free probabilistic sequential programs and endow it with a complete and decidable Hoare calculus. In contrast to the usual approach, our state logic has truth-functional rather than arithmetical propositional connectives. Joint work with R. Chadha, P. Mateus and A. Sernadas.
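A purely schematic example of the kind of judgement at stake (notation indicative only, not the calculus of the talk): for a fair probabilistic choice one expects a derivable triple such as
\[ \{\top\}\ \ (x := 0)\ \oplus_{1/2}\ (x := 1)\ \ \{\mathrm{Pr}(x = 1) = \tfrac{1}{2}\}, \]
where the postcondition speaks about the probability assigned by the output distribution; the point of the truth-functional design is that such state assertions are combined by ordinary connectives rather than by arithmetic over them.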
We will report on work-in-progress on developing a model-checking tool for quantum communication protocols and quantum cryptographic protocols. Our early experiments with probabilistic model checking will be reviewed; then the general model-checking problem for quantum protocols will be formalised and its complexity discussed. This will naturally lead to the question of which protocols can be efficiently simulated on a classical computer. An overview of the stabilizer formalism will be given, followed by a summary of the Aaronson-Gottesman algorithm for simulation of stabilizer circuits. We will present QMC, a tool for automatically verifying properties of such circuits; properties of protocols are expressed using a subset of EQPL, a logic developed by Mateus and Sernadas. This is joint work with Rajagopal Nagarajan (University of Warwick) and Simon Gay (University of Glasgow).
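Brief background on the formalism (standard material, not specific to QMC): a stabilizer circuit is built from controlled-NOT, Hadamard and phase gates plus Pauli measurements, and by the Gottesman-Knill theorem it can be simulated classically in polynomial time by tracking a list of Pauli generators instead of amplitudes. For instance, applying a Hadamard and then a CNOT to $\lvert 00 \rangle$ produces the Bell state, which is completely described by the two generators
\[ X \otimes X, \qquad Z \otimes Z, \]
two Pauli words in place of four complex amplitudes; the Aaronson-Gottesman tableau algorithm mentioned above makes each gate update correspondingly cheap.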
Following the failure of the previous attempt at axiomatizing the envisaged extension of FOL with almost-everywhere quantification endowed with measure-theoretic semantics, a two-sorted solution is presented. The additional sort is used to deal with measurable sets of individuals. The new axioms for dealing with the almost-everywhere quantification are discussed. The completeness of the new axiomatization is carried out using the Lindenbaum-Henkin technique. Joint work with Luís Cruz-Filipe, Cristina Sernadas and Amílcar Sernadas.
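Schematically, the intended measure-theoretic reading of the new quantifier is
\[ M \Vdash (\forall_{\mathrm{ae}}\, x)\,\varphi \quad\text{iff}\quad \mu\bigl(\{\, d : M[x := d] \nVdash \varphi \,\}\bigr) = 0, \]
i.e. $\varphi$ may fail only on a null set of individuals (notation indicative); the second sort supplies the measurable sets of individuals needed to make such conditions expressible in the axiomatization.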