2012 seminars


Room P3.10, Mathematics Building

Pedro Baltazar, SQIG - Instituto de Telecomunicações

Linearly Refined Session Types

Session types capture precise protocol structure in concurrent programming, but the types remain coarse-grained in the sense that we cannot specify properties of the exchanged values beyond their basic type. Refinement types address exactly this shortcoming, combining types with logical formulae that constrain the set of values of a given type, offering a more fine-grained yet logically clear specification. We present a session discipline that utilises refinement formulae written in a fragment of Multiplicative Linear Logic, expressing not only the intended specification of message types but also finer properties arising from the treatment of refinements as logical resources. [Joint work with Vasco T. Vasconcelos and Dimitris Mostrous, to be presented at LINEARITY 2012.]
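As a purely hypothetical illustration (our notation, not necessarily that of the paper): a refined session type such as $?\{x{:}\mathsf{int} \mid \mathit{paid}(x)\}.\,!\mathsf{receipt}.\,\mathsf{end}$ describes a channel that receives an integer $x$ together with the linear assertion $\mathit{paid}(x)$, then sends a receipt and closes; because $\mathit{paid}(x)$ is a resource in Multiplicative Linear Logic, it can be relied upon exactly once, neither duplicated nor discarded.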


Room P3.10, Mathematics Building

Specifying and reasoning about normative systems in deontic logic programming

Normative systems have been advocated as an effective tool to regulate interaction in multi-agent systems. The use of deontic operators and the ability to represent defeasible information are known to be two fundamental ingredients for representing and reasoning about normative systems. In this talk I will introduce a framework that combines standard deontic logic (SDL) and non-monotonic logic programming, deontic logic programs (DLPs), to represent and reason about normative systems. Besides having a rich language, DLPs have a simple and fully declarative semantics. In fact, a stable-model-like semantics can be defined for these programs, and abduction can be used to allow agents to plan their interaction with the normative system. The fundamental problem of equivalence between normative systems is studied using a deontic extension of the so-called equilibrium logic. Furthermore, I will present a novel strong connection with the so-called Input-Output logic. Joint work with José Alferes.
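As a purely illustrative sketch (our syntax, not necessarily the framework's): a deontic logic program can combine deontic operators with default negation, as in the rules $\mathbf{O}\,\mathit{return}(X) \leftarrow \mathit{borrowed}(X)$ and $\mathbf{F}\,\mathit{lend}(X) \leftarrow \mathit{damaged}(X),\ \mathit{not}\ \mathbf{P}\,\mathit{lend}(X)$, which state that borrowed items ought to be returned, and that lending a damaged item is forbidden unless it is explicitly permitted.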


Room P3.10, Mathematics Building

H. P. Sankappanavar, State University of New York, USA

Boole's Algebra of Logic: A Modern Version and a Modern Adaptation of the Original Version

George Boole (1815-1864), an English mathematician, revolutionized logic in the 19th century by applying methods from the then emerging field of symbolical algebra to logic. The sophistication and mathematical depth of Boole's approach to the logic of classes are not commonly known. Boole's algebra of logic, however, was not perfect, and it has received much criticism. Yet the system seemed, by and large, to work just as Boole claimed it would. In this lecture, I will present

  1. the criticisms pointing to the weaknesses of Boole's Logic,
  2. a rigorous modern version of Boole's theorems, based in good part on Vols. I and II of Schröder's Algebra der Logik, published in the 1890s, and
  3. a rigorous modern adaptation of Boole's original algebra of logic, based partly on the work of Hailperin (1976/1986) (time permitting).

The lecture will be mostly based on the recent (unpublished) joint work with Professor Stanley Burris, as well as on his 2010 article, George Boole, in Stanford Encyclopedia of Philosophy.
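For orientation, a standard fact about Boole's system (not specific to the lecture): its hallmark is the index law $x^2 = x$ for classes, from which $x(1 - x) = 0$ follows by rearrangement; reading $1$ as the universe, $0$ as the empty class, and multiplication as intersection, this is exactly the law that no class overlaps its own complement.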


Room P3.10, Mathematics Building

André Souto, SQIG - Instituto de Telecomunicações

On the robustness of "useful" information measures

Given an individual object, how much information does that object contain? Is all that information useful? Kolmogorov complexity rigorously expresses the amount of information in a binary string $x$ as the length of the shortest program that, when given to a universal Turing machine, produces the string $x$. An incompressible string has high Kolmogorov complexity and thus carries much information, but from the computational complexity point of view it is not very useful, in the sense that, with high probability, one can produce another string as useful as the first by flipping fair coins. So how can one quantify the subjective notion of useful information? There are two known approaches to achieve this goal based on Kolmogorov complexity: one measuring the amount of planning necessary to construct the object (static resources), and a second measuring the computational effort (dynamic resources) needed to produce the object. For the former, one divides the shortest program producing the object into two parts: the part accounting for the useful regularities (useful information) and the part accounting for the remaining information present in the object, so that the two-part description is as short as the shortest one-part description. The resulting measure is known as sophistication. The latter approach is based on the time required to generate the object from any short description, and the resulting measure is called logical depth. These two measures are formally defined with significance levels, i.e., with parameters that describe how far we are from the optimal solution, namely the Kolmogorov complexity of the object. In this talk I will explain how we can relate these two measures using the busy beaver function, and prove that neither is robust, in the sense that small changes in the significance levels can significantly change the values of the measures. This work was developed jointly with L. Antunes, B. Bauwens and A. Teixeira.
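For reference, one standard formulation of these notions (several variants exist in the literature): the Kolmogorov complexity of $x$ is $K(x) = \min\{|p| : U(p) = x\}$ for a fixed universal machine $U$; the logical depth of $x$ at significance level $c$ is $\mathrm{depth}_c(x) = \min\{\mathrm{time}_U(p) : U(p) = x \text{ and } |p| \le K(x) + c\}$, the running time of the fastest program within $c$ bits of the shortest; and the sophistication of $x$ at level $c$ can be cast as $\mathrm{soph}_c(x) = \min\{K(S) : x \in S \text{ and } K(S) + \log_2 |S| \le K(x) + c\}$, the complexity of the structural part of a near-optimal two-part description by a finite set $S$.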


Room P3.10, Mathematics Building

Jaime Gaspar, Instituto de Telecomunicações

Gödel functional interpretation

The Gödel functional interpretation is a celebrated tool with a wide range of applications: consistency results, independence results, and extraction of computational content from proofs, just to name a few.
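For orientation, the general shape of the interpretation (standard material, independent of the talk): each formula $A$ is assigned a formula $A^D = \exists x\, \forall y\, A_D(x, y)$ with $A_D$ quantifier-free and $x$, $y$ ranging over functionals of finite type; the characteristic clause is the one for implication, $(A \to B)^D = \exists U, Y\, \forall x, v\, \big(A_D(x, Y x v) \to B_D(U x, v)\big)$, which demands explicit witness and counterexample functionals.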

Financially supported by FCT, I.P., under grant SFRH/BD/36358/2007 co-funded by POPH / QREN / FSE.


Room P3.10, Mathematics Building

Maria João Gouveia, DM-UL / CAUL

Natural dualities in 50 minutes

Pontryagin duality, established in 1934 for the variety of abelian groups, and Stone duality, established in 1936 for the variety of Boolean algebras, are obligatory references when tracing the origins of the theory of natural dualities. It was only in the 1970s, however, that the theory gained real momentum, with the appearance of Priestley duality for distributive lattices and of the Hofmann-Mislove-Stralka duality for semilattices. The work of Davey and Werner, as well as that of Clark and Krauss, in the 1980s, opened the way to the development of an entire theory that has been earning its own place within Algebra. In this seminar we will make a brief incursion into the theory of natural dualities, presenting some of the most relevant results and mentioning some of the areas of research, such as that of canonical extensions of algebras with a lattice reduct, in which dualities and their techniques are used.


Room P3.10, Mathematics Building

Manuel Campagnolo, ISA - TU Lisbon / SQIG - IT

A characterization of computable analysis on unbounded domains using differential equations

The functions of computable analysis are defined by enhancing normal Turing machines to deal with real number inputs. One can consider characterizations of these functions using function algebras, known as real recursive functions. Since there are numerous incompatible models of computation over the reals, it is interesting to find that the two very different models we consider can be set up to yield exactly the same functions. Bournez and Hainry used a function algebra to characterize computable analysis, restricted to the twice continuously differentiable functions with compact domains. Campagnolo and Ojakian found a different (and apparently more natural) function algebra that also yields computable analysis, with the same restriction. In this talk I will describe an improvement on this earlier work: three function algebras that characterize computable analysis, removing the restriction to twice continuously differentiable functions and allowing unbounded domains. One of these function algebras is built upon the widely studied real primitive recursive functions. Furthermore, the proof uses the previously developed method of approximation, whose applicability is further evidenced by these results. This talk reports on joint work with Kerry Ojakian.


Room P3.10, Mathematics Building

David Henriques, CMU and SQIG-Instituto de Telecomunicações

Logical Analysis of Hybrid Systems

A cyber-physical system is a system that features a tight coordination between its computational and physical components. Such systems arise frequently in many application domains, including aviation, automotive, railway, and robotics. Because these systems operate in the physical world, stringent safety requirements are usually imposed on cyber-physical system designs. Hybrid systems model cyber-physical systems as dynamical systems with interacting discrete transitions (typically the computational component) and continuous evolutions (typically the physical component). Historically, automatic verification is well developed for guaranteeing the correct functioning of discrete systems (like computer programs) using logic and formal verification techniques, but what can be done about hybrid systems? How can we ensure that cyber-physical systems are guaranteed to meet their design goals, e.g., that an aircraft following all protocols cannot crash into another one? This talk describes how logic and formal verification can be lifted to hybrid systems. It presents the theoretical and practical foundations of the logical analysis of hybrid systems. The talk describes a logic for hybrid systems, called differential dynamic logic, and gives a compositional proof technique that is complete relative to a first-order theory of differential equations. This approach is implemented in the verification tool KeYmaera and has been used successfully for verifying nontrivial properties of aircraft, railway, and car control applications. The logical approach is also interesting from a theoretical perspective, because it shows how verification techniques for continuous dynamics can be lifted completely to hybrid systems. Work by André Platzer.
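As a minimal illustrative dL formula, in the style of textbook examples rather than necessarily one from the talk: $v \ge 0 \wedge v^2 \le 2b(m - x) \to [\{x' = v,\ v' = -b\}]\ x \le m$ states that a vehicle braking with constant deceleration $b > 0$ never passes position $m$, provided its initial speed allows it to stop within the remaining distance; here the box modality $[\alpha]\varphi$ asserts that $\varphi$ holds after all runs of the hybrid program $\alpha$.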


Room P3.10, Mathematics Building

Daniel Graça, U Algarve / SQIG-Instituto de Telecomunicações

Non-computability and financial markets

In this talk we will explore the valuation problem: given an asset, how much is it worth? Although financial theory has provided solutions to this problem (which will be surveyed in the talk), in practice the “fair” value of an asset is hard to calculate. Some explanations for this hardness include the speculative, non-rational behavior of markets, lack of data, or uncertainty about the future. We will show that, in addition to these factors, inherently non-computable behavior may also contribute to the difficulties found when trying to value an asset. More concretely, we will show, using the models of financial theory, that even in rational markets with complete knowledge (present and future) of all data affecting the price of an asset, the valuation problem is not computable.


Room P3.10, Mathematics Building

Fernando Ribeiro Correia, Escola Naval

Simple Ant Routing Algorithm (SARA): A routing protocol for mobile ad hoc networks

A mobile ad hoc network has limited and scarce resources, and thus routing protocols in such an environment must be kept as simple as possible.

The Simple Ant Routing Algorithm (SARA) offers a low-overhead solution by optimizing the routing process. Three complementary strategies are used in our approach. During route discovery, a new broadcast mechanism is used, called Controlled Neighbor Broadcast (CNB), in which each node broadcasts a control message (Forward ANT, FANT) to its neighbors, but only one of them broadcasts this message again. During the route maintenance phase, we further reduce the overhead by using only data packets to refresh the paths of active sessions. Finally, the route repair phase is also enhanced, by using a deep search procedure called Deep Search Area (DSA) as a way of restricting the number of nodes used to recover a route. Thus, instead of discovering a new path from the source to the destination, the algorithm tries to discover a new path between the two end-nodes of the broken link. A broader search is executed only when the deep one fails. This procedure allows part of the original route to be recovered and creates a network of alternative routes around the area where the broken link was detected.
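A minimal sketch of how CNB-style discovery might look, under a toy graph model; the class Node, the fant_id bookkeeping and the random choice of the re-broadcasting neighbor are our own illustration, not SARA's actual implementation:

    import random

    class Node:
        """Toy wireless node for illustrating CNB-style route discovery."""
        def __init__(self, name):
            self.name = name
            self.neighbors = []     # filled in once the topology is known
            self.seen = set()       # FANT ids already heard
            self.prev_hop = {}      # fant_id -> node the FANT arrived from

        def broadcast_fant(self, fant_id):
            # All neighbors hear the FANT, but (per CNB) only one of them
            # is delegated to broadcast it again.
            fresh = [n for n in self.neighbors if fant_id not in n.seen]
            for n in fresh:
                n.seen.add(fant_id)
                n.prev_hop[fant_id] = self  # reverse path for the Backward ANT
            if fresh:
                random.choice(fresh).broadcast_fant(fant_id)

    # Tiny line topology a - b - c: a FANT originating at `a` reaches `c`.
    a, b, c = Node("a"), Node("b"), Node("c")
    a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
    a.seen.add(1)               # the source has trivially 'heard' its own FANT
    a.broadcast_fant(1)
    print(c.prev_hop[1].name)   # -> "b"

Even in this toy the point of CNB is visible: each FANT triggers at most one re-broadcast per hop, instead of the flood produced by classical broadcasting, while the prev_hop entries retain the reverse path needed by the Backward ANT.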

SARA was simulated and tuned for optimal performance. It was also compared with the classical AODV approach and with other biologically inspired routing approaches. The results achieved show that SARA offers the smallest overhead of all the protocols under evaluation, with an overhead reduction of almost 25% relative to the values achieved by the other proposals. SARA also presents the best goodput, especially for TCP traffic, but it needs more time to discover routes.

Through the study performed, two extensions were also developed, written as generically as possible so that they can be implemented in different routing protocols. The proposed extensions make it possible, in a simple way, to identify traffic-flow variations and to differentiate between temporary and permanent broken links, through the analysis of the link's behavior history.


Room P3.10, Mathematics Building

David Henriques, CMU and SQIG-Instituto de Telecomunicações

Statistical Model Checking for Markov Decision Processes

Statistical Model Checking (SMC) is a computationally very efficient verification technique based on selective system sampling. One well identified shortcoming of SMC is that, unlike probabilistic model checking, it cannot be applied to systems featuring nondeterminism, such as Markov Decision Processes (MDP). We address this limitation by developing an algorithm that uses multiple rounds of sampling and Reinforcement Learning to provably improve resolutions of nondeterminism with respect to satisfying a Bounded Linear Temporal Logic (BLTL) property. Extensive validation with both new and established benchmarks demonstrates that the approach scales very well in scenarios where symbolic algorithms fail to do so.
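A minimal sketch of the sampling-plus-learning idea, under our own simplifying assumptions (a memoryless scheduler, epsilon-greedy exploration, Monte Carlo value updates; the interface names mdp, holds, etc. are hypothetical, not from the paper):

    import random
    from collections import defaultdict

    def improve_scheduler(mdp, holds, rounds=50, episodes=200, eps=0.2):
        """Estimate, per (state, action), the probability that a sampled
        trace satisfies the BLTL property, and greedily improve the
        scheduler over multiple rounds of sampling."""
        q = defaultdict(float)   # (state, action) -> satisfaction estimate
        n = defaultdict(int)     # visit counts for incremental averaging
        for _ in range(rounds):
            for _ in range(episodes):
                trace, chosen = [mdp.init], []
                for _ in range(mdp.horizon):      # BLTL => bounded traces
                    s = trace[-1]
                    acts = mdp.actions(s)
                    if random.random() < eps:     # explore
                        act = random.choice(acts)
                    else:                         # exploit current estimates
                        act = max(acts, key=lambda a: q[(s, a)])
                    chosen.append((s, act))
                    trace.append(mdp.step(s, act))  # probabilistic transition
                reward = 1.0 if holds(trace) else 0.0
                for sa in chosen:                 # Monte Carlo update
                    n[sa] += 1
                    q[sa] += (reward - q[sa]) / n[sa]
        return q   # induces a memoryless scheduler: argmax over actions

The reward signal is simply whether a sampled bounded trace satisfies the property, so repeated rounds bias the resolution of nondeterminism towards maximizing the satisfaction probability, after which ordinary SMC hypothesis testing can be run under the resulting scheduler.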


Room P3.10, Mathematics Building

Jean-Yves Béziau, UFRJ - Brazil

Cut-elimination, truth-functionality, compositionality and analyticity

Classical cut-elimination means that a cut-free proof depends only on subformulas. But we may also have cut-free proofs that depend on more than subformulas. Moreover, cut-elimination does not mean that we have truth-functionality or compositionality. In this talk we will explain these distinctions and give many examples.
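For reference (the standard sequent-calculus rule, not specific to the talk): the cut rule $\dfrac{\Gamma \vdash \Delta, A \qquad A, \Gamma' \vdash \Delta'}{\Gamma, \Gamma' \vdash \Delta, \Delta'}$ introduces a formula $A$ that need not occur in the conclusion; eliminating it is what yields the subformula property in the classical case.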


Room P3.10, Mathematics Building

Luís Antunes, U Porto / SQIG-IT

Compression vs Entropy: an application to biological signal

Researchers and clinicians recognize that many unsolved medical problems are due to the application of conventional mathematical methods to describe complex biological systems. More recently, a different approach based on nonlinear dynamics, chaos and complexity has been considered, one which recognizes irregularity, subjectivity and uncertainty as intrinsic and fundamental. Complexity is a property of every system that quantifies the amount of structured information. We investigate the use of complexity measures in biological signals, and in particular we will emphasise the usefulness of compression as a complexity measure. Entropy has been widely used in the area; however, few studies consider compression as a measure of complexity. In this talk we will give strong evidence that compression is also a useful measure of complexity in biological signals; in particular, we have used it with success on a fetal heart rate data set.
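A minimal sketch of the compression-based idea, assuming a generic off-the-shelf compressor (the use of zlib and this normalization are our illustration, not the authors' exact methodology):

    import math
    import random
    import zlib
    from collections import Counter

    def compression_complexity(signal: bytes) -> float:
        """Normalized compressed size: close to 1 for incompressible
        (irregular) signals, close to 0 for highly regular ones."""
        return len(zlib.compress(signal, 9)) / len(signal)

    def shannon_entropy(signal: bytes) -> float:
        """Empirical entropy in bits per symbol, for comparison."""
        counts = Counter(signal)
        total = len(signal)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    # Toy 'heart rate' sequences: one regular, one irregular.
    regular = bytes([60, 62] * 500)
    irregular = bytes(random.randrange(55, 70) for _ in range(1000))
    for s in (regular, irregular):
        print(compression_complexity(s), shannon_entropy(s))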


Room P3.10, Mathematics Building

Carlos Caleiro, IST-UTL / SQIG-IT

Classic-like cut-based tableau systems for finite-valued logics

A general procedure is presented for producing classic-like cut-based tableau systems for finite-valued logics. In such systems, cut is the only branching rule, and formulas are accompanied by signs acting as syntactic proxies for the two classical truth-values. The systems produced are guaranteed to be sound, complete and analytic, and they are also seen to polynomially simulate the truth-table method, thus extending known results about classical logic. Łukasiewicz's 3-valued logic is used throughout as a simple illustrative example. Joint work with M. Volpe and J. Marcos.
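For orientation (the standard shape of such systems, not the paper's exact notation): in a cut-based signed tableau, the only branching rule is the cut (bivalence) rule, which splits a branch into $\mathsf{T}{:}\varphi$ and $\mathsf{F}{:}\varphi$; all other rules are linear, and the signs $\mathsf{T}$ and $\mathsf{F}$ play the role of the two classical truth-values even when the underlying logic has more.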


Room P3.10, Mathematics Building

On two-dimensional products of modal logics

Products of Kripke frames are natural relational structures for modelling the interaction between different modal operators, representing notions such as time, space, knowledge, and actions. The product construction shows up in various disguises in many logical formalisms, such as algebras of relations in algebraic logic, finite variable fragments of classical, intuitionistic and modal predicate logics, temporal-epistemic logics, dynamic topological logics, and modal and temporal description logics.
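For reference, the standard construction (independent of the talk): the product of frames $\mathfrak{F}_1 = (W_1, R_1)$ and $\mathfrak{F}_2 = (W_2, R_2)$ is $\mathfrak{F}_1 \times \mathfrak{F}_2 = (W_1 \times W_2, R_h, R_v)$, where $(u, v)\, R_h\, (u', v')$ iff $u R_1 u'$ and $v = v'$, and $(u, v)\, R_v\, (u', v')$ iff $v R_2 v'$ and $u = u'$; each modality acts along one coordinate.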

It is known that $n$-fold products of modal logics have, in general, a very complex behaviour for $n>2$. For example, every logic between $K\times K \times K$ and $S5\times S5\times S5$ is not finitely axiomatisable, and both its satisfiability problem and its finite-frame problem are undecidable. However, no such examples were known in the $n=2$ case. On the contrary, a large class of binary products of modal logics is known to be finitely axiomatisable: every product of two Horn axiomatisable logics is in this class.

In this talk I will present some recent contributions to the understanding of this construction. In [1], the first examples were presented of two-dimensional products of finitely axiomatisable modal logics that are not finitely axiomatisable. If a modal logic $L$ is finitely axiomatisable, then it is of course decidable whether a finite frame is a frame for $L$: one just has to check the finitely many axioms on it. If $L$ is not finitely axiomatisable, this might not be the case. In [2], it is shown that the finite-frame problem for the modal product logic $K4.3\times S5$ is decidable. $K4.3\times S5$ is outside the scope of both the known finite axiomatisation results and the non-finite axiomatisability results of [1], so it is not known whether $K4.3\times S5$ is finitely axiomatisable. We will discuss whether this result brings us any closer to either proving non-finite axiomatisability of $K4.3\times S5$, or finding an explicit, possibly infinite, axiomatisation of it.

  1. A. Kurucz and S. Marcelino: Non-finitely axiomatisable two-dimensional modal logics, Journal of Symbolic Logic, vol. 77 (2012).
  2. A. Kurucz and S. Marcelino: Finite frames for $K4.3\times S5$ are decidable, Advances in Modal Logic, Volume 9, College Publications (2012).