Reflection-based tactics are very useful in theorem proving. When doing reflection one defines a type of syntactic expressions that get interpreted in the domain of discourse. By allowing the interpretation function to be partial, one gets a more general method known as partial reflection. In this talk we show how partial reflection can be used to define a tactic for proving equalities in a field in such a way that it will also work in simpler algebraic structures, such as groups and rings.
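The idea can be illustrated with a small Python sketch (the names and datatypes here are mine, not those of the talk's formalization): a type of syntactic expressions is interpreted by a partial function, so the same syntax serves fields, where division denotes except at zero, and simpler structures such as rings, where it never does.

```python
from dataclasses import dataclass
from typing import Optional, Union

# Toy syntax of arithmetic expressions; 'Div' only denotes in
# structures with multiplicative inverses (illustrative encoding).
@dataclass
class Const:
    value: int

@dataclass
class Add:
    left: 'Expr'
    right: 'Expr'

@dataclass
class Mul:
    left: 'Expr'
    right: 'Expr'

@dataclass
class Div:
    left: 'Expr'
    right: 'Expr'

Expr = Union[Const, Add, Mul, Div]

def interp(e: Expr, has_division: bool) -> Optional[float]:
    """Partial interpretation: None where the expression has no
    denotation in the structure at hand."""
    if isinstance(e, Const):
        return float(e.value)
    if isinstance(e, (Add, Mul)):
        l = interp(e.left, has_division)
        r = interp(e.right, has_division)
        if l is None or r is None:
            return None
        return l + r if isinstance(e, Add) else l * r
    # Div case
    if not has_division:
        return None          # undefined in a structure without inverses
    l = interp(e.left, has_division)
    r = interp(e.right, has_division)
    if l is None or r is None or r == 0:
        return None          # division by zero never denotes
    return l / r
```

A tactic built this way can refuse to interpret `Div` in a ring while still reusing the rest of the machinery unchanged.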
This seminar focuses on the development of a suitable theory for circuits. Although many models are based on circuits, these are usually treated as graphs, in a rather informal manner. We believe that this can be the cause of many problems and that a suitable formalization is needed. Hence, we present a theory for continuous- and discrete-time circuits, and prove several results. We also restrict our attention to some special cases and apply this theory to prove basic results concerning Shannon's General Purpose Analog Computer.
One compelling use for a multimodal logic is the situation in which agents are brought together to combine different views on the same matter. However, as in any ordinary discussion, there may be opinions in disagreement and conflicting interests to be reconciled. If all agents begin on an equal footing, in the view of an impartial arbiter, there is no reason not to suppose that they should share a common language, thus flattening the many modalities in toto. Any such situation gives rise to what is known as a discussive system. This talk will survey the work on discussive logics and their modal interpretations, since their origin in Jaśkowski (1948), and show their relation to logics with society semantics. The metatheoretical notions of consistency and determinedness will be shown to be easily internalizable by such logics.
Bounded Arithmetic (Buss' $S_2$ and its subsystems) is significant for at least two reasons:
1. its close connection to computational complexity, and
2. the fact that the exponentiation function cannot be defined.
The latter reason is significant because many theorems of combinatorics and basic number theory can be proven in a straightforward manner if we add an axiom to $S_2$ stating that the exponentiation function is total. So it is an interesting question to consider what theorems can be proven in just $S_2$, without exponentiation. I have focused on Ramsey theory and probabilistic methods, in the context of $S_2$ and its subsystems.
In order to deal with probabilistic methods in the absence of exponentiation, I will use the weak pigeonhole principle to simulate the counting argument, using the fact that the counting argument can be mimicked by a bounded formula. In effect, the non-constructive counting argument has some constructive content. Many of these theorems have some bound in their statement. In order to prove them in $S_2$, the bound must often be made less tight; furthermore, different subsystems of $S_2$ can prove different bounds. An interesting phenomenon arises: a trade-off between the strength of the bound in a theorem and the strength of the subsystem used to prove it.
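One common formulation of the weak pigeonhole principle invoked here (stated as a sketch; the exact form and the bound $2n$ versus $n^2$ vary with the subsystem) is:

```latex
\mathrm{WPHP}(f):\quad
\bigl(\forall x < 2n \;\; f(x) < n\bigr) \;\rightarrow\;
\exists x < y < 2n \;\; f(x) = f(y)
```

that is, no function maps $\{0,\dots,2n-1\}$ injectively into $\{0,\dots,n-1\}$; applied to a bounded formula defining $f$, this replaces the exact counting that exponentiation would make available.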
To this point, I have considered what can be proven in $S_2$ and its subsystems. We would of course also want to know what cannot be proven. In some cases such independence results would solve the hard open question of the non-collapse of the subsystems of $S_2$. I consider a seemingly more feasible alternative: Finding reversals. Often the proof of a theorem will rely heavily on some principle. Borrowing from the terminology of Reverse Mathematics, I use the term reversal to refer to the claim that the theorem in fact implies this principle used to prove it. I will discuss two reversals.
Modal extensions of the Logics of Formal Inconsistency (LFIs) warrant new appropriate approaches to certain questions involving the Paradox of Knowability. We show that any propositional logic $L$ which extends the classical implicative fragment ImPC and which is sound and complete with respect to dyadic semantics can, in turn, be extended to a modal logic $KL\langle j,k,m,n\rangle$ (through the axiom schemas (K) and $G\langle j,k,m,n\rangle$, together with the Necessitation rule) in such a way that $KL\langle j,k,m,n\rangle$ is constructively characterizable by Kripke structures, independently of any properties of negation. In particular, families of modal LFIs can in this way be defined, with applications to questions of knowability.
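For reference, and assuming the standard reading of the parameters, $G\langle j,k,m,n\rangle$ is the usual Lemmon-Scott confluence scheme, taken together with (K):

```latex
G\langle j,k,m,n\rangle:\quad
\Diamond^{j}\Box^{k}\varphi \rightarrow \Box^{m}\Diamond^{n}\varphi
\qquad\qquad
\text{(K)}:\quad
\Box(\varphi\rightarrow\psi)\rightarrow(\Box\varphi\rightarrow\Box\psi)
```

Different choices of $j,k,m,n$ recover familiar axioms (e.g. $j=0,k=1,m=0,n=0$ gives T; $j=1,k=1,m=1,n=1$ gives confluence).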
–
Room P3.10, Mathematics Building
Maxime Crochemore, Institut Gaspard-Monge, U Marne-la-Vallée, France
Classical algorithms for computing the similarity between two sequences use a dynamic programming matrix and compare two strings of size $n$ in $O(n^2)$ time. We address the challenge of computing the similarity of two strings in sub-quadratic time, for metrics which use a scoring matrix of unrestricted weights. Our algorithm applies to both local and global similarity computations. The speed-up is achieved by using the Lempel-Ziv parsing of both strings. It leads to an $O(hn^2/\log n)$ algorithm for an input over a constant-size alphabet, where $h\le 1$ is the entropy of the strings.
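As background, the classical quadratic baseline the talk improves on can be sketched as follows (a minimal Needleman-Wunsch-style global similarity with an arbitrary scoring function and linear gap penalty; the sub-quadratic Lempel-Ziv-based method itself is not reproduced here):

```python
def global_similarity(a, b, score, gap):
    """Classical O(|a|*|b|) dynamic-programming similarity.
    'score' is an arbitrary (unrestricted-weight) scoring function
    on character pairs; 'gap' is the per-gap penalty."""
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + gap
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = max(D[i - 1][j - 1] + score(a[i - 1], b[j - 1]),
                          D[i - 1][j] + gap,    # gap in b
                          D[i][j - 1] + gap)    # gap in a
    return D[n][m]
```

The Lempel-Ziv approach gains its speed-up by not recomputing DP sub-blocks that correspond to repeated phrases of the parse.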
In the tree-like representation of time, a relative closeness relation $C$ on the set of histories can be defined by $C(h,h',h'')\equiv h\cap h'\supset h\cap h''$. We can read $C(h,h',h'')$ as follows: the history $h$ is closer to $h'$ than to $h''$. In the topological approach, time can be presented as a pair $\langle H, O\rangle$, in which $H$ is a set of histories (viewed as primitive entities) and $O\subseteq \wp H$ is a set of neighborhoods on $H$. Suitable assumptions on $O$ allow us to view this set as the set of moments in a tree. In these structures, $C(h,h',h'')$ can be defined as follows: $\exists \alpha\in O: h,h'\in \alpha$ and $h''\not\in\alpha$. I will present a further representation of branching time in which the primitive notions are those of history and of relative closeness relation, so that time can be viewed as a first-order structure $\langle H, C\rangle$. It will be shown that, in this setting, neighborhoods can be defined, and that the conditions under which neighborhoods can be viewed as moments in time are first-order conditions on $C$.
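Modelling histories concretely as sets of moments, the first definition of $C$ is a one-liner (an illustrative sketch with invented names; Python's `>` on sets is exactly proper superset):

```python
def closer(h, h1, h2):
    """C(h, h', h''): h is closer to h' than to h'' when the moments
    h shares with h' strictly include those it shares with h''."""
    return (h & h1) > (h & h2)
```

For example, on a tree with root `r`, a history branching away from `h1` only after moment `a` is closer to `h1` than to one branching off already at `r`.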
We will discuss several continuous time models of computation for which a precise characterization of the computational power compared to classical discrete models can be obtained. In particular, we will talk about piecewise constant derivative systems and (variations of) R-recursive functions. This talk will be based on personal results and an ongoing work with Emmanuel Hainry.
Discrete event and hybrid systems provide methodologies for modeling robotic tasks that are amenable to analysis and synthesis based on (qualitative and quantitative) performance evaluation. Examples of applications to model abstractions, multi-robot task planning, robot population distribution control, and stochastic plan optimization by reinforcement learning will illustrate the directions of work currently pursued at the Intelligent Systems Lab of ISR/IST on this topic.
We generalize the ordinary concept of a multiple-valued matrix by introducing non-deterministic matrices (Nmatrices), in which non-deterministic computations of truth-values are allowed. It is shown that some important logics for reasoning under uncertainty (in particular: many paraconsistent logics) can be characterized by finite Nmatrices, although they have only infinite characteristic ordinary (deterministic) matrices. It is further shown that there is no $n$ such that the use of finite Nmatrices for characterizing logics can be reduced to those with less than $n$ truth-values. A generalized compactness theorem that applies to all finite Nmatrices is then proved. Finally, a strong connection is established between the admissibility of the cut rule in canonical Gentzen-type propositional systems, non-triviality of such systems, and the existence of sound and complete non-deterministic two-valued semantics for them. This connection is used for providing a complete solution for the old Tonk problem of Prior.
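A minimal sketch of evaluation in an Nmatrix (illustrative encoding, with formulas as nested tuples and atoms as strings): at each connective, any value from the nonempty set given by the table may be chosen, so a formula denotes a set of possible truth-values rather than a single one.

```python
from itertools import product

def possible_values(formula, assign, tables):
    """Set of truth-values a formula can take in an Nmatrix under a
    given assignment to atoms: at each connective, any member of the
    (nonempty) set prescribed by the table may be chosen."""
    if isinstance(formula, str):                  # atom
        return {assign[formula]}
    op, args = formula[0], formula[1:]
    results = set()
    for vals in product(*(possible_values(a, assign, tables)
                          for a in args)):
        results |= tables[op][vals]               # non-deterministic step
    return results

# A toy two-valued Nmatrix: classical disjunction, made
# non-deterministic on the input (1, 1).
TABLES = {'or': {(0, 0): {0}, (0, 1): {1}, (1, 0): {1}, (1, 1): {0, 1}}}
```

A legal valuation of the Nmatrix picks one value from each such set, which is what makes finite Nmatrices strictly more expressive than finite deterministic matrices.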
Our purpose is to show a new way of algebraizing logics, called possible-translations algebraic semantics. This new method is able to algebraize some logics known not to be algebraizable (neither in the classical Lindenbaum-Tarski sense, nor even in the extended Blok-Pigozzi sense), as is the case for certain paraconsistent logics. The method is based on the idea of extending the finitary character of Blok-Pigozzi algebraization by means of possible-translations semantics. We show some concrete examples and discuss related problems.
–
Room P3.10, Mathematics Building
Paulo Veríssimo, Faculdade de Ciências, Universidade de Lisboa
The way we do science is changing dramatically. The future will show that we cannot ignore the confluence of different sciences, and of the latter with technology and human factors as we know them today. Otherwise, we will probably compromise the future accuracy and/or the relevance of the results of classical sciences, and thus of their ultimate goal: acquiring knowledge. Not knowing how to take all these challenges into account should no longer be an excuse for not trying. We present some modest conjectures and some practical evidence from research in Informatics, a TechnoScience where the confluence of Computing Science, Computer Technology, and Humans has incubated a natural field for thought experiments along the lines just proposed. In our talk, we digress through themes as diverse as causal and temporal ordering, feedback, ambient intelligence, sentient objects, observation error and coverage of hypotheses, computer crashes and Byzantine failures, determinism and probabilities, uncertainty versus predictability, pervasive computing and pervasive hacking, and malicious intelligence.
Cut elimination for the modal sequent calculi developed by P. Mateus, A. Sernadas, C. Sernadas and L. Viganò is analysed. General results are presented for eliminating cuts over labelled formulae and omega assertions. Sufficient conditions are given for removing cuts over truth-value assertions. Comparison with cut elimination in the context of non-labelled approaches to modal logic is provided. This talk reports ongoing joint work with P. Mateus, C. Sernadas and L. Viganò.
This work-in-progress paper presents a methodology for reasoning about the computational complexity of functional programs extracted from proofs. We suggest a first-order arithmetic Ta which is a syntactic restriction of Peano arithmetic. We establish that the set of functions which are provably total in Ta is exactly the set of polynomial-time functions. Compared to other feasible arithmetics, Ta is surprisingly simple. The main feature of Ta concerns the treatment of quantification: the range of quantifiers is restricted to the set of actual terms, that is, the set of constructor terms with variables. The second feature concerns a restriction on inductive formulas.
Two distinct views of cryptography have been developed over the years in two mostly separate communities: the formal methods community and the computational/probabilistic community. The relation between these two approaches was previously explored in a paper by Abadi and Rogaway in 2000, where they proved the computational soundness of formal encryption with respect to a special type of cryptosystem. We present expansions of the Abadi-Rogaway logic and establish their soundness and completeness for a variety of interpretations. Two such interpretations are discussed in detail: a purely probabilistic one that interprets formal expressions in the One-Time Pad cryptosystem, and another in the so-called type-2 (which-key revealing) cryptosystems based on computational complexity.
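The One-Time Pad interpretation mentioned above can be sketched concretely (a standard textbook OTP over bytes, not the paper's formal semantics): encryption XORs the message with an equal-length key, and under a uniformly random key every ciphertext byte is equally likely, which is what makes the purely probabilistic interpretation work.

```python
def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """One-Time Pad: XOR the message with an equal-length key."""
    assert len(key) == len(message), "OTP key must match message length"
    return bytes(m ^ k for m, k in zip(message, key))

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return otp_encrypt(ciphertext, key)
```

Perfect secrecy shows up as: for a fixed one-byte message, ranging the key over all 256 values produces every possible ciphertext byte exactly once.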
In an attempt to devise a general notion of model for spatial logic, I have been led to consider transition systems with an additional so-called spatial structure on the states, with both the transition and the spatial structures described in coalgebraic terms. The corresponding notion of bisimulation takes into account both structures and bisimilarity is equivalent to logical equivalence, thus extending Hennessy-Milner's result to the present framework. Transition systems with spatial structure can be seen as a noninterleaving model of concurrency, as there exist translations to and from a certain category of Petri nets, which constitute a pair of adjoint functors. The fragment of spatial logic considered will be related to coalgebraic modal logics, and it will be shown how the spatial operators are derived operators in a more primitive coalgebraic modal logic.
The abstract theory of algebraizable logics is a very powerful tool for studying a given target logic using techniques of universal algebra. However, this theory has some limitations to its applicability. One important example is the class of non-truth-functional logics, in particular the paraconsistent systems of da Costa. In this talk, we overview the main concepts and results of the theory of algebraizable logics and present an alternative characterization of these notions in terms of maps between the target logic and unsorted equational logic. Using this characterization, we then propose an extension of the theory in which the role played by unsorted equational logic can be replaced by any other base logic satisfying a few reasonable requirements. We illustrate the approach by exploring the non-truth-functional paraconsistent system C1 of da Costa using two-sorted equational logic as a base.
The advances made in the last decade in algorithms for Boolean Satisfiability (SAT) have motivated the application of SAT solvers in different areas of Computer Science and Engineering, including theorem proving, artificial intelligence, verification of reactive systems, design automation, and cryptography, among others. In the area of reactive systems verification, the best-known use of SAT is in model checking. The objective of this talk is to survey the use of SAT models and algorithms in model checking, emphasizing two specific contexts: bounded and unbounded model checking.
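To fix intuitions on the bounded case: bounded model checking asks whether a bad state is reachable within $k$ steps, and a SAT-based tool answers this by encoding the unrolling $I(s_0) \wedge T(s_0,s_1) \wedge \dots \wedge T(s_{k-1},s_k) \wedge \mathit{Bad}(s_k)$ as a propositional formula handed to a SAT solver. The toy sketch below captures only the semantics of that question, by explicit enumeration instead of a SAT encoding:

```python
def bounded_model_check(init, trans, bad, k):
    """Toy bounded reachability: is a 'bad' state reachable from
    'init' within k applications of the transition function 'trans'?
    A real BMC tool encodes the same unrolling as a SAT instance."""
    frontier = set(init)
    for _ in range(k):
        if any(bad(s) for s in frontier):
            return True
        frontier = {t for s in frontier for t in trans(s)}
    return any(bad(s) for s in frontier)

# Example system: a counter modulo 4, starting at 0.
step = lambda s: {(s + 1) % 4}
```

Unbounded methods (e.g. induction or interpolation on top of SAT) remove the dependence on a fixed $k$.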
Stable Models is a very well-known and frequently used semantics for logic programs. Furthermore, it is the basis for Answer-Set Programming (ASP) and its implementations. However, it suffers from a major problem, in that odd loops over default negation, such as $p \leftarrow \neg p$, can cause programs to have no semantics. This is because of the impossibility of lending inconsistent support to conclusions. Another consequence is that the Stable Models semantics is not relevant (relevance meaning that the truth of a literal may be found through a call-graph top-down derivation procedure), is not cumulative (which prevents it from enjoying tabling or memoizing), and is not subject to partial evaluation. This talk presents a new logic programming semantics, which I have dubbed Revised Stable Models. It includes Stable Models as a special case, and is relevant, cumulative, and partially evaluatable. The new semantics addresses the inconsistency problem by employing a limited form of Reductio ad Absurdum. Moreover, two program transformations into normal logic programs are defined: one for enabling top-down evaluation of literals; another for producing a normal program whose Stable Models are exactly the Revised Stable Models. This enables immediate implementation of the new semantics. A paper with the proposal will be sent beforehand to those so wishing.
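For reference, the standard Stable Models semantics being revised can be sketched by guess-and-check over the Gelfond-Lifschitz reduct (a naive exponential implementation for illustration only; rules are triples of a head atom, a set of positive body atoms, and a set of default-negated body atoms):

```python
def stable_models(program, atoms):
    """All stable models of a normal logic program, by checking every
    candidate interpretation M against the least model of the
    Gelfond-Lifschitz reduct of the program w.r.t. M."""
    atoms = list(atoms)
    models = []
    for bits in range(2 ** len(atoms)):
        M = {a for i, a in enumerate(atoms) if bits >> i & 1}
        # Reduct: drop rules whose negative body intersects M, then
        # delete the negative literals from the remaining rules.
        reduct = [(h, pos) for h, pos, neg in program if not (neg & M)]
        # Least model of the negation-free reduct, by fixpoint iteration.
        least, changed = set(), True
        while changed:
            changed = False
            for h, pos in reduct:
                if pos <= least and h not in least:
                    least.add(h)
                    changed = True
        if least == M:
            models.append(M)
    return models
```

The odd-loop pathology mentioned above is then visible directly: the one-rule program $p \leftarrow \neg p$ has no stable model at all.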
In most branches of mathematics, total functions are the exception rather than the rule. But representing partiality in formal systems like first-order logic (FOL) or type theory is quite tricky, since by their own nature these systems deal only with total objects. With this in mind, Wiedijk and Zwanenburg introduced an extension of FOL where functions are allowed to be partial and showed that their system was equivalent to FOL with so-called “domain conditions”. In this talk we look at the model theory for their system, which has some unexpected differences from the model theory for FOL, and show that the equivalence they proved also holds at the semantic level. As a consequence, we obtain completeness of this system.
Several security protocols make use of the algebraic properties of cryptographic operators, e.g. associativity of XOR or commutativity of exponents in modular exponentiation. Handling algebraic equations in the symbolic analysis of security protocols is however difficult as it requires extending the underlying unification algorithm as well as the protocol and intruder model. A number of approaches have been proposed, but they are complex and specialized to particular equational theories. In this talk, I will survey these approaches and present a general, uniform method to handle algebraic properties in symbolic protocol analysis, based on a restriction on the depth of message terms. As a case study, we have integrated our method into OFMC, a state-of-the-art protocol analysis tool based on symbolic techniques, thereby extending the scope of OFMC to encompass a large number of practically relevant protocols.
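As a flavour of the algebraic reasoning involved, XOR combinations can be normalized modulo associativity, commutativity, $x \oplus x = 0$ and $x \oplus 0 = x$ by cancelling atoms that occur an even number of times (a toy sketch over atomic names, not the tool's actual unification algorithm):

```python
def xor_normal_form(terms):
    """Normal form of an XOR combination of atomic terms: atoms
    occurring an even number of times cancel (x + x = 0), and the
    unit '0' is dropped (x + 0 = x)."""
    result = set()
    for t in terms:
        result ^= {t}        # symmetric difference: 2nd occurrence cancels
    return frozenset(result - {'0'})
```

Two XOR terms are then equal modulo the theory exactly when their normal forms coincide, which is the kind of check a symbolic analyzer must build into its unification.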
StAC (Structured Activity Compensation) is a business process modelling language whose distinctive feature is its support for compensation. A compensation is an action taken to recover from an error or to cope with a change of plan, especially when rollback of a process is not possible. StAC is similar to a process algebraic language such as Hoare's CSP or Milner's CCS, but has additional operators dealing with compensation and with exception handling. This talk presents the StAC language and discusses its operational semantics.
In this talk, I shall introduce a new modal logic that is suitable to model complex systems evolving over time in a modular fashion. This logic may be regarded as the result of extending propositional linear temporal logic by a second dimension that allows us to zoom into states and thereby to further refine the specification of events associated with these states. In this sense, our logic may be described as an extended temporal logic that combines features from both point-based temporal logics and modal interval logics. From a more abstract point of view, our logic is best described as a modal logic based on frames that are ordered trees.
Fibring is recognized as one of the main mechanisms for combining logics. General results for the preservation of soundness and completeness have been established. However, although quite powerful, fibring suffers from an anomaly usually known as the collapsing problem. Indeed, ever since the first accounts of fibring, it has been noticed that fibring classical with intuitionistic logic collapses into just classical logic. In this work, we propose a solution to this problem by adopting a more general notion of fibred semantics using cryptomorphisms. In particular, we show that the novel notion of cryptofibring encompasses the original definition of fibred model, while also admitting (initial) amalgamated models which can be used to show that the above-mentioned collapse is no longer present. We also show that the general results for preservation of soundness and completeness by fibring extend to this wider setting. Conservativeness issues are also addressed. Joint work with Carlos Caleiro.
We consider fibrations in the 2-category Fib of fibrations as a setting in which to model a predicate logic over a polymorphic type theory. Using the lifting/factorisation of adjoints in Fib from [1], we give a structural decomposition of fibrations therein, which we apply to describe the categorical structure necessary to model the above logic. Similarly, we obtain a lifting of simple products for composite fibrations, from which we deduce that the above logic yields a fibration $(p;q) : t \to b$ between $\lambda 2$-fibrations; the structure in $t$ expresses the so-called logical relations for the polymorphic lambda calculus. We complete the setting by adding structural equality in order to express relational parametricity. We show how such fibrations in Fib arise by externalisation of internal ones, and present some relevant constructions.
References
[1] C. Hermida, Some properties of Fib as a fibred 2-category. Journal of Pure and Applied Algebra, 134(1):83-109, 1999.
[2] C. Hermida, Fibrations for abstract multicategories. In Proceedings of the Workshop on Categorical Structures for Descent and Galois Theory, Hopf Algebras and Semiabelian Categories, Fields Institute, Toronto, September 23-28, 2002.
We develop an abstract categorical setting for a probabilistic situation calculus to reason about randomly reactive automata and show that we can set up an institution. We also show that, in the case in which we consider multiple random sources, we have an enlargement of the concept of institution using fibrations over paracategories.
In this talk, we will present a procedure that shows that every Turing machine can be simulated by an analytic map, in an error-robust manner. We also show that these maps can be explicitly obtained. We end the session with some considerations about the possibility of using noise to perform simulations of non-deterministic Turing machines.
The traditional approaches to program analysis provide semantics-based compile-time techniques for statically predicting safe and computable approximations to program behaviours. We present a quantitative approach to program analysis which provides approximate answers (in a way similar to classical program analysis) together with a numerical estimate of the approximation introduced by the analysis. Our numerical information comes from the use of a probabilistic semantics, and in particular of vector-space and linear-algebraic structures for modelling the computational domain. We then show the usefulness of such an approach for computer security by presenting some applications to the problem of confidentiality, and in particular to the quantitative analysis of systems' confinement.
Guessing, or dictionary, attacks arise when an intruder exploits the fact that certain data, like passwords, may have low entropy, that is, stem from a small set of values. In the case of off-line guessing, in particular, the intruder may employ guessed values to analyze the messages he has observed. In this talk, I will present a semantics-based formalization of off-line guessing, giving a deduction system that is uniform and general in that it is independent of the overall protocol model and of the details of the considered intruder model, i.e., the cryptographic primitives, algebraic properties, and intruder capabilities.
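The scenario can be sketched in a few lines (a toy password-hash example with invented names; the talk's deduction system is far more general than this): having observed a message derived from the secret, the intruder checks each candidate guess against it entirely off-line, with no further interaction with the protocol participants.

```python
import hashlib

def offline_guess(observed_digest: bytes, dictionary):
    """Off-line guessing attack: test each low-entropy candidate
    against an observed digest; no interaction with the protocol
    is needed, only the previously observed message."""
    for guess in dictionary:
        if hashlib.sha256(guess.encode()).digest() == observed_digest:
            return guess
    return None
```

The formal question the deduction system answers is precisely when such a check is possible: when the intruder can derive, from his knowledge, a verifier that confirms or refutes a guess.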