Activity-oriented computing promotes the use of models of user activities by software systems. In this talk I will introduce this topic and explore some of its applications such as:
Self-configuration and dynamic adaptation. For this application, activity models capture the needs and preferences of users for each activity; for instance, using a domain ontology to capture needs and a utility-theoretic framework to capture preferences. By interpreting such models at run time, software systems can automatically choose which features to activate, how to configure those features, which of several alternative computation strategies to use given the available resources, and so on.
User mobility in ubiquitous computing. A special case of self-configuration and adaptation, where users move to different locations and want to access their activities with whatever computing capabilities are embedded in the environment (walls, tables, etc.). Challenges include dealing with the heterogeneity of devices and software, scoping service discovery, providing a security and privacy framework, and designing interfaces for users to manage their activities.
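The run-time interpretation of activity models described above can be sketched as follows. This is an illustrative toy (the function and field names are mine, not from the talk): needs are a set of required features, preferences are per-feature utilities, and the system activates the feasible configuration with the highest total utility.

```python
def best_configuration(activity, configurations, available):
    """Pick the highest-utility configuration the environment can support."""
    feasible = [
        c for c in configurations
        if activity["needs"] <= c["features"]     # required features present
        and c["resources"] <= available           # devices actually embedded here
    ]
    utility = lambda c: sum(activity["preferences"].get(f, 0)
                            for f in c["features"])
    return max(feasible, key=utility, default=None)

# An activity that needs audio and prefers (but does not require) video:
activity = {"needs": {"audio"}, "preferences": {"video": 5}}
configs = [
    {"features": {"audio"},          "resources": {"speaker"}},
    {"features": {"audio", "video"}, "resources": {"speaker", "projector"}},
]
# With a projector available, the richer configuration wins; without it,
# the system falls back to audio only.
print(best_configuration(activity, configs, {"speaker", "projector"})["features"])
print(best_configuration(activity, configs, {"speaker"})["features"])
```

The same selection re-runs whenever the user moves and the set of available resources changes, which is how the mobility scenario above reduces to repeated self-configuration.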
In this talk we address the curious phenomenon by which some pairs of distinct logics defined over the same signature (e.g. the paraconsistent logic $P_1$ and the paracomplete logic $I_1$) admit a Blok-Pigozzi algebraization within the same quasi-variety. An apparent paradox then arises: $I_1$ and $P_1$ admit identical algebraizing translations, and so they cannot both be algebraizable within the same quasi-variety. We will show that the paradox vanishes once we take into account that different presentations of the given logics are used in each case. Finally, we present an isomorphism between the category of algebraizable logics and the category of deductivizable quasi-varieties, which sheds light on the above situation. Our categories are based on natural notions of morphism, in contrast to the unusual and restricted notion of morphism used by Jánossy, Kurucz, and Eiben, and so our result can be considered an improvement on the latter.
We develop a notion of spatial-behavioral typing particularly well suited to disciplining interactions and resource usage in concurrent programs. Our type structure reflects a resource-aware model of behavior, where a parallel composition type operator expresses resource independence, a sequential composition type operator expresses implicit synchronization, and a modal type operator expresses resource ownership. Soundness of our type system is established using a logical relations technique, building on an interpretation of types as properties expressible in a spatial logic.
The talk starts with general background illustrating the use of mobile processes as an analysis tool for security issues in ubiquitous computing. In wide-area distributed systems it is now common for mobile code to be transferred from one domain to another; the receiving host may initialise parameters and then execute the code in its local environment. We show that our fine-grained type system for mobile processes can be used to control the effect of such migrating code on local environments. Processes may be assigned different interface types depending on their intended use. The advanced channel-dependent types facilitate the management of access rights and provide host protection from potentially malicious behaviour. I shall also describe various applications of this type-based approach, including secure information flow analysis of mobile code, an analysis of the correctness of optimised distributed Java code, and safety guarantees for business protocols written in Web Services languages.
In many network applications and services, agents that share no secure channel in advance may still wish to communicate securely with each other. In such settings, one often settles for achieving security goals weaker than authentication, such as sender invariance. Informally, sender invariance means that all messages that seem to come from the same source actually do, where the source can perhaps only be identified by a pseudonym. This implies, in particular, that the relevant parts of messages cannot be modified by an intruder. In this talk, I will discuss the first formal definition of sender invariance as well as a stronger security goal that we call strong sender invariance. Both kinds of sender invariance are closely related to, and entailed by, weak authentication, the primary difference being that sender invariance is designed for the context where agents can only be identified pseudonymously. In addition to clarifying how sender invariance and authentication are related, this result shows how a broad class of automated tools can be used for the analysis of sender invariance protocols. As a case study, I will describe the analysis of two sender invariance protocols using the OFMC back-end of the AVISPA Tool. I will also consider the formalization of receiver invariance and other security goals based on pseudonyms.
–
Conference Room, Instituto de Telecomunicações, IST
João Barros, Faculdade de Ciências, Universidade do Porto
Recent theoretical and practical work has shown that novel physical layer security techniques have the potential to significantly strengthen the security of wireless networks. In the first part of this talk we will briefly review the fundamentals in information-theoretic security and discuss our most recent results. Formulating the problem as one in which two legitimate partners communicate over a quasistatic fading channel and an eavesdropper observes their transmissions through another independent quasistatic fading channel, we define the secrecy capacity in terms of outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. In sharp contrast with known results for Gaussian wiretap channels (without feedback), our results show that in the presence of fading information-theoretic security is achievable even when the eavesdropper has a better average signal-to-noise ratio (SNR) than the legitimate receiver. In the second part of the talk, we will propose a practical security scheme by which two terminals (say Alice and Bob) are able to exploit the randomness of wireless fading channels to exchange data in an information-theoretically secure way. To ensure that a potential eavesdropper (say Eve) is unable to decode any useful information, Alice sends useful symbols to Bob only when the instantaneous secrecy capacity is strictly positive. In the remaining time, a specially designed class of LDPC codes is used for reconciliation, thus allowing the extraction of a secret key, which can be distilled using privacy amplification. We believe this opportunistic approach can be used effectively as a physical layer complement to existing cryptographic protocols. Joint work with Miguel Rodrigues (Princeton/DCC-FCUP), Matthieu Bloch and Steve McLaughlin (Georgia Tech).
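For concreteness, the Gaussian-wiretap baseline that the fading result is contrasted against can be written as follows; here $\gamma_B$ and $\gamma_E$ denote the instantaneous SNRs of the legitimate receiver and of the eavesdropper (notation mine, not necessarily the talk's):

```latex
% Instantaneous secrecy capacity of the Gaussian wiretap channel:
% positive only when Bob's channel is better than Eve's.
C_s = \bigl[\log_2(1+\gamma_B) - \log_2(1+\gamma_E)\bigr]^{+}
```

Under quasistatic fading, $\gamma_B$ and $\gamma_E$ fluctuate independently, so $C_s > 0$ holds with nonzero probability even when Eve's average SNR exceeds Bob's; the opportunistic scheme in the second part of the talk transmits useful symbols precisely during those instants.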
Note the exceptional location. The seminar will also be available through live broadcast at http://www.it.pt/auto_temp_web_page_preview.asp?id=174 using Internet Explorer.
–
Room P4.35, Mathematics Building
João Sobrinho, Instituto de Telecomunicações, Instituto Superior Técnico
Abstract algebra has a unifying and illuminating presence in many branches of computer science and electrical engineering, from logic and language theory to coding and cryptography. In this talk, I will present an algebraic theory of dynamic network routing, with the goal of establishing fundamental results common to the various routing paradigms currently found in the Internet, ranging from intra-domain quality-of-service routing to inter-domain policy-based routing. The results concern the existence, uniqueness, and optimality of network potentials, together with the convergence of routing protocols to those network potentials. At heart, the algebraic constructs relevant to routing differ from semirings and dioids in that the distributive law is not required. This law is associated with optimality: its absence in our algebraic framework reflects the Internet's inter-domain routing emphasis on global connectivity rather than on global optimality. Distance-vector and link-state protocols are the most common routing protocols in use in the Internet, the former in both inter- and intra-domain routing, the latter in intra-domain routing. I will show that distance-vector protocols and some of their variations operate correctly under very mild algebraic conditions, whereas link-state protocols require the distributive law even for basic correctness. A great many applications arise as particular instances of the algebraic framework, and these will be shown throughout the talk.
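The failure of distributivity mentioned above is easy to exhibit concretely. The toy check below is my own illustration, not an example from the talk: in the "shortest-widest path" discipline (prefer the widest path, break ties by cost), extending the preferred path need not yield the preferred extension.

```python
def extend(link, path):
    """Extend a path (cost, width) over a link: add costs, take min width."""
    return (link[0] + path[0], min(link[1], path[1]))

def prefer(a, b):
    """Summary operator: prefer greater width, then smaller cost."""
    return a if (a[1], -a[0]) > (b[1], -b[0]) else b

link = (1, 5)            # cost 1, bandwidth 5
g, h = (5, 10), (1, 6)   # two candidate paths toward the destination

lhs = extend(link, prefer(g, h))                # extend the preferred path
rhs = prefer(extend(link, g), extend(link, h))  # prefer among the extensions
print(lhs, rhs)  # (6, 5) vs (2, 5): the distributive law fails
```

Both extensions are clipped to width 5 by the link, so the cheaper path $h$ wins after extension even though $g$ was preferred before it. This is the kind of algebra under which, per the talk, distance-vector computation can still be correct while link-state computation cannot.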
Note the exceptional time and room.
–
Room P3.10, Mathematics Building
João Sobrinho, Instituto de Telecomunicações, Instituto Superior Técnico
Abstract algebra has a unifying and illuminating presence in many branches of computer science and electrical engineering, from logic and language theory to coding and cryptography. In this talk, I will present an algebraic theory of dynamic network routing, with the goal of establishing fundamental results common to the various routing paradigms currently found in the Internet, ranging from intra-domain quality-of-service routing to inter-domain policy-based routing. The results concern the existence, uniqueness, and optimality of network potentials, together with the convergence of routing protocols to those network potentials. At heart, the algebraic constructs relevant to routing differ from semirings and dioids in that the distributive law is not required. This law is associated with optimality: its absence in our algebraic framework reflects the Internet's inter-domain routing emphasis on global connectivity rather than on global optimality. Distance-vector and link-state protocols are the most common routing protocols in use in the Internet, the former in both inter- and intra-domain routing, the latter in intra-domain routing. I will show that distance-vector protocols and some of their variations operate correctly under very mild algebraic conditions, whereas link-state protocols require the distributive law even for basic correctness. A great many applications arise as particular instances of the algebraic framework, and these will be shown throughout the talk.
The exogenous semantic approach for enriching a given base logic was introduced by Mateus and Sernadas in order to obtain an exogenous quantum logic that extends classical propositional logic (CPL). In this enrichment we can distinguish two important intermediate steps: globalization and probabilization. In this work we propose an algebraic study of Exogenous Global Propositional Logic (EGPL) and Exogenous Probabilistic Propositional Logic (EPPL). We start by introducing a many-sorted approach to algebraizability, suitable for reasoning about logics that, like EGPL and EPPL, have rich (many-sorted) languages. We show that EGPL over a base logic L, EGPL(L), is always algebraizable. Moreover, when L is also algebraizable, we can recover the algebraic counterpart of L using behavioral reasoning. We also show that EPPL(CPL) is algebraizable and present an equivalent algebraic semantics for it. Joint work with Carlos Caleiro.
For a symmetric monoidal category $\mathbf{C}$ and a given collection of objects $\mathbf{W}$, we provide a construction of $\mathbf{C}\{\mathbf{W}\}$, which results from $\mathbf{C}$ by adding a monoidal indeterminate $x_O \colon I \to O$ for every object $O$ of $\mathbf{W}$. As a special case, we solve the problem of explicitly adding an indeterminate to a single object $O$ in $\mathbf{C}$. Furthermore, if $\mathbf{C}$ is closed, so is $\mathbf{C}\{\mathbf{W}\}$. By suitably restricting the morphisms in $\mathbf{C}\{\mathbf{W}\}$, we obtain the categories of possible worlds of Oles and Tennent, thereby providing a universal characterisation for these constructions.
Most techniques for combining logics are devoted to producing new logics from given ones; this is the case, for instance, of fibring and fusion. The other side of the coin, that is, how to decompose a logic into others ('splitting' a logic, as opposed to 'splicing' logics), has been less explored in the literature. Possible-Translations Semantics (PTS's), introduced by W. Carnielli in the 1990s, are one of the main representatives of this approach, and several applications of PTS's have been obtained, including a generalization of the Blok-Pigozzi methods for algebraizing logics. In this talk we propose a wide notion of morphism between propositional signatures based on multifunctions, and prove that the category of signatures with such morphisms has products. This category is used as a basis for a generalization of PTS's called Possible-Translations Coverings (PTC's). The possibility of obtaining algebraizability of logics via PTC's will also be discussed.
Usually, when deriving algebraizability conditions for propositional Hilbert-style deductive systems, semantical considerations are used explicitly. By formalizing the metalogic of these proofs it is possible to restrict them to the formula algebra only. The advantages of this approach include more structured proofs, paralleling those for Gentzen-style systems, as well as syntactical clarifications and ramifications of previously known results. To this end, for a given Hilbert-style deductive system $S$, we define an abstract Gentzen-style system that formalizes some aspects of metalogical reasoning about $S$, and then use this Gentzen system to derive the protoalgebraicity condition.
Probabilistic programming refers to programming in which the probabilities of the values of variables are of interest. For example, if we know the probability distribution from which the inputs are drawn, we may calculate the probability distribution of the outputs. We may introduce a programming notation whose result is known only probabilistically. A formalism for probabilistic programming was introduced by Kozen, and further developed by Morgan, McIver, Seidel and Sanders. Their work is based on the predicate transformer semantics of programs; it generalizes the idea of predicate transformer from a function that produces a boolean result to a function that produces a probability result. The work of Morgan et al. is particularly concerned with the interaction between probabilistic choice and nondeterministic choice, which is required for refinement.
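The generalization from boolean predicate transformers to probability-valued ones can be sketched directly. The following is a minimal illustration under my own naming, not Kozen's or Morgan et al.'s actual formalism: a "predicate" is a function from states to values in [0, 1], probabilistic choice becomes a convex combination, and demonic nondeterministic choice becomes a worst-case minimum.

```python
def assign(var, expr):
    """wp of the assignment var := expr: substitute expr in the postcondition."""
    return lambda post: lambda s: post({**s, var: expr(s)})

def pchoice(p, t1, t2):
    """wp of probabilistic choice S1 [p] S2: convex combination of the wp's."""
    return lambda post: lambda s: p * t1(post)(s) + (1 - p) * t2(post)(s)

def nchoice(t1, t2):
    """wp of demonic nondeterministic choice: the worst-case (minimum)."""
    return lambda post: lambda s: min(t1(post)(s), t2(post)(s))

# The program  x := 0  [0.7]  x := 1
prog = pchoice(0.7, assign("x", lambda s: 0), assign("x", lambda s: 1))
post = lambda s: 1.0 if s["x"] == 0 else 0.0   # the "predicate" [x = 0]
print(prog(post)({}))   # probability 0.7 of establishing x = 0

# Replacing the probabilistic choice by a demonic one gives the worst case:
demon = nchoice(assign("x", lambda s: 0), assign("x", lambda s: 1))
print(demon(post)({}))  # 0.0 -- the demon can always pick x := 1
```

The interaction the abstract mentions is visible here: nesting `nchoice` inside `pchoice` yields guaranteed lower bounds on probabilities, which is exactly what refinement must preserve.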
Boolean algebra is simpler than number algebra, with applications in programming, circuit design, law, specifications, mathematical proof, and reasoning in any domain. So why is number algebra taught in primary school and used routinely by scientists, engineers, economists, and the general public, while boolean algebra is not taught until university, and not routinely used by anyone? A large part of the answer may be in the terminology and symbols used, and in the explanations of boolean algebra found in textbooks. This paper points out some of the problems delaying the acceptance and use of boolean algebra, and suggests some solutions.
Given a specific language $L$ and the corresponding set of formulas $F$, we consider the class of all sets of formulas (the power set of $F$). Each such set can be viewed as a logic, taken as a set of tautologies. Relations such as inclusion can be used to compare these logics, and this leads to a structure that we call the world of possible logics of type $L$. Elements of such a structure can in fact be considered as bivaluations (taking their characteristic functions) and relations between elements as accessibility relations. This leads to the concept of a UKS: Universal Kripke Structure. We will show how such general concepts can be used to develop universal logic, i.e., a general theory of logics, and can be applied to specific constructions such as a logic of imagination.
In this talk I will discuss a model of long-running transactions (LRT) within the framework of the CSP process algebra. Standard transactions deal with faults by preserving an all-or-nothing atomicity property. In the context of LRT, which involve the coordination and interaction of activities executed by multiple partners, atomicity is not preserved. In LRT, fault handling can instead be achieved by defining compensating actions for those actions that cannot be undone automatically. cCSP, a compensating extension of CSP, provides constructs for transaction coordination with support for compensation.
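The compensation discipline described above can be sketched operationally. This is a saga-style toy in my own notation, not cCSP syntax: each step pairs an action with a compensating action, and on a fault the compensations of the completed steps run in reverse order instead of an atomic rollback.

```python
def run_transaction(steps):
    """steps: list of (action, compensation) pairs of zero-argument callables."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):  # compensate completed work
            compensation()
        return "compensated"
    return "committed"

log = []

def book_car():
    raise RuntimeError("no cars available")  # the third partner fails

result = run_transaction([
    (lambda: log.append("book flight"), lambda: log.append("cancel flight")),
    (lambda: log.append("book hotel"),  lambda: log.append("cancel hotel")),
    (book_car,                          lambda: log.append("cancel car")),
])
print(result, log)
# compensated ['book flight', 'book hotel', 'cancel hotel', 'cancel flight']
```

Note that the effects of the first two bookings were observable before the fault, which is precisely why atomicity is not preserved and compensation is needed.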
We develop a logic which is a variant of the continuous logics introduced by Chang and Keisler in 1966. We show that it is a common generalization of classical first-order logic as well as Henson's logic for Banach spaces, and that many good properties of first-order logic (such as compactness) carry over. Continuous logic can axiomatize natural classes arising in functional analysis, dynamics, and the theory of representations of locally compact groups, and is a suitable and convenient framework for the model-theoretic study of classes of metric spaces. While classical model theory is applied mostly to algebraic structures, our framework pushes the boundary of where logic can be used out to the much broader context of metric spaces equipped with continuous extra structure.
We consider a simple language of processes and an expressive language of formulas. The former is the fragment of Milner's CCS without non-deterministic choice, while the latter is inspired by the Spatial Logic of Caires and Cardelli and by the Process Logic of Milner. We adopt a "propositions as types" approach, where types are formulas, the semantics is established through structural congruence and labelled transitions, and subtyping is interpreted as a kind of logical entailment. A type system is defined as a deductive system, and some results are proved about it, namely weak consistency and completeness. Finally, an application of the system is developed, illustrating its potential.
We introduce two modal natural deduction systems that are suitable to represent and reason about transformations of quantum registers in an abstract, qualitative, way. Quantum registers represent quantum systems, and can be viewed as the structure of quantum data for quantum operations. Our systems provide a modal framework for reasoning about operations on quantum registers (unitary transformations and measurements), in terms of possible worlds (as abstractions of quantum registers) and accessibility relations between these worlds. We give a Kripke-style semantics that formally describes quantum register transformations and prove the soundness and completeness of our systems with respect to this semantics. Joint work with Andrea Masini and Margherita Zorzi.
We are interested in applying formal verification techniques to prove the security of quantum systems, in order to take advantage of the theories, techniques and tools of classical security analysis. In particular we focus on the process calculus approach. We review previous work on process calculus for classical and quantum systems, and identify a mismatch between typical security statements in the classical process calculus world and the quantum information-theoretic world. To bridge this gap, we define a new notion of information-theoretic indistinguishability in process calculus, and prove that it is implied by an existing notion of probabilistic indistinguishability. Joint work with Pedro Adão and Paulo Mateus.