Abstracts of talks at the La Roche en Ardenne IFIP WG1.3 meeting (June 3-6, 2006)
=== Saturday, June 3rd
- Andrzej Tarlecki
(Institute of Informatics, Warsaw University and Institute of
Computer Science, Polish Academy of Sciences, Warsaw, Poland)
Towards (institutional) foundations for heterogeneous specifications
The overall aim of this work is to develop semantic foundations for
using multiple logical systems in a single specification and software
development task.
In this talk, I recall the basic technical notions related to
modelling logical systems as institutions, and tempt the audience with
exciting possibilities institutions offer for a whole range of
applications and foundational developments. This provides a basis to
recall the standard view of structured specifications in an arbitrary
institution, where complex specifications may be built gradually from
simpler ones by applying pre-defined, institution-independent
specification-building operations, and of a systematic software
development process, proceeding from an initial requirements
specification to implementation by a number of gradual implementation
steps. In particular, the latter allows for constructor
implementations, where one specification is implemented by another one
via a construction mapping models of the latter to the models of the
former. Now, the goal of this work is to lift these concepts and
ideas to heterogeneous specifications and developments, which may
involve various logical systems to deal with various system components
and various development stages.
An obvious observation is that to make this meaningful, the
institutions to be involved must be linked with each other. A number
of notions of a mapping between institutions have been proposed, each
with its own intuition and technically different formalisation.
Fortunately, it turns out that in a certain way the basic notions are
mutually definable (by replacing maps of one kind by spans of maps of
another kind). Thus, even though technical consequences are different,
in principle one may choose any kind of maps to relate institutions,
yielding a category of institutions with maps of interest as a basic
framework in which the issues of heterogeneity may be discussed. A
heterogeneous logical environment in which one works is then simply
captured as a diagram in this category.
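To make the satisfaction condition behind institutions concrete, here is a minimal propositional-logic sketch in Python (not from the talk; all names are illustrative): a signature morphism translates sentences one way and reduces models the other, and satisfaction is invariant under the pair.

```python
from itertools import product

# Toy institution: a signature is a set of atom names, a model is a truth
# assignment, sentences are nested tuples. All names are invented here.
def models(sig):
    """All truth assignments over the signature's atoms."""
    atoms = sorted(sig)
    return [dict(zip(atoms, bits))
            for bits in product([False, True], repeat=len(atoms))]

def satisfies(model, sentence):
    kind = sentence[0]
    if kind == "atom":
        return model[sentence[1]]
    if kind == "not":
        return not satisfies(model, sentence[1])
    if kind == "and":
        return satisfies(model, sentence[1]) and satisfies(model, sentence[2])

# A signature morphism is a renaming of atoms; it translates sentences
# covariantly and reduces models contravariantly.
def translate(sigma, sentence):
    kind = sentence[0]
    if kind == "atom":
        return ("atom", sigma[sentence[1]])
    if kind == "not":
        return ("not", translate(sigma, sentence[1]))
    if kind == "and":
        return ("and", translate(sigma, sentence[1]),
                       translate(sigma, sentence[2]))

def reduct(sigma, model):
    """Model reduct: interpret each source atom via its image."""
    return {a: model[b] for a, b in sigma.items()}

# The satisfaction condition: M' |= translate(phi) iff reduct(M') |= phi.
sigma = {"p": "x", "q": "y"}
phi = ("and", ("atom", "p"), ("not", ("atom", "q")))
ok = all(satisfies(m, translate(sigma, phi)) ==
         satisfies(reduct(sigma, m), phi)
         for m in models({"x", "y"}))
print(ok)
```

The same shape (signatures, sentence translation, model reducts, invariant satisfaction) is what the abstract notion packages for arbitrary logics.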
Once such a heterogeneous logical environment is given, two basic ways
to move specifications between institutions may be studied. One
corresponds to translation of a specification along a signature
morphism within an institution, and works best when used as a
translation of a specification along an institution comorphism. The
other corresponds to hiding within an institution "against" a
signature morphism, and works best when used as a "hiding" along an
institution morphism. Together with the standard intra-institutional
specification-building operations, these may be used to build
heterogeneous specifications whose fragments live in various
institutions, but which overall are aimed at specifying a class
of models in one particular institution. I call such heterogeneous
specifications "focused". They may be used in the software
specification and development process very much as the usual
structured specifications built within a single institution.
An additional advantage a heterogeneous logical environment offers
here is that the development process itself may migrate between
various logical systems, using the model part of institution
(co)morphisms as additional constructions on models. This applies both
to the development of the system as a whole, as well as to
developments of its individual modules, as arising from CASL-style
architectural specifications.
In addition to these possibilities, I discuss distributed
heterogeneous specifications, formally defined as diagrams of
specifications built over various institutions, linked by morphisms
that in essence consist of two parts: an institution (co)morphism,
used to move from one institution to another, and a signature morphism
within an institution, used to move from one signature to another.
Such a distributed specification captures a multiple-view
specification of a system: the individual specifications in the
diagram capture individual views, and the morphisms provide the
required links between the various views of the same system. A notion
of a "distributed" model for such specifications is discussed, leading
to the concepts of consistency and refinements for such distributed
heterogeneous specifications.
This work is funded in part by the European IST FET programme under
the IST-2005-016004 SENSORIA project.
------------------------------------------------------
- Hans-Joerg Kreowski (U. Bremen, Germany)
Short introduction to the Collaborative Research Centre 637
Autonomous cooperating logistic processes - A paradigm shift and its
limitations
(slides)
Collaborative Research Centres (CRC) are long-term university research
centres for interdisciplinary research funded by the German Research
Foundation (DFG). A CRC can run up to 12 years and is funded with
about one million euros per year. The talk surveys the CRC 637 on Autonomous
Cooperating Logistic Processes that started in 2004 at the University
of Bremen and involves researchers from computer science, economics,
information technology, mathematics, and production engineering. The
objective of the CRC 637 is a systematic and broad investigation and
application of autonomy as a new control paradigm for logistic
processes. At the end of the talk, the particular role of the
Theoretical Computer Science Group within the CRC is sketched.
Further information can be found in my WADT talk and in the following paper:
Karsten Hölscher, Hans-Jörg Kreowski, Sabine Kuske: Autonomous Units and Their Semantics - the Sequential Case. In Proc. Third Intl. Conference on Graph Transformation (ICGT 2006), Lecture Notes in Computer Science. Springer, 2006. To appear.
------------------------------------------------------
- Dominique Duval (U. Grenoble, France)
The category of diagrammatic logics
(slides)
Dominique Duval, partly with Christian Lair
Diagrammatic Logic provides a new algebraic point of view about logic, that seems well suited for studying computational effects in programming languages.
Some aspects of theoretical computer science require the collaboration
of several logics. For instance, the computational effects in a
language can be described with the help of two distinct related
logics, one for the denotational semantics (the models) and the other
one for the inference system of the language: this is the case with
the treatment of exceptions [Duval-Reynaud 2005]. The collaboration of
several logics can be expressed thanks to morphisms in a relevant
category of logics.
Diagrammatic logics generalize Ehresmann's sketches in a rather simple
way, by abstracting one step higher [Duval-Lair 2002, Duval 2003]. A
diagrammatic logic L is defined from a morphism of projective sketches
p. The specifications in the logic L are the realizations of the
source of p and the theories in the logic L are the realizations of
the target of p. The notion of model is defined via the adjunction
associated with p. The same adjunction also yields the deduction
process of L, as soon as p satisfies some assumptions: such a p is
called a propagator. There is a straightforward notion of morphism
between propagators, which gives rise to the category of Diagrammatic
Logics.
See also the WADT talk "About raising and handling exceptions"
by Dominique Duval and Jean-Claude Reynaud (slides and paper).
------------------------------------------------------
=== Sunday, June 4th
- Mark Ryan (U. Birmingham, UK)
Modelling and verifying electronic voting protocols in the applied pi calculus
(slides)
We report on some recent work to formally specify and verify
electronic voting protocols. In particular, we use the formalism of
the applied pi calculus: the applied pi calculus is a formal language
similar to the pi calculus but with useful extensions for modelling
cryptographic protocols. We model several important properties, namely
fairness, eligibility, privacy, receipt-freeness and
coercion-resistance. Verification of these properties is illustrated
on two case studies and has been partially automated using Blanchet's
ProVerif tool.
------------------------------------------------------
- Pascale Le Gall (U. Evry, France)
Conformance testing for input/output symbolic transition systems
We propose an approach to test whether a system conforms to its
specification given in terms of an Input/Output Symbolic Transition System
(IOSTS). IOSTSs use data types to enrich transitions with data-based
messages and guards depending on state variables. We use symbolic
execution techniques both to extract IOSTS behaviours to be tested,
playing the role of test purposes, and to ground a test-case generation
algorithm. Thus, contrary to some existing approaches, our test purposes
are directly expressed as symbolic execution paths of the specification:
they are finite symbolic subtrees of its symbolic execution. Finally, we
give coverage criteria and demonstrate our approach on a running example.
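The core mechanism can be illustrated with a toy Python sketch (machine, guards, and names invented here, not taken from the paper): symbolic execution unfolds an IOSTS into paths, each input introducing a fresh symbolic value and each guard extending the path condition; feasible path conditions characterize the behaviours a test purpose may select.

```python
import itertools

# A tiny IOSTS: locations, one state variable `bal`, and transitions
# (src, action, guard, update, dst). Guards and updates are expression
# strings over `bal`; input actions bind a fresh symbolic input in0, in1, ...
TRANSITIONS = [
    ("idle", "coin?",   "True",      "bal + IN", "wait"),
    ("wait", "serve!",  "bal >= 50", "bal - 50", "idle"),
    ("wait", "refund!", "bal < 50",  "0",        "idle"),
]

def symbolic_exec(loc, bal_expr, path_cond, depth, fresh=0):
    """Unfold the symbolic execution tree; yield (trace, path condition)."""
    if depth == 0:
        yield ([], path_cond)
        return
    for src, act, guard, update, dst in TRANSITIONS:
        if src != loc:
            continue
        f, inp = fresh, None
        if act.endswith("?"):                  # input: fresh symbolic value
            inp, f = "in%d" % fresh, fresh + 1
        g = guard.replace("bal", "(%s)" % bal_expr)
        u = update.replace("bal", "(%s)" % bal_expr)
        if inp is not None:
            u = u.replace("IN", inp)
        for trace, pc in symbolic_exec(dst, u, path_cond + [g], depth - 1, f):
            yield ([act] + trace, pc)

def feasible(pc, n_inputs=2):
    """Naive feasibility check: sample small concrete input values."""
    for vals in itertools.product(range(0, 101, 25), repeat=n_inputs):
        env = {"in%d" % i: v for i, v in enumerate(vals)}
        if all(eval(c, {}, env) for c in pc):
            return True
    return False

for trace, pc in symbolic_exec("idle", "0", [], 2):
    print(trace, [c for c in pc if c != "True"], feasible(pc))
```

A real implementation would discharge path conditions with a constraint solver rather than by sampling; the sketch only shows how test purposes arise as feasible symbolic paths.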
Ref.: Christophe Gaston, Pascale Le Gall, Nicolas Rapin, Assia Touil.
Symbolic Execution Techniques for Test Purpose Definition. 18th IFIP
TC6:WG6.1 International Conference, TestCom 2006, New York, NY, USA, May
2006. pp 1-18, LNCS 3964
------------------------------------------------------
- Laure Petrucci (U. Paris-Nord, France)
Modular Analysis of Petri Nets
(slides)
The systems to be modelled are nowadays very large. Their specification
is often decomposed into several steps, which leads to modularly or
incrementally designed models. Petri net analysis is generally
achieved via state space analysis, which is often impossible to perform
due to the so-called state space explosion problem. Several
methods allow reducing the occurrence graph size, e.g. using partial
orders, symmetries, etc. Here, we focus on techniques which take
advantage of the modular design of the system, and hence build the
state space in a modular or incremental way.
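The state spaces in question are reachability graphs; a minimal marking-graph construction in Python (net and names invented here for illustration) makes the explosion visible: k independent two-state modules yield 2**k global markings, which is exactly the enumeration modular techniques try to avoid.

```python
from collections import deque

def reachable(transitions, m0):
    """BFS over markings: a transition fires if its preset is covered;
    firing consumes the preset and produces the postset."""
    seen, frontier = {m0}, deque([m0])
    while frontier:
        marking = dict(frontier.popleft())
        for pre, post in transitions:
            if all(marking.get(p, 0) >= n for p, n in pre.items()):
                m2 = dict(marking)
                for p, n in pre.items():
                    m2[p] -= n
                for p, n in post.items():
                    m2[p] = m2.get(p, 0) + n
                m2 = frozenset((p, n) for p, n in m2.items() if n)
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen

# One module: place "p" (idle) cycles to "q" (busy) and back.
module = [({"p": 1}, {"q": 1}), ({"q": 1}, {"p": 1})]

# k independent copies: each module alone has 2 local states, but the
# global state space has 2**k markings.
for k in (1, 2, 3, 4):
    net = [({p + str(i): n for p, n in pre.items()},
            {p + str(i): n for p, n in post.items()})
           for i in range(k) for pre, post in module]
    m0 = frozenset(("p" + str(i), 1) for i in range(k))
    print(k, len(reachable(net, m0)))
```

Modular methods keep the per-module graphs (size 2 here) and synchronize them only on shared transitions instead of building the full product.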
------------------------------------------------------
- Nazareno Aguirre (U. Nacional de Rio Cuarto, Argentina)
Analysing properties of execution in (Dyn)Alloy
(Joint work with Marcelo Frias, Juan Pablo Galeotti and Carlos Lopez
Pombo).
In this talk, we will present an extension of the Alloy specification
language, called DynAlloy. This extension allows for the definition of
actions and the specification of assertions regarding execution traces,
in the style of dynamic logic specifications. We will argue about the
benefits of adopting this extended version of Alloy for validating
dynamic properties of systems, and show that DynAlloy specifications are
also amenable to automatic analysis. We will also compare the
analysis of Alloy specifications of executions (via an encoding of
traces) with that of DynAlloy specifications for a number of case studies.
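As a rough illustration of trace-based assertions (a toy Python sketch, not DynAlloy's actual Alloy/SAT-based analysis; all actions and names are invented here): actions are partial state transformers, as in dynamic-logic specifications, and a bounded check enumerates every execution up to a given length against an assertion.

```python
# States: a lock bit plus a memory cell. Actions are partial state
# transformers: they return the next state, or None when inapplicable.
def acquire(s):
    lock, val = s
    return None if lock else (True, val)

def release(s):
    lock, val = s
    return (False, val) if lock else None

def write(s, v=1):
    lock, val = s
    return (lock, v) if lock else None   # writing requires the lock

ACTIONS = [acquire, release, write]

def traces(state, length):
    """All executions of exactly `length` steps (bounded analysis)."""
    if length == 0:
        yield [state]
        return
    for act in ACTIONS:
        nxt = act(state)
        if nxt is not None:
            for rest in traces(nxt, length - 1):
                yield [state] + rest

# Bounded assertion: the cell never changes while the lock was free
# in the preceding state.
init = (False, 0)
violations = [t for t in traces(init, 4)
              for i in range(1, len(t))
              if t[i][1] != t[i - 1][1] and not t[i - 1][0]]
print(len(violations))
```

DynAlloy checks assertions of this kind symbolically and within user-given bounds; the sketch only conveys the shape of "properties of executions" versus static state properties.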
Some papers:
M. Frias, J. Galeotti, C. Lopez Pombo and N. Aguirre, "DynAlloy:
Upgrading Alloy with Actions", in Proceedings of the 27th International
Conference on Software Engineering ICSE 2005, St. Louis, Missouri, USA,
May 2005.
M. Frias, C. Lopez Pombo, G. Baum, N. Aguirre and T. Maibaum, "Reasoning
about Static and Dynamic Properties in Alloy: A Purely Relational
Approach", in ACM Transactions on Software Engineering and Methodology
(TOSEM), Vol. 14, No. 4, ACM Press, October 2005.
M. Frias, G. Baum, C. Lopez Pombo, N. Aguirre and T. Maibaum, "Taking
Alloy to the Movies", in Proceedings of the 12th International Symposium on
Formal Methods FM 2003, Lecture Notes in Computer Science 2805,
Springer-Verlag, Pisa, Italy, September 2003.
------------------------------------------------------
- Leila Ribeiro (U. Federal do Rio Grande do Sul, Porto Alegre, Brasil)
Assume guarantee with object-oriented graph grammars
The development of concurrent and reactive systems is gaining
importance since such systems are well-suited to modern computing
platforms, such as the Internet. However, the development of correct
concurrent and reactive systems is a non-trivial task. Object-based graph grammar
(OBGG) is a visual formal language suitable for the specification of
this class of systems. In previous work, a translation from OBGG to
PROMELA (the input language of the SPIN model checker) was defined,
enabling the verification of OBGG models using SPIN. In this paper we
extend this approach in two different ways: (1) the approach for
property specification is improved, making it possible to prove
properties not only about possible OBGG derivations, but also about the
internal state of the objects involved; (2) an approach is defined to
interpret PROMELA traces as OBGG derivations, generating graphical
counter-examples for properties that do not hold for a given OBGG
model. Another contribution of this paper is (3) the definition of a
method for model checking partial systems (isolated objects or sets of
objects) using an assume-guarantee approach. A gas station system
modeled with OBGGs is used to illustrate the contributions.
Paper reference:
Verifying Object-based Graph Grammars: an Assume-Guarantee Approach.
Fernando Luís Dotti, Leila Ribeiro, Osmar Marchi dos Santos, Fábio Pasini
Accepted for Software and Systems Modeling, Springer, 2006.
DOI: 10.1007/s10270-006-0014-z
------------------------------------------------------
- Fabio Gadducci (U. Pisa, Italy)
Bisimulation from graphical encoding
(DPOs, cospans, relative POs and all that)
(slides)
The talk presents a personal recollection of recent results on the
synthesis of labelled transition systems (LTSs) for process calculi.
The starting point is the graphical techniques introduced by the
speaker and Ugo Montanari for modelling the reduction semantics of
nominal calculi: processes are mapped into graphs equipped with
suitable interfaces, such that the denotation is fully abstract
with respect to the usual structural congruence. Moreover, the use of
non-hierarchical graphs allows for the reuse of standard graph
rewriting theory and tools for simulating the reduction semantics of
the calculus, such as the double pushout (DPO) approach and its
associated concurrent semantics (which allows for the simultaneous
execution of independent reductions, thus implicitly defining a
non-deterministic concurrent semantics).
Another relevant fact is the use of cospan categories for
simulating DPO rewriting, proposed by the speaker with Reiko
Heckel and Merce Llabres Segura. A cospan over the category of graphs
is just a pair of arrows with the same target. To each production in
the DPO style a pair of cospans is associated; then, DPO rewrites are
captured by an inductive closure of the set of pairs of cospans,
mirroring the dichotomy between the operational and the algebraic
description of term rewriting.
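The gluing performed when cospans are composed is a pushout. Over finite sets (simpler than graphs, but the same construction) the pushout of f: K -> L and g: K -> R is the quotient of the disjoint union of L and R by the relation f(k) ~ g(k), computable with union-find; a hypothetical Python sketch:

```python
# Pushout of f: K -> L and g: K -> R in finite sets: disjoint union of
# L and R, quotiented by f(k) ~ g(k) for every k in K. This is the
# gluing step used when composing cospans (over sets, for simplicity).
def pushout(K, L, R, f, g):
    parent = {}

    def find(x):
        while parent.get(x, x) != x:
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    elems = [("L", a) for a in L] + [("R", b) for b in R]
    for x in elems:
        parent.setdefault(x, x)
    for k in K:
        union(("L", f[k]), ("R", g[k]))   # glue images of the interface
    classes = {}
    for x in elems:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# Interface K = {k}; L = {1, 2}, R = {"a", "b"}; glue 1 with "a".
for cls in pushout(["k"], [1, 2], ["a", "b"], {"k": 1}, {"k": "a"}):
    print(sorted(cls))
```

For graphs the same quotient is taken separately on nodes and edges; the sketch only shows the set-level gluing.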
A third component is the fact that graphs with interfaces are just an
instance of a cospan category (over the category of graphs). Hence,
graphs with interfaces are amenable to the synthesis mechanism based
on borrowed contexts (BCs), proposed by Ehrig and Koenig (which are in
turn an instance of relative pushouts, originally introduced by Milner
and Leifer). The BC mechanism allows the effective construction of an
LTS that has graphs with interfaces as both states and labels, and
such that the associated bisimilarity is automatically a congruence.
Since the category of cospans over graphs admits RPOs (as proved by
Sassone and Sobocinski), its choice as the domain of the encoding for
nominal calculi ensures that the synthesis of an LTS can be performed,
and that a compositional observational equivalence is obtained. The
talk discusses the analysis of the LTS distilled by exploiting the
encoding of CCS processes: besides offering some technical
contributions towards the simplification of the BC mechanism, the key
result of this joint work with Bonchi and Koenig is the proof that the
bisimilarity on (encodings of) processes obtained via BCs coincides
with the standard strong bisimilarity for CCS.
------------------------------------------------------
- Markus Roggenbach (U. Swansea, UK)
CSP-CASL: Semantics, Application, Tools
(slides)
Links to papers mentioned in my talk:
M.Roggenbach: CSP-CASL - A New Integration of Process
Algebra and Algebraic Specification. Theoretical Computer
Science, Volume 354, 42 - 71, Elsevier, 2006.
A.Gimblett, M.Roggenbach, H.Schlingloff: Towards a
Formal Specification of an Electronic Payment System in CSP-CASL.
In J.L.Fiadeiro, P.Mosses, F.Orejas (Eds): Recent Trends in
Algebraic Development Techniques. Proceedings of WADT
2004. LNCS 3423, 61-78, Springer, 2005.
Y.Isobe, M.Roggenbach: A generic theorem prover of CSP
refinement. In N.Halbwachs, L.D.Zuck (Eds): Tools and
Algorithms for the Construction and Analysis of
Systems. Proceedings of TACAS 2005. LNCS 3440, 108-123,
Springer, 2005.
------------------------------------------------------
=== Monday, June 5th
- Nicoletta Sabadini (U. Insubria, Varese-Como, Italy)
An algebra for automata: introduction
(abstract)
------------------------------------------------------
- Robert Walters (U. Insubria, Varese-Como, Italy)
Algebra for automata
(abstract)
------------------------------------------------------
- Razvan Diaconescu (IMAR, Bucharest, Romania)
Institution-independent model theory
The theory of institutions is a categorical abstract model
theory which formalises the intuitive notion of logical system,
including syntax, semantics, and the satisfaction between them.
Institutions constitute a model-oriented meta-theory of logics,
similarly to how the theory of rings and modules constitutes a
meta-theory for classical linear algebra.
Another analogy can be made with universal algebra versus groups,
rings, modules, etc.
By abstracting away from the realities of the actual conventional
logics, it can be noticed that institution theory comes in fact closer
to the realities of non-conventional logics.
The notion of institution arose within computing science in the 1980s
in response to the population explosion of logics in use
there, with the ambition of doing as much as possible at a level of
abstraction independent of commitment to any particular logic.
This mathematical paradigm is called 'institution-independent'
computing science or model theory.
Since their definition by Goguen and Burstall, institutions have
become a common tool in the study of algebraic specification theory
and can be considered its most fundamental mathematical structure.
It is already an algebraic specification tradition to have an
institution underlying each language or system, in which all
language/system constructs and features can be rigorously explained as
mathematical entities.
Most modern algebraic specification languages follow this tradition,
including CASL or CafeOBJ.
While the goal of institution-independent formal specification has
been greatly accomplished in the algebraic specification literature,
recently there has been significant progress towards model theory too.
This responds to the feeling shared by some researchers that deep
concepts and results in model theory can be reached in a significant
way via institution theory.
The significance of institution-independent model theory is manifold.
First, it fulfils the main abstract model theory ideal by providing
a uniform generic approach to the model theory of various logics.
This is especially relevant for areas of knowledge involving a great
variety of formal logical systems, most of them unconventional.
An important example comes from computing science in general, and
algebraic specification in particular.
Related to this, institutions also provide an ideal platform for
exporting the rich and powerful body of concepts and methods developed
by conventional model theory to a multitude of unconventional logics.
While conventional `abstract' model theory of Barwise and Feferman
extends first order logic explicitly by abstracting only sentences and
satisfaction and leaving signatures and models concrete and
conventional, institutions axiomatise the relationship between models
and sentences by leaving them abstract.
Because of this lack of commitment to any particular logic,
institutions can therefore be considered a true form of
abstract model theory, some authors even calling this 'abstract
abstract model theory'.
Then, institution-independent model theory has a special
methodological significance.
The institution-independent, top-down way of obtaining a model
theoretic result, or just of viewing a concept, leads to a deeper
understanding which is not suffocated by the (often irrelevant)
details of the actual logic and is guided by structurally clean
causality.
A model theoretic phenomenon is thus decomposed into various
layers of abstract conditions, the concepts being defined and the
results obtained at the most appropriate level of abstraction.
This contrasts with the traditional bottom-up approach in which the
development is done at a given level of abstraction.
Thus concepts come naturally as presumptive features that a "logic"
might exhibit or not.
Hypotheses are kept as general as possible and introduced on a by-need
basis.
Results and proofs are modular and easy to track down, despite
sometimes very deep content.
Another reason for the strength of institution-independent methodology is that
institutions provide the most complete framework for abstract model
theory, emphasising the multi-signature aspect of logics by
considering signature morphisms and model reducts as primary concepts.
Finally, institution theory provides an efficient framework for
doing logic and model theory 'by translation' or 'by borrowing', via a
general theory of mappings (homomorphisms) between institutions.
For example, a certain property P which holds in an institution
I' can also be established in another institution I
provided that we can define a mapping I -> I' which 'respects' P.
Apart from restructuring known model-theoretic methods,
institution-independent model theory has already produced two classes
of new concrete results.
The first class is represented by model theories for a multitude of
less conventional logics which did not previously have a proper one.
Out of institution-independent model theory, even a relatively well
studied area like partial algebra gets with minimal effort (in fact
almost for free!) a well developed and coherent body of advanced model
theoretic concepts and results.
A second class of concrete applications is constituted by new results
in classical model theory obtained by institutional methods.
At the moment of writing this survey, we can report interpolation and
definability for numerous Birkhoff-style axiomatizable fragments of
classical logic and the elegant solution to the
interpolation conjecture for many sorted logic.
The former results reveal a strong causality relationship between
axiomatizability, on the one hand, and interpolation and definability,
on the other hand.
They also dismantle, or revise, the causal relationship between
interpolation and definability.
Perhaps in this second class of applications we can also mention the
considerably facilitated access to highly non-trivial results in
classical model theory, such as the Keisler-Shelah Isomorphism Theorem.
References:
Razvan Diaconescu, `Institution-independent Model Theory', Springer
New York, to appear 2007.
Razvan Diaconescu, Jewels of institution-independent model theory, in
`Algebra, Meaning and Computation', (a Festschrift in honour of
Professor Joseph Goguen), LNCS 4060, Springer Berlin Heidelberg, 2006.
------------------------------------------------------
- Lutz Schroeder (U. Bremen, Germany)
home page
PSPACE bounds for rank 1 modal logic (with Dirk Pattinson)
abstract and slides; the full paper is also online.
@InProceedings{SchroderPattinson06,
author = {Lutz Schr{\"o}der and Dirk Pattinson},
title = {PSPACE Bounds for Rank 1 Modal Logics},
year = {2006},
editor = {Rajeev Alur},
booktitle = {Logic in Computer Science (LICS 06)},
organization = {IEEE},
note = {To appear},
}
------------------------------------------------------
- Christine Choppy (U. Paris-Nord, France)
Problem frames and UML description development (with Gianna Reggio)
(slides)
------------------------------------------------------
=== Tuesday, June 6th
- Michel Bidoit (LSV, ENS Cachan, France)
Proving Behavioral Refinements of COL-Specifications.
(joint work with Rolf Hennicker)
The COL institution (constructor-based observational logic) has been
introduced as a formal framework to specify both generation- and
observation-oriented properties of software systems. In this talk we
consider behavioral refinement relations between COL-specifications
taking into account implementation constructions. We propose a general
strategy for proving the correctness of such refinements by reduction
to (standard) first-order theorem proving with induction. Technically
our strategy relies on appropriate proof rules and on a lifting
construction to encode the reachability and observability notions of
the COL institution.
paper:
Michel Bidoit and Rolf Hennicker. Proving Behavioral Refinements of
COL-Specifications. In Kokichi Futatsugi, Jean-Pierre Jouannaud and
José Meseguer (eds.), Algebra, Meaning and Computation - Essays
dedicated to Joseph A. Goguen on the Occasion of His 65th Birthday,
San Diego, California, USA, June 2006, LNCS 4060. Springer, 2006.
------------------------------------------------------
- Andrea Corradini (U. Pisa, Italy)
Sesqui-pushout vs double-pushout rewriting (joint work with Tobias
Heindel, Frank Hermann and Barbara Koenig)
(slides)
Sesqui-pushout (SqPO) rewriting ("sesqui" means "one and a half"
in Latin) is a new algebraic approach to abstract rewriting in any
category. It specializes to double-pushout (DPO) rewriting and
captures the most relevant part of single-pushout (SPO) rewriting,
namely the conflict-free fragment.
A rewriting step in the SqPO approach is a double square diagram as in
the DPO approach: instead of closing the left square by a pushout
complement, we require a "most general" pullback complement
satisfying a certain universal property which characterizes it
uniquely; as a novelty, we can use non-left-linear rules to capture the
notion of cloning.
After demonstrating the expressiveness of the proposed approach
through a case study modeling an access control system, we analyze the
precise relationship between SqPO rewriting and the classical DPO and
SPO approaches. Finally we study the parallelism theory of SqPO
rewriting in adhesive categories, and prove the local Church-Rosser
theorem for SqPO.
------------------------------------------------------
- Till Mossakowski (U. Bremen, Germany)
Heterogeneous proofs, with an example about relation algebras.
(slides)
There is also a movie with the tool demo, as well as papers and the tool.
The framework of Grothendieck institutions, used as a technical
foundation for heterogeneous specification, can be extended
with proof theory. We here choose the simplest variant, namely
just entailment systems.
We discuss weak amalgamation and Craig interpolation as
prerequisites for completeness of structured theorem proving,
and discuss how to obtain these properties in Grothendieck
institutions. It turns out that they are more easily obtained
in comorphism-based Grothendieck institutions; hence
this is an argument in favour of using institution comorphisms
as the main technical device for heterogeneous specification,
and expressing institution morphisms as spans of comorphisms.
This has all been implemented in the heterogeneous tool set
Hets (http://www.tzi.de/cofi/hets).
We give a sample heterogeneous proof of the correctness of
the composition table for the qualitative spatial calculus
RCC8, when interpreted in topological spaces. The specification
mixes first- and higher-order logic, and the proof is
carried out with the first-order prover SPASS (which is used
to automatically prove many easy theorems) and the
higher-order prover Isabelle (which is used to interactively
prove a small number of more complex theorems).
------------------------------------------------------