My research interests lie at the interface between computer science and logic, in particular proof theory.
Research projects I am currently involved in:
Research projects I was involved in:
Most of the papers available from this page appear in print, and the corresponding copyright is held by the publisher.
The papers may be downloaded for personal use; redistribution or reprinting for commercial purposes is prohibited.
We give a geometric condition that characterizes the differential nets having a finitary interpretation in finiteness spaces: visible acyclicity. This is based on visible paths, an extension to differential nets of a class of paths we introduced in the framework of linear logic nets. The characterization is then carried out as follows: the differential nets having no visible cycles are exactly those whose interpretation is a finitary relation. Visible acyclicity discloses a new kind of correctness for the promotion rule of linear logic, which goes beyond sequent calculus correctness.
We give a semantic account of the execution time (i.e. the number of cut elimination steps leading to the normal form) of an untyped MELL net. We first prove that: 1) a net is head-normalizable (i.e. normalizable at depth 0) if and only if its interpretation in the multiset based relational semantics is not empty, and 2) a net is normalizable if and only if its exhaustive interpretation (a suitable restriction of its interpretation) is not empty. We then give a semantic measure of execution time: from the interpretations of two cut-free nets, we can compute the number of cut elimination steps leading to a cut-free normal form of the net obtained by connecting them by means of a cut-link. These results are inspired by similar ones obtained by the first author for the untyped lambda-calculus.
We study the notion of solvability in the resource calculus, an extension of the lambda-calculus that models resource consumption. Since this calculus is non-deterministic, two different notions of solvability arise: one optimistic (angelic, may) and one pessimistic (demonic, must). We give a syntactical, operational and logical characterization of may-solvability, and only a partial characterization of must-solvability. Finally, we discuss the open problem of a complete characterization of must-solvability.
The paper contains the first complete proof of strong normalization (SN) for full second order linear logic (LL): Girard's original proof uses a standardization theorem which is not proven. We introduce sliced pure structures (sps), a very general version of Girard's proof-nets, and we apply to sps Gandy's method to infer SN from weak normalization (WN). We prove a standardization theorem for sps: if WN without erasing steps holds for an sps, then it enjoys SN. A key step in our proof of standardization is a confluence theorem for sps, obtained by using only a very weak form of correctness, namely acyclicity slice by slice. We conclude by showing how standardization for sps allows one to prove SN of LL, using as usual Girard's reducibility candidates.
We study full completeness and syntactical separability of MLL proof nets with the mix rule. The general method we use consists first in addressing the two questions in the less restrictive framework of proof structures, and then in adapting the results to proof nets. At the level of proof structures, we find a semantical characterization of their interpretations in relational semantics, and we define an observational equivalence which is proved to be the equivalence induced by cut elimination. Hence, we obtain a semantical characterization (in coherent spaces) and an observational equivalence for the proof nets with the mix rule.
Finding a denotational semantics for higher order quantum computation is a long-standing problem in the semantics of quantum programming languages. Most past approaches to this problem fell short in one way or another, either limiting the language to an unusably small finitary fragment, or giving up important features of quantum physics such as entanglement. In this paper, we propose a denotational semantics for a quantum lambda calculus with recursion and an infinite data type, using constructions from quantitative semantics of linear logic.
Probabilistic coherence spaces (PCOH) yield a semantics of higher-order probabilistic computation, interpreting types as convex sets and programs as power series. We prove that the equality of interpretations in PCOH characterizes the operational indistinguishability of programs in PCF with a random primitive. This is the first result of full abstraction for a semantics of probabilistic PCF. The key ingredient relies on the regularity of power series and introduces, in denotational semantics, techniques coming from calculus. Along the way to the theorem, we design a weighted intersection type assignment system giving a logical presentation of PCOH.
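To make the phrase "programs as power series" concrete, here is a small sketch in the usual PCOH notation (a standard textbook example, assumed rather than quoted from the paper): booleans are interpreted by sub-probability distributions over the two truth values, and programs act on them as power series.

```latex
% Sketch, standard PCOH notation (assumed). The type Bool has web {t, f}
% and is interpreted by the sub-probability distributions over it:
\[
  \mathcal{P}(\mathbf{Bool}) \;=\; \{\, (p_t, p_f) \in \mathbb{R}_{\ge 0}^2 \;:\; p_t + p_f \le 1 \,\}.
\]
% A fair coin denotes the point (1/2, 1/2); a program of type Bool => Bool
% denotes a power series, e.g. "if x then coin else false" denotes the map
\[
  (p_t, p_f) \;\longmapsto\; \Big(\tfrac{1}{2}\, p_t,\;\; \tfrac{1}{2}\, p_t + p_f\Big).
\]
```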
The Taylor expansion of lambda-terms, as introduced by Ehrhard and Regnier, expresses a lambda-term as a series of multi-linear terms, called simple terms, which capture bounded computations. Normal forms of Taylor expansions give a notion of infinitary normal forms, refining the notion of Böhm trees in a quantitative setting. We give the algebraic conditions over a set of normal simple terms which characterize the property of being the normal form of the Taylor expansion of a lambda-term. From this full completeness result, we give further conditions which semantically describe normalizable and total lambda-terms.
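For readers unfamiliar with the construction, the Ehrhard-Regnier Taylor expansion $\mathcal{T}(\cdot)$ can be sketched as follows (one common presentation; the square brackets denote the finite multiset, or bag, of approximants of the argument, each used exactly once):

```latex
\[
  \mathcal{T}(x) = x, \qquad
  \mathcal{T}(\lambda x.M) = \lambda x.\,\mathcal{T}(M), \qquad
  \mathcal{T}(M\,N) \;=\; \sum_{n \ge 0} \frac{1}{n!}\;
    \mathcal{T}(M)\,\bigl[\underbrace{\mathcal{T}(N), \dots, \mathcal{T}(N)}_{n \text{ copies}}\bigr],
\]
% extended by multilinearity: each summand is a simple term in which the
% function is applied to a bag of n linear copies of its argument.
```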
The category Rel of sets and relations yields one among the simplest denotational semantics of Linear Logic (LL). It is known that Rel is the biproduct completion of the Boolean ring. We consider the generalization of this construction to an arbitrary continuous semiring R, producing a cpo-enriched category which is a semantics of LL, and its (co)Kleisli category is an adequate model of an extension of PCF, parametrized by R. Specific instances of R allow us to compare programs not only with respect to "what they can do", but also "in how many steps" or "in how many different ways" (for non-deterministic PCF) or even "with what probability" (for probabilistic PCF).
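As an illustration of how the parameter R changes what the semantics measures, here is a minimal executable sketch (illustrative names, not the construction from the paper): the outcomes of a non-deterministic program are weighted in a semiring R, and instantiating R recovers the three readings mentioned above.

```python
# Minimal sketch of R-weighted relations, with the semiring R as a parameter.
# Names and encoding are illustrative, not those of the paper.

class Semiring:
    def __init__(self, zero, one, add, mul):
        self.zero, self.one, self.add, self.mul = zero, one, add, mul

# Three instances of R from the abstract:
BOOL = Semiring(False, True, lambda a, b: a or b, lambda a, b: a and b)  # "what they can do"
NAT  = Semiring(0, 1, lambda a, b: a + b, lambda a, b: a * b)            # "in how many ways"
PROB = Semiring(0.0, 1.0, lambda a, b: a + b, lambda a, b: a * b)        # "with what probability"

def choice(R, branches):
    """Non-deterministic choice: sum, in R, the weights of each outcome."""
    out = {}
    for weight, value in branches:
        out[value] = R.add(out.get(value, R.zero), weight)
    return out

def compose(R, f, g):
    """Composition of weighted relations f: A -> B and g: B -> C,
    summing over the intermediate point and multiplying weights in R."""
    out = {}
    for (a, b), w1 in f.items():
        for (b2, c), w2 in g.items():
            if b == b2:
                out[(a, c)] = R.add(out.get((a, c), R.zero), R.mul(w1, w2))
    return out

# Two ways of producing 'tt' and one of producing 'ff':
branches = [(NAT.one, 'tt'), (NAT.one, 'tt'), (NAT.one, 'ff')]
# choice(NAT, branches) == {'tt': 2, 'ff': 1}   (counting reduction paths)
```

In BOOL the same choice merely records that both outcomes are possible, while in PROB weights 1/4, 1/4, 1/2 yield the distribution assigning 1/2 to each outcome.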
We consider the call-by-value $\lambda$-calculus extended with a may-convergent non-deterministic choice and a must-convergent parallel composition. Inspired by recent works on the relational semantics of linear logic and non-idempotent intersection types, we endow this calculus with a type system based on Girard's so-called second translation of intuitionistic logic into linear logic. We prove that a term is typable if and only if it is converging, and that its typing tree carries enough information to give a bound on the length of its lazy call-by-value reduction. Moreover, when the typing tree is minimal, such a bound becomes the exact length of the reduction.
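For context, Girard's second (call-by-value, or "boring") translation mentioned above maps intuitionistic implication as follows (standard presentation, assumed rather than quoted from the paper):

```latex
\[
  (A \Rightarrow B)^{\bullet} \;=\; {!}\,(A^{\bullet} \multimap B^{\bullet}),
\]
% so that a value of functional type is duplicable and discardable, matching
% the call-by-value discipline in which only values may be copied.
```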
We study probabilistic coherence spaces — a denotational semantics interpreting programs by power series with non-negative real coefficients. We prove that this semantics is adequate for a probabilistic extension of the untyped $\lambda$-calculus: the probability that a term reduces to a head normal form is equal to its denotation computed on a suitable set of values. The result gives, in a probabilistic setting, a quantitative refinement of the adequacy of Scott's model for the untyped $\lambda$-calculus.
We study the resource calculus, an extension of the $\lambda$-calculus that models resource consumption. We achieve an internal separation result, in analogy with Böhm's theorem for the $\lambda$-calculus. We define an equivalence relation on the terms, which we prove to be the maximal non-trivial congruence on normalizable terms respecting $\beta$-reduction and sum idempotency. It is significant that this equivalence extends the usual $\eta$-equivalence and is related to Ehrhard's Taylor expansion — a translation mapping terms into series of finite resources.
The resource calculus is an extension of the lambda-calculus that models resource consumption. Namely, the argument of a function comes as a finite multiset of resources, which in turn can be either linear or reusable, giving rise to non-deterministic choices, expressed by a formal sum. Using the lambda-calculus terminology, we call solvable a term that can interact with the environment: solvable terms represent meaningful programs. Because of the non-determinism, different definitions of solvability are possible in the resource calculus. Here we study the optimistic (angelic, or may) notion, and so we call a term solvable whenever there is a simple head context reducing the term into a sum where at least one addend is the identity. We give a syntactical, operational and logical characterization of this kind of solvability.
We study the resource calculus — the non-lazy version of Boudol's lambda-calculus with resources. In such a calculus arguments may be finitely available and mixed, giving rise to nondeterminism, modelled by a formal sum. We define parallel reduction in resource calculus and we apply, in such a nondeterministic setting, the technique by Tait and Martin-Löf to achieve confluence. Then, slightly generalizing a technique by Takahashi, we obtain a standardization result.
Recently Ehrhard and Regnier have introduced Differential Linear Logic, DILL for short — an extension of the Multiplicative Exponential fragment of Linear Logic that is able to express non-deterministic computations. They examined the cut-elimination of the promotion-free fragment of DILL by means of a proof-net-like calculus: differential interaction nets. We extend this analysis to exponential boxes and prove the Cut-Elimination Theorem for the whole DILL: every differential net that is sequentializable can be reduced to a cut-free net.
Linear Logic is based on the analogy between algebraic linearity (i.e. commutation with sums and with products by scalars) and linearity in computer science (i.e. calling inputs only once). Pursuing this analogy, Ehrhard and Regnier introduced Differential Linear Logic (DILL) — an extension of Multiplicative Exponential Linear Logic with differential constructions. In this setting, promotion (the logical exponentiation) can be approximated by a sum of promotion-free proofs of DILL, via Taylor expansion. We present a constructive way to invert the Taylor expansion. Precisely, we define merging reduction — a rewriting system which merges a finite sum of DILL proofs into a proof with promotion whenever the sum is an approximation of the Taylor expansion of this proof. We prove that this algorithm is sound, complete, and can be run in non-deterministic polynomial time.
Differential interaction nets (DIN) have been introduced by Thomas Ehrhard and Laurent Regnier as an extension of linear logic proof-nets. We prove that DIN enjoy an internal separation property: given two different normal nets, there exists a dual net separating them, in analogy with Böhm's theorem for the lambda-calculus. Our result implies in particular the faithfulness of every non-trivial denotational model of DIN (such as Ehrhard's finiteness spaces). We also observe that internal separation does not hold for linear logic proof-nets: our work points out that this failure is due to the fundamental asymmetry of linear logic exponential modalities, which are instead completely symmetric in DIN.
We give a geometric condition that characterizes the MELL proof structures whose interpretation is a clique in the non-uniform coherent spaces: visible acyclicity. We define visible paths and we prove that the proof structures having no visible cycles are exactly those whose interpretation is a clique. It turns out that visible acyclicity also has nice computational properties; in particular, it is stable under cut reduction.
We prove the conservation theorem for differential nets, the graph-theoretical syntax of the differential extension of Linear Logic (Ehrhard and Regnier's DiLL). The conservation theorem states that the property of having infinite reductions (here infinite chains of cut elimination steps) is preserved by non-erasing steps. This turns the quest for strong normalization into one for non-erasing weak normalization, and indeed we use this result to prove strong normalization of simply typed DiLL (with promotion). Along the way to the theorem we obtain a number of additional results of independent interest, such as a standardization theorem and a slightly modified system of nets, DiLLdeltarho.
The Λμ-calculus was built as an untyped extension of Parigot's λμ-calculus in order to recover the Böhm theorem, which was known to fail in λμ-calculus. An essential computational feature of Λμ-calculus for separation to hold is the unrestricted use of abstractions over continuations, which provides the calculus with a construction of streams. Based on the Curry-Howard paradigm, Laurent has defined a translation of λμ-calculus into polarized proof-nets. Unfortunately, this translation cannot be immediately extended to Λμ-calculus: the type system on which it is based freezes Λμ-calculus's stream mechanism. We introduce stream associative nets (SANE), a notion of nets lying between Laurent's polarized proof-nets and the usual linear logic proof-nets. SANE have two kinds of par (hence of tensor): one is linear, while the other allows free structural rules (as in polarized proof-nets). We prove confluence for SANE and give a reduction-preserving encoding of Λμ-calculus into SANE, based on a new type system introduced by the second author. It turns out that the stream mechanism at work in Λμ-calculus can be explained by the associativity of the two different kinds of par of SANE. Finally, we achieve a Böhm theorem for SANE. This result follows Girard's program to bring separation to the fore as a key property of logic.