Contents of this page: Local information | Proposed topics for discussion | Proposed talks | Bob Paige's retrospective | Einstein and Caputh | Photos from the meeting
I propose to devote a session to discussing what subjects might serve to provide a common focus for the Group, and the conditions under which we may hope to engage individual members in this work.
A trie is a search tree scheme that employs the structure of search keys to organize information. Tries were originally devised as a means of representing a collection of records indexed by strings over a fixed alphabet. Based on work by C.P. Wadsworth and others, R.H. Connelly and F.L. Morris generalized the concept to permit indexing by elements of an arbitrary monomorphic datatype. In this talk we go one step further and define tries and operations on tries generically for arbitrary first-order polymorphic datatypes. The derivation is based on techniques recently developed in the context of polytypic programming. It is well known that implementing generalized tries requires nested datatypes and polymorphic recursion. Implementing tries for polymorphic datatypes places even greater demands on the type system: it requires rank-2 type signatures and higher-order polymorphic nested datatypes. Despite these requirements, the definition of generalized tries for polymorphic datatypes is surprisingly simple, which is largely due to the framework of polytypic programming. See the slides (one per page) or thumbnails (four per page).
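For readers unfamiliar with generalized tries, the following minimal Haskell sketch (ours, not taken from the talk, which treats polymorphic datatypes generically) shows a trie for one monomorphic key type; note the nested datatype and the polymorphic recursion in the lookup function, exactly the features the abstract mentions.

    -- Keys: external binary trees.
    data Tree = Leaf | Node Tree Tree

    -- The trie indexed by Tree is a nested datatype: the subtrie for
    -- Node-shaped keys maps the left subtree to a trie that in turn
    -- maps the right subtree.
    data MapTree v = NullTree
                   | MapTree (Maybe v) (MapTree (MapTree v))

    lookupTree :: Tree -> MapTree v -> Maybe v
    lookupTree _            NullTree      = Nothing
    lookupTree Leaf         (MapTree l _) = l
    lookupTree (Node t1 t2) (MapTree _ m) =
      lookupTree t1 m >>= lookupTree t2
      -- the recursive call on t1 is at type MapTree (MapTree v):
      -- polymorphic recursion, hence the explicit type signature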
Several generic programs for converting values from regular datatypes to some other format, together with their corresponding inverses, are constructed. The formats considered are shape+content, pretty-printed strings, compact bit streams, and a database format. The data conversion programs are constructed along with a proof that printing (from a regular datatype to another format) followed by parsing (from that format back to the regular datatype) is the identity. The printers and parsers are described in PolyP, a polytypic extension of the functional language Haskell. In this talk I will concentrate on the polytypic shape+content data conversion program.
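To make the shape+content format concrete, here is a hand-written Haskell instance for one regular datatype (a sketch of ours; PolyP derives such conversion programs generically, and the names separate and combine are invented):

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- shape+content: the shape is the tree with its elements erased,
    -- the content is the in-order list of elements
    separate :: Tree a -> (Tree (), [a])
    separate Leaf         = (Leaf, [])
    separate (Node l x r) = (Node l' () r', xs ++ [x] ++ ys)
      where (l', xs) = separate l
            (r', ys) = separate r

    -- the inverse: re-attach a content list to a shape; it succeeds exactly
    -- when the list has the right length, so  uncurry combine . separate == Just
    combine :: Tree () -> [a] -> Maybe (Tree a)
    combine shape xs =
      case go shape xs of
        Just (t, []) -> Just t
        _            -> Nothing
      where
        go Leaf         ys = Just (Leaf, ys)
        go (Node l _ r) ys = do
          (l', y:ys') <- go l ys
          (r', ys'')  <- go r ys'
          Just (Node l' y r', ys'')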
The main goal of this paper is to provide a common foundation for the theories of correctness of program transformations for a large variety of programming languages. (Joint work with Maurizio Proietti.) We consider the notion of ``rule set'' and the notion of ``inductive set'' which is defined by a rule set.
We also consider a class of transformations of rule sets, called ``rule replacements'', which replace an old rule set by a new rule set. These replacements can be viewed as generalizations of the most commonly used transformations, such as folding and unfolding.
We study two methods for proving the correctness of rule replacements, that is, for showing that the old rule set and the new rule set define the same inductive set. These methods are: (i) the Unique Fixpoint method, based on the well-foundedness property of the new rule set, and (ii) the Improvement method, based on the fact that the premises of the old rule set are replaced by premises which have ``smaller'' proofs w.r.t. a suitable well-founded relation. Our Unique Fixpoint and Improvement methods generalize many methods described in the literature which deal with transformation rules for functional and logic programming languages. Our methods also have the advantages of: (i) being parametric w.r.t. the well-founded relation which is actually used, and (ii) being applicable to rules with finite or infinite sets of premises.
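As a toy illustration of a rule replacement (our example, not taken from the paper), consider unfolding the premise of the rule defining the inductive set E of even natural numbers:

    \[
    \begin{array}{ll}
    \text{old rule set:} &
      \frac{}{\,0 \in E\,}
      \qquad
      \frac{n \in E}{\,n+2 \in E\,}
    \\[3ex]
    \text{new rule set (premise of the second rule unfolded):} &
      \frac{}{\,0 \in E\,}
      \qquad
      \frac{}{\,2 \in E\,}
      \qquad
      \frac{n \in E}{\,n+4 \in E\,}
    \end{array}
    \]

Both rule sets define the same inductive set, namely the even natural numbers; a correctness proof for the replacement must establish precisely this.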
Mescal is a system -- not yet in existence -- for mechanical support in constructing and exploring formal theories, with an emphasis on calculational theories for software development. One way -- perhaps the best way -- to think about it is that Mescal is a WYSIWYG document editor for writing research papers, but one that is prepared to handle not only ``normal'' text, pictures, etcetera, but also formal terms from multiple formalisms, which may be mixed freely with the non-formal text. Such terms can, for example, stand for proofs (which may be dispatched to a proof checker), or programs (which may be dispatched to a syntax checker, or be run) -- all without the user ever leaving the editor. A proof might be a correctness proof for a program, and then not only can it be verified that the proof is valid, but also that it is a proof for that program. See the slides (view in Landscape mode). In the first part of the talk I will clarify the functional requirements for Mescal. Then I will sketch the architecture of a kernel system that makes this realizable.
In the field of reactive system programming, dataflow synchronous languages like Lustre or Signal offer a syntax similar to block diagrams and can be compiled efficiently into C code, for instance. Designing a system that clearly exhibits several ``independent'' running modes is not difficult, since the mode structure can be encoded explicitly with the available dataflow constructs. However, the mode structure is no longer readable in the resulting program; modifying it is error-prone, and it cannot be used to improve the quality of the generated code. We propose to introduce a special construct devoted to the expression of a mode structure in a reactive system. We call it a mode-automaton, for it is basically an automaton whose states are labeled by dataflow programs. We also propose a set of operations that allow the composition of several mode-automata (the parallel and hierarchic compositions taken from Argos), and we study the properties of our model, such as the existence of a congruence of mode-automata, as well as implementation issues. (Joint work with Yann Rémond.)
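The following Haskell fragment is a rough sketch of the idea (our own rendering, not the authors' Lustre/Argos-based notation): each mode carries a dataflow step function, and guarded transitions select the mode for the next reaction.

    type Step i s o = i -> s -> (o, s)        -- one synchronous reaction

    data Mode m i s o = Mode
      { equations   :: Step i s o             -- dataflow program labelling the mode
      , transitions :: [(i -> o -> Bool, m)]  -- guarded transitions to other modes
      }

    -- One reaction: run the current mode's equations, then change mode if a
    -- guard fires (the change takes effect at the next step).
    step :: Eq m => [(m, Mode m i s o)] -> (m, s) -> i -> (o, (m, s))
    step modes (m, s) i =
      let Just mode = lookup m modes          -- assumes every mode is in the table
          (o, s')   = equations mode i s
          m'        = case [t | (g, t) <- transitions mode, g i o] of
                        t:_ -> t
                        []  -> m
      in (o, (m', s'))

    -- Example: a counter that counts up to 10, then down to 0, repeatedly.
    data M = Up | Down deriving Eq

    counter :: [(M, Mode M () Int Int)]
    counter =
      [ (Up,   Mode (\_ n -> (n, n + 1)) [(\_ o -> o >= 10, Down)])
      , (Down, Mode (\_ n -> (n, n - 1)) [(\_ o -> o <= 0,  Up)]) ]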
We use elements of Kleene algebras as abstractions of the input-output semantics of nondeterministic programs. In the concrete Kleene algebra of homogeneous binary relations, the operators of union and composition have been used for many years to define the so-called angelic semantics, which assumes that a program goes right when there is a possibility to go right. On the other hand, the corresponding demonic operators do the opposite: if there is a possibility to go wrong, a program whose semantics is given by these operators goes wrong. While there is no systematic way to calculate the relational abstraction of a while loop directly from the definition, it is possible to check the correctness of any candidate abstraction.
For functional programs, Harlan Mills has described a checking method known as the while statement verification rule. We generalize this rule to nondeterministic loops. The generalization is based on a refinement ordering that induces a semilattice whose join operator is known as demonic join, together with a composition operator known as demonic composition. In this setting, the semantics of a while loop is given as a fixed point of a monotonic function involving the demonic operators.
While this research programme was originally carried out by J. Desharnais and F. Tchier in the framework of binary homogeneous relations, we show in the present talk that all the results can be generalized to the setting of typed Kleene algebras introduced by B. Möller. In particular, the operator of forming the converse of an element is not needed.
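As a reminder, in the concrete model of binary relations (with L the universal relation) the demonic operators mentioned above are commonly defined as follows; the talk works with their abstract counterparts in typed Kleene algebra. Demonic join restricts the union to the common domain, and demonic composition is defined only at those states all of whose Q-successors lie in the domain of R:

    \[
      Q \sqcup R \;=\; (Q \cup R) \,\cap\, Q;L \,\cap\, R;L
    \]
    \[
      Q \mathbin{\Box} R \;=\; Q;R \,\cap\, \overline{Q;\overline{R;L}}
    \]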
The language AML is used to specify the semantics of architecture description languages (ADLs). It is a very primitive language, having declarations for only three constructs: elements, kinds, and relationships. Each of these constructs may be constrained via predicates in temporal logic. The essence of AML is the ability to specify structure and to constrain the dynamic evolution of that structure. Dynamic evolution concerns arise with considerable variation in time scale. One may wish to constrain how a system can evolve over its development lifecycle; laws such as those Minsky proposes, or constraints as in Monroe's Armani system, address such evolution concerns. Another approach involves limiting a system's construction primitives to those from appropriate styles, as in Wright and UniCon, or as embodied by the choice to use C2. One may wish to constrain which implementations are appropriate; concerns for interface compatibility, such as those evidenced in SADL, are then germane. And finally, one may want to constrain the ability of the architecture to be modified as it is running; languages such as Rapide and Darwin emphasize these issues. AML attempts to provide specification constructs that can express all of these constraints without committing to the time scale at which they will be enforced.
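Purely as a hypothetical illustration of the three constructs named above (this is not AML's syntax, and all names below are invented), one could model an AML specification in Haskell along these lines:

    -- Hypothetical model of AML's three constructs: elements, kinds, and
    -- relationships, each of which may carry temporal-logic constraints
    -- over the evolving structure (configurations of type cfg).
    type Name = String

    data Constraint cfg                 -- temporal-logic predicates, kept abstract
      = Always     (cfg -> Bool)
      | Eventually (cfg -> Bool)

    data Decl cfg
      = Element      Name Name      [Constraint cfg]   -- element name and its kind
      | Kind         Name           [Constraint cfg]
      | Relationship Name Name Name [Constraint cfg]   -- named relation between two elements

    type Spec cfg = [Decl cfg]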