Contents of this page: Administrative matters | Technical presentations. (Last updated 2nd May 2003.)
The chair had tried to identify such trends for WG2.1, but had not been able to distill a small number of trends from the group's diverse interests. It is also not yet clear whether Haeberer's successor as TC2 chair will continue with this initiative. Therefore little progress has been made.
WG2.8 chair John Hughes has proposed an alternative for that group: members identified the best presentations from the last few years, and the presenters will be invited to write short (six-page) papers about those topics. WG2.8 has identified seven such presentations. We discussed the possibility of doing something similar in WG2.1.
There was some discussion about whether six pages is long enough for an interesting and useful paper. Opinion was divided on the relative benefits of short surveys and longer, more detailed explanations, given the stated aim of bringing the TC2 working groups together.
It was decided that the group would come to the next meeting prepared to discuss best presentations. Members should review the minutes of the last few years' meetings and select their favorite presentations; a decision on which to select will be made at the next meeting. (To this end, the secretary will send copies of the minutes to members.) By that time it will hopefully be clear whether the TC2 initiative will continue.
The first issue of Higher Order and Symbolic Computation will appear very soon. The second issue is still in the making. He strongly encourages other members of the group to contribute to the book and to send their contributions, which can be either scientific or personal, to him. The deadline, not yet firm, is the end of June this year.
We will return to sending formal invitations. This will also allow the chair to send attendance reminders to members and observers who have missed several meetings.
The conclusion was that maintaining the document is not a lot of work, and it is more convenient for new observers to browse through than dozens of different web pages. Therefore the profile will be retained.
We were welcomed into a relaxed haven from the bustling metropolis. There we commemorated the reign of head ( ([QE split succ]) 1 ), and were awed by a live demonstration of time-tested tools. It was established that they can produce harmonious results in the hands of an expert, whereas when wielded by novices the output leaves considerable room for refinement.
Surrounded by automata, the group found itself in a strangely familiar environment conducive to the exploration of various avenues of thought.
Let R be a regular language. For strings s and t, define s → t iff there exist x, y, z, r such that xyz = s, xrz = t, and r ∈ R. Show that { t : 0 →* t } is regular. (This problem description is repeated on the first page of Cohen's slides.)
No summary received yet. (The slides are available.)
Control systems and reactive systems should be studied in a common framework. Both kinds of systems essentially involve interacting dynamics, that is, games. The apparent difference between discrete and continuous time proves misleading, as is shown by fundamental similarities between continuous- and discrete-time dynamics and games. The slides are available; see also working document 837 FAV-16.
No summary received yet. However, a handwritten summary is available as a working document.
While there are many special ways to specify analysis of complex data, relational rules simplify and generalize them. Such rules are used explicitly or implicitly in business processes, biological analysis, analysis of computer programs, and many other applications. Programming each kind of analysis by hand is a daunting recurring task, but doing the analysis using a general evaluator program on any particular set of rules incurs a significant overhead and does not provide good performance guarantees. We have developed a method for automatically generating efficient specialized programs for analysis problems specified using Datalog, a general logic database query language based on relational rules. The method provides precise guarantees for both the running time and the space consumption of the generated analysis programs.

The result is based on a general transformational method that makes computation proceed in an iterative and incremental fashion, analogous to integration by differentiation in calculus. The method also exploits sophisticated structures for storing and accessing complex data. Both the time and space complexities of the generated programs are optimal in a precisely defined sense. We apply the method to a number of analysis problems, some with improved time complexities and all with greatly improved algorithm understanding and greatly simplified complexity analysis.

This is joint work with Scott Stoller. See working document 835 FAV-14.
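As a toy illustration of rule-based analysis (my own sketch, not the generated code or the actual method from the talk), the following Haskell fragment performs semi-naive evaluation of the single Datalog rule path(x,z) :- edge(x,y), path(y,z), joining each round only against the newly derived facts:

    import qualified Data.Set as Set
    import Data.Set (Set)

    type Rel a = Set (a, a)

    -- One semi-naive round for  path(x,z) :- edge(x,y), path(y,z):
    -- join the edge relation with the newly derived path facts only.
    step :: Ord a => Rel a -> Rel a -> Rel a
    step edges delta =
      Set.fromList [ (x, z) | (x, y)  <- Set.toList edges
                            , (y', z) <- Set.toList delta
                            , y == y' ]

    -- Iterate to a fixed point, feeding back only genuinely new facts.
    transClosure :: Ord a => Rel a -> Rel a
    transClosure edges = go edges edges
      where
        go acc delta
          | Set.null new = acc
          | otherwise    = go (Set.union acc new) new
          where new = step edges delta `Set.difference` acc

For example, transClosure (Set.fromList [(1,2),(2,3)]) yields {(1,2),(1,3),(2,3)}. Feeding back only the deltas is the simplest instance of the iterative, incremental computation mentioned above.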
I gave an overview of the program synthesis and the verification and validation work in the Automated Software Engineering group at NASA Ames Research Center. This included a description of a system for synthesis of Kalman filter state-estimation code, certified synthesis of integrated Kalman filter code and Stateflow diagrams, work on runtime analysis, and efforts to infuse this technology into NASA missions.
No summary received yet.
A long time ago, we created a set of conceptually simple but challenging problems, and for a sequence of meetings a large part of each meeting was devoted to presenting diverse approaches to these problems, thereby highlighting strengths and weaknesses of various methods and formalisms. I propose to create a new repository of challenge problems that can serve as a source of inspiration for the Group. (The slides are available.)
No summary received yet. (The slides are available.)
Roland Backhouse and I have recently had a research proposal accepted for a project called Datatype-Generic Programming, which is to develop a novel mechanism for parametrization based on higher-order (but not quite parametric) polymorphism. The project has yet to start, but in this talk I described our intentions and expectations. (The slides are available.)
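To give the flavour of datatype-generic parametrization (an illustrative Haskell sketch of the general idea, not the project's actual mechanism), a program can be parametrized by the shape functor of a datatype, so that one fold serves all such datatypes at once:

    -- A datatype is the fixed point of its shape functor ...
    newtype Fix f = In (f (Fix f))

    -- ... and fold is defined once, for every such datatype.
    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata phi (In x) = phi (fmap (cata phi) x)

    -- Example shape: cons-lists of Ints.
    data ListF a r = Nil | Cons a r

    instance Functor (ListF a) where
      fmap _ Nil        = Nil
      fmap f (Cons a r) = Cons a (f r)

    -- Summing a list is just one algebra handed to the generic fold.
    total :: Fix (ListF Int) -> Int
    total = cata alg
      where alg Nil        = 0
            alg (Cons n s) = n + s

Here the parameter is the functor f itself rather than a type, illustrating the higher-order style of parametrization meant above.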
There is a well-known "folklore" functional program for the Sieve of Eratosthenes. I presented a calculational derivation, based on the universal property of streams. (The slides are available.)
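For reference, the folklore program in one common rendering (the exact form used in the talk may differ):

    -- The folklore sieve: output the head, strike out its multiples,
    -- and recurse on what remains of the stream.
    primes :: [Integer]
    primes = sieve [2..]
      where
        sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

For instance, take 6 primes evaluates to [2,3,5,7,11,13].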
A Discrete Event System (DES) is a dynamic system whose evolution is governed by the instantaneous occurrence of physical events. DESs arise in many areas such as robotics, manufacturing, communication networks, and transportation. They are often modelled by languages or automata over an alphabet of symbols denoting the events. In 1987, Ramadge and Wonham initiated a very successful approach to the control of DESs, which was subsequently extended by themselves and others.

I have presented a new view of the concept of controllability in control theory by adopting a formalization based on Kleene Algebra (KA). This formalization incorporates additional parameters, which makes it possible to state and solve new optimization problems. For instance, one can not only find the largest controllable sublanguage of a given language acting as the specification (a classical problem), but also the largest DES smaller than a given DES for which a given language (the specification) is controllable. The new results hold for models other than languages, namely path algebras and algebras of relations. Finally, because the results are proved in a calculational style based on an axiomatization of KAs, they are easier to check than those of standard presentations.
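For orientation, the classical Ramadge-Wonham condition, stated here in its standard language-theoretic form rather than in KA: with the event alphabet partitioned into controllable events Σc and uncontrollable events Σu, a specification K ⊆ L is controllable with respect to the plant language L iff

    pre(K) Σu ∩ pre(L) ⊆ pre(K)

where pre(·) denotes prefix closure. The classical synthesis problem mentioned above is to compute the largest sublanguage of K satisfying this condition.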
See www.nicta.com.au.
Probabilistic programming refers to programming in which the probabilities of the values of variables are of interest. For example, if we know the probability distribution from which the inputs are drawn, we may calculate the probability distribution of the outputs. We may introduce a programming notation whose result is known only probabilistically. A formalism for probabilistic programming was introduced by Kozen [1981], and further developed by Morgan, McIver, Seidel and Sanders [1996]. Their work is based on the predicate transformer semantics of programs; it generalizes the idea of a predicate transformer from a function that produces a boolean result to a function that produces a probability result. The work of Morgan et al. is particularly concerned with the interaction between probabilistic choice and nondeterministic choice, which is required for refinement.

The term predicative programming has occasionally been used [Hehner 1984, Hoare 1985] to describe programming according to a first-order semantics, or relational semantics, although it has also been used [Bibel 1975] for another purpose. The purpose of this paper is to show how probabilistic reasoning can be applied to the predicative style of programming. (See working document 827 FAV-6.)
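For orientation, in the expectation-transformer reading of Kozen and of Morgan et al. (sketched here in standard form; the paper's own predicative formulation differs), predicates generalize to expectations valued in [0, 1], and for a post-expectation f the probabilistic and demonic choices satisfy

    wp(S1 p⊕ S2).f = p · wp(S1).f + (1 − p) · wp(S2).f
    wp(S1 ⊓ S2).f  = min(wp(S1).f, wp(S2).f)

so that nondeterministic choice is resolved demonically, by the minimum of the two expectations.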
Planware II automatically generates scheduling code from a high-level model of a complex resource problem. Resources are modeled using a hierarchical state machine formalism that includes constraints on states and transitions. The formalism also supports provided and offered services per state. The generator analyzes the models to instantiate schemas for backtrack search and constraint propagation algorithms, and then performs optimizations and datatype refinements. The IDE for Planware II is based on Sun's NetBeans environment. This is joint work with Marcel Becker and Limei Gilham. (The slides are available.)
Evolving Specifications provide a mechanizable framework for specifying, developing, and reasoning about complex systems. The framework combines features from algebraic specifications, abstract state machines, and the refinement calculus, all couched in a categorical setting. The key ideas are to treat abstract states as specifications and abstract transitions as morphisms running in the inverse direction. On this foundation we build a category of behavioral specifications, with morphisms for refinement and colimits for composition. The framework, including a generator of C code, is partially implemented in the Epoxi extension of the Specware system. This is joint work with Dusko Pavlovic. The slides are available; see also working documents 838 FAV-17 and 839 FAV-18.
Doug Smith and I are working (with others from Kestrel) on the application of Specware concepts to imperative programming. Doug presented a solution to the garbage-collection problem using their especs (which are based on Abstract State Machines), while I base my solution on monads. It was interesting to see the two approaches side by side. (The slides are available.)
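As a hint of the monadic side (a minimal sketch of my own, with an invented toy heap; not the solution that was presented), mutable storage can be threaded through a state monad:

    import Control.Monad.State
    import qualified Data.Map as Map

    -- A toy heap: addresses mapped to cell contents (illustrative only).
    type Addr = Int
    type Heap = Map.Map Addr String

    -- Allocation and deallocation as state-transforming actions.
    alloc :: String -> State Heap Addr
    alloc v = do
      h <- get
      let a = if Map.null h then 0 else 1 + fst (Map.findMax h)
      put (Map.insert a v h)
      return a

    free :: Addr -> State Heap ()
    free a = modify (Map.delete a)

Garbage collection would then be expressed as one more Heap -> Heap transformation applied with modify.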
Reductions that aggregate fine-grained transitions into coarser transitions can significantly reduce the cost of automated verification, by reducing the size of the state space. We propose a reduction that can exploit common synchronization disciplines, such as the use of mutual exclusion for accesses to shared data structures. Exploiting them using traditional reduction theorems requires checking that the discipline is followed in the original (i.e., unreduced) system. That check can be prohibitively expensive. This paper presents a reduction that instead requires checking whether the discipline is followed in the reduced system. This check may be much cheaper, because the reachable state space is smaller. This is joint work with Ernie Cohen. See working document 836 FAV-15.
No summary received yet. (The slides are available.)
At Meeting 56, I gave necessary and sufficient conditions for when a partial function can be written as a fold or an unfold. I had only ugly set-theoretic proofs of the results. I now have elegant relational proofs of these and some related results (about injections, the duals of partial functions), on which I reported. (The slides are available.)
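For reference, the standard list instances of fold and unfold (definitions only; the conditions themselves are in the slides):

    -- fold consumes a list ...
    fold :: (a -> b -> b) -> b -> [a] -> b
    fold f e []     = e
    fold f e (x:xs) = f x (fold f e xs)

    -- ... while unfold produces one from a seed, stopping when the
    -- coalgebra yields Nothing (which is where partiality enters).
    unfold :: (b -> Maybe (a, b)) -> b -> [a]
    unfold g b = case g b of
      Nothing      -> []
      Just (a, b') -> a : unfold g b'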
Objects provide better encapsulation of data, separating "what" from "how". "What" can be done on data can be classified as queries and updates: queries compute information about the data, and updates change the data. "How" to implement queries and updates can vary significantly. Straightforward implementations are clearer and more modular but can have poor performance, because queries are often repeated and many can easily be expensive, whereas updates can be frequent and are usually small. Sophisticated implementations can have good performance but are less clear and modular, because the results of expensive queries must be stored, and updates to the data must also maintain the query results incrementally.

To produce better OO programs, it is essential to provide systematic methods and automated support for generating such sophisticated implementations. This talk describes how analysis of queries and updates can be used for computing queries incrementally with respect to updates. We describe three kinds of analyses: for recognizing expensive queries, determining primitive updates, and ensuring performance improvement, respectively. In particular, we describe how pointer analysis can be used in these analyses and optimizations.

This is joint work with Scott Stoller and Tom Rothamel. (The slides are available.)
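As a toy illustration of the query/update pattern (my own invented Haskell example, not from the talk's OO setting): a collection caches the result of an expensive query, here a running total, and each update maintains the cache incrementally:

    -- A bag of numbers that caches an expensive query (the total).
    data Bag = Bag { elems :: [Int], cachedTotal :: Int }

    empty :: Bag
    empty = Bag [] 0

    -- Update: O(1) extra work keeps the cached result consistent.
    insert :: Int -> Bag -> Bag
    insert x (Bag xs t) = Bag (x:xs) (t + x)

    -- Query: answered from the cache in O(1) instead of O(n).
    total :: Bag -> Int
    total = cachedTotal

The analyses described above decide mechanically when such a cached result pays off and which updates must maintain it.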
No summary received yet.