List of attendees
Note: many of the links are dead, which simply means that I haven't yet received the material.
Technical presentations
Philip Wadler, Contracts and Unions: A Demonstration (Mon 2014-08-11, 08:30-09:10)
Contracts, introduced by Findler and Felleisen, are widely used to integrate dynamically and statically typed languages. It has long been known how to implement the conjunction of two contracts (fail if either fails) but not the disjunction of two contracts (fail only if both fail). I present code that implements disjunction, suggested by my MSc student, Jakub Zalewski.
See files Intro.hs and Conc.hs, which contain introductory and concluding remarks; file Lambda.hs, which contains the BNFC-meta syntax definition of a small dynamically typed language with contracts; and file ContractUnion.hs, which provides an interpreter for the language.
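For readers new to the problem, a minimal Haskell sketch of flat contracts (my illustration, not the code in ContractUnion.hs) shows why conjunction is straightforward and what disjunction must accomplish:

    -- A flat contract either passes a value through or signals blame.
    newtype Contract a = Contract { apply :: a -> Either String a }

    flat :: String -> (a -> Bool) -> Contract a
    flat name p =
      Contract (\x -> if p x then Right x else Left ("blame: " ++ name))

    -- Conjunction: fail if either contract fails.
    conj :: Contract a -> Contract a -> Contract a
    conj c d = Contract (\x -> apply c x >>= apply d)

    -- Disjunction of flat contracts: fail only if both fail.  The hard
    -- case, addressed in the talk, is higher-order contracts, where a
    -- disjunct's failure may surface only when the value is later applied.
    disj :: Contract a -> Contract a -> Contract a
    disj c d = Contract (\x -> either (const (apply d x)) Right (apply c x))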
Dan Licata, Cubical Type Theory (Mon 2014-08-11, 09:10-09:50)
In this talk, we describe some of the ingredients that might go into a cubical type theory, which is one approach to giving a computational interpretation of homotopy type theory.
See slides.
Robby Findler, A monadic library for reasoning about running times in Coq (Mon 2014-08-11, 09:50-10:30)
abstract
See slides.
Adam Chlipala, Lambda: The Ultimate Specification (program synthesis in Coq) (Mon 2014-08-11, 11:00-11:40)
Proof assistants can enable new styles of modularity in programming. Here I argue for separation of functionality and optimizations, where language-enforced encapsulation prevents the optimization part from breaking functional behavior. We have implemented this idea in Fiat, a Coq library for program synthesis by stepwise refinement. As an example, I demonstrate how Fiat can automatically synthesize efficient functional programs from SQL-style relational specifications plus choices of data structure.
See slides.
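To convey the flavor of separating functionality from optimization, here is a toy Haskell analogy (Fiat itself is a Coq library, and discharges the agreement obligation by refinement proofs rather than by testing):

    import qualified Data.Map as M

    -- Functionality: a declarative, SQL-style specification.
    type Person = (String, Int)            -- (name, age)

    specLookup :: String -> [Person] -> [Int]
    specLookup n people = [age | (name, age) <- people, name == n]

    -- Optimization: the same query against a chosen index structure.
    implLookup :: String -> M.Map String [Int] -> [Int]
    implLookup n index = M.findWithDefault [] n index

    -- The refinement obligation: building the index and querying it
    -- must agree with the specification.
    buildIndex :: [Person] -> M.Map String [Int]
    buildIndex = foldr (\(n, a) -> M.insertWith (++) n [a]) M.empty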
Yaron Minsky, Reading code as if it mattered (Mon 2014-08-11, 13:30-14:10)
This talk describes the design of Iron, a new system for managing the process of developing, reviewing, and releasing software. Iron runs on top of a traditional DVCS, in our case Mercurial, and manages a patch-oriented review process that precisely tracks what each person has read and still needs to read. Notably, Iron makes it easy to deal with merges, including conflicted merges.
In addition, Iron provides a hierarchical relationship between features that allows multiple release processes to coexist within the same codebase, and supports more complex workflows, such as concurrent review of features where one feature depends on another.
See blog posts on designing and scaling Iron.
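A toy model of the core bookkeeping (my sketch, not Iron's design): each reviewer's knowledge is the last revision they read, and what they still need to read is the diff from that revision to the tip.

    import qualified Data.Map as M

    type Rev = Int
    type Reviewer = String

    data Feature = Feature { tip :: Rev, lastRead :: M.Map Reviewer Rev }

    -- The diff endpoints a reviewer still needs to read, if any.
    needsToRead :: Reviewer -> Feature -> Maybe (Rev, Rev)
    needsToRead r f =
      case M.findWithDefault 0 r (lastRead f) of
        seen | seen == tip f -> Nothing
             | otherwise     -> Just (seen, tip f)

    -- Recording that a reviewer has caught up with the tip.
    markRead :: Reviewer -> Feature -> Feature
    markRead r f = f { lastRead = M.insert r (tip f) (lastRead f) }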
Matthias Felleisen, The network calculus (Mon 2014-08-11, 14:10-14:50)
Marketplace is a concurrent language able to express communication, enforce isolation, and manage resources. Network-inspired extensions to a functional core represent imperative actions as values, giving side effects locality and enabling composition of communicating processes.
Collaborating programs are grouped within task-specific virtual machines (VMs) to scope their interactions. Conversations between programs are multi-party (using a publish/subscribe medium), and programs can easily participate in many such conversations at once.
Marketplace makes presence notifications an integral part of pub/sub. Programs react to presence and absence notifications that report the comings and goings of their peers. Presence serves to communicate changes in demand for and supply of services, both within a VM and across nested VM layers. Programs can give up responsibility for maintaining presence information and for scoping group communications to their containing VM.
See project site.
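A toy vocabulary for this style of interaction (my sketch, not Marketplace's actual API):

    type Topic = String

    -- Programs exchange messages on topics and receive presence/absence
    -- notifications about peers on those topics.
    data Event
      = Message  Topic String   -- a published message
      | Presence Topic          -- a peer appeared on this topic
      | Absence  Topic          -- a peer disappeared
      deriving Show

    -- A process is a state machine over events that may publish in response.
    newtype Process s = Process { step :: s -> Event -> (s, [Event]) }

    -- A process that greets peers as they appear.
    greeter :: Process ()
    greeter = Process step'
      where
        step' s (Presence t) = (s, [Message t "hello"])
        step' s _            = (s, [])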
Norman Ramsey, Jumbo ML: designing a language for teaching CS2 (Mon 2014-08-11, 15:30-16:10)
Matt Might, Deletion from Okasaki's Red-Black Trees: A Functional Pearl (Mon 2014-08-11, 16:10-16:50)
Existing deletion algorithms for Okasaki's purely functional red-black trees (Kahrs, 2001) violate the simplicity and spirit with which Okasaki implemented insertion. I present a deletion algorithm that slightly modifies ordinary functional binary search tree deletion. The main idea is to add two transient colors that exist only during deletion: double black and negative black. A deletion may introduce a double black, which is then propagated toward the root via "bubbling." Bubbling introduces negative blacks. Adding two cases to Okasaki's original balance method rotates negative blacks away immediately.
See slides, draft paper, and blog post.
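The transient-color idea can be sketched in Haskell (simplified from the draft paper's presentation):

    -- Two transient colors exist only during deletion: double black (BB)
    -- and negative black (NB).
    data Color = NB | R | B | BB deriving (Eq, Show)
    data Tree a = E | EE | T Color (Tree a) a (Tree a)  -- EE: double-black leaf

    -- "Bubbling" moves a double black toward the root: the parent absorbs
    -- a unit of blackness from both children, possibly turning a red
    -- child negative black; two extra balance cases rotate NB away.
    blacker, redder :: Color -> Color
    blacker NB = R
    blacker R  = B
    blacker B  = BB
    blacker BB = error "cannot blacken double black"
    redder NB = error "cannot redden negative black"
    redder R  = NB
    redder B  = R
    redder BB = B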
Stephanie Weirich, Programming up to congruence, again (Tue 2014-08-12, 09:00-09:40)
Oleg Kiselyov, Lambda: the ultimate syntax-semantics interface (Tue 2014-08-12, 09:40-10:20)
Spreadsheets and Matlab are popular because they let domain experts write down a problem in familiar terms and quickly play with potential solutions. Natural-language semanticists have a better tool. It displays truth conditions, infers types, simplifies terms, and computes yields. Its modularity facilities make it easy to try fragments out, scale them up, and abstract encoding details out of semantic theories.
This tool is not just a niche experiment among semanticists, but a proven combination of techniques built on a mature, general-purpose programming language. It was created by functional programmers, who are largely unaware of its application to semantics, just as most semanticists are unaware of the tool. The talk briefly describes the hands-on course that taught this application to linguists and programmers simultaneously, so as to bridge the two communities. We worked our way from propositional logic and context-free grammars to dynamic and continuation-based treatments of quantification and anaphora.
This course on computational Montagovian semantics has been presented together with Chung-chieh Shan at several summer schools on logic, language and informatics.
See notes.
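To give the flavor, here is a toy fragment in the Montagovian style the course follows (my sketch, not the course notes): denotations are ordinary host-language values, and semantic composition is function application.

    -- A two-entity model; truth conditions are computed by evaluation.
    data Entity = John | Mary deriving (Eq, Show, Enum, Bounded)

    domain :: [Entity]
    domain = [minBound .. maxBound]

    run :: Entity -> Bool
    run = (== John)

    like :: Entity -> Entity -> Bool
    like o s = (s, o) == (John, Mary)      -- "John likes Mary" is true

    everyone, someone :: (Entity -> Bool) -> Bool
    everyone p = all p domain
    someone  p = any p domain

    -- "someone runs"        ~> someone run          = True
    -- "everyone likes Mary" ~> everyone (like Mary) = False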
Dimitrios Vytiniotis, Optimizing reconfigurable pipelines in ZIRIA (Tue 2014-08-12, 10:40-11:20)
Software-defined radio (SDR) brings the flexibility of software to the domain of wireless protocol design, promising both an ideal platform for research and innovation and the rapid deployment of new protocols on existing hardware. Implementing modern wireless protocols on existing SDR platforms often requires careful hand-tuning of low-level code, however, which can undermine many of the advantages of software.
I give an overview of ZIRIA, a high-level dataflow language and compiler for programming SDRs. The ZIRIA compiler produces performant implementations of wireless PHY code (e.g. 802.11a/g). In this talk I focus on how the compiler optimizes the widths of the data-processing pipelines.
See slides.
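A cartoon of the width optimization (my sketch, not ZIRIA's actual intermediate form): a stage that processes one item per step is widened to process a whole chunk per step, exposing vector parallelism.

    -- A stream transformer and pipeline composition.
    newtype Pipe i o = Pipe { runPipe :: [i] -> [o] }

    (>->) :: Pipe a b -> Pipe b c -> Pipe a c
    Pipe f >-> Pipe g = Pipe (g . f)

    -- A per-item stage ...
    perItem :: (i -> o) -> Pipe i o
    perItem f = Pipe (map f)

    -- ... widened to consume fixed-size chunks (n > 0 assumed).
    chunks :: Int -> [a] -> [[a]]
    chunks _ [] = []
    chunks n xs = take n xs : chunks n (drop n xs)

    perChunk :: Int -> ([i] -> [o]) -> Pipe i o
    perChunk n f = Pipe (concatMap f . chunks n)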
Steve Zdancewic, Type- and Example-Driven Program Synthesis (Tue 2014-08-12, 11:20-12:00)
In this talk I will describe some ongoing work in which we're attempting to synthesize recursive programs that process ML-style algebraic datatypes. Following previous work on inductive program synthesis, we draw on ideas from proof theory to enumerate well-typed programs that satisfy a given set of input/output examples. I'll demonstrate our prototype implementation of these ideas, which can successfully synthesize functions like 'fold' over lists from just a few examples.
This is joint work with Ph.D. student Peter-Michael Osera, and supported in part by the NSF ExCAPE project.
See slides.
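The enumerate-and-check core can be pictured in a few lines of Haskell (a toy illustration; the actual prototype prunes the search using types and proof-theoretic structure):

    import Data.List (find)

    -- A tiny hypothesis grammar over one integer variable.
    data Expr = X | Lit Int | Add Expr Expr deriving Show

    eval :: Expr -> Int -> Int
    eval X         x = x
    eval (Lit n)   _ = n
    eval (Add a b) x = eval a x + eval b x

    -- Enumerate expressions up to a size bound.
    exprs :: Int -> [Expr]
    exprs 0 = []
    exprs n = X : [Lit k | k <- [0 .. 2]]
                ++ [Add a b | a <- exprs (n - 1), b <- exprs (n - 1)]

    -- Keep the first candidate consistent with all input/output examples.
    synth :: [(Int, Int)] -> Maybe Expr
    synth examples =
      find (\e -> all (\(i, o) -> eval e i == o) examples) (exprs 3)

    -- synth [(1, 3), (2, 4)]  ==>  Just (Add X (Lit 2))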
Derek Dreyer, GPS: Navigating weak memory with ghosts, protocols, and separation (Tue 2014-08-12, 13:30-14:10)
Weak memory models formalize the inconsistent behaviors that one can expect to observe in multithreaded programs running on modern hardware. In so doing, however, they complicate the already-difficult task of reasoning about correctness of concurrent code. Worse, they render impotent the sophisticated formal methods that have been developed to tame concurrency, which almost universally assume a strong (i.e. sequentially consistent) memory model.
This paper introduces GPS, the first program logic to provide a full-fledged suite of modern verification techniques---including ghost state, protocols, and separation logic---for high-level, structured reasoning about weak memory. We demonstrate the effectiveness of GPS by applying it to challenging examples drawn from the Linux kernel as well as lock-free data structures. We also define the semantics of GPS and prove in Coq that it is sound with respect to the axiomatic C11 weak memory model.
Nate Foster, Deciding NetKAT Equivalence using Derivatives (Tue 2014-08-12, 14:10-14:50)
NetKAT is a new domain-specific language and logic for specifying and verifying network packet-processing functions. It consists of Kleene algebra with tests (KAT) augmented with specialized primitives for modifying and testing packet headers and encoding network topologies. Previous work developed the design of the NetKAT language and its standard semantics, proved the soundness and completeness of the logic, defined a PSPACE algorithm for deciding equivalence, and presented several practical applications.
This talk will present the coalgebraic theory of NetKAT, including a specialized version of the Brzozowski derivative, as well as a new efficient algorithm for deciding the equational theory using bisimulation. The coalgebraic structure admits an efficient sparse representation that results in a significant reduction in the size of the state space. We discuss the details of our implementation and optimizations that exploit NetKAT's equational axioms and coalgebraic structure to yield significantly improved performance. We present results from experiments demonstrating that our approach is competitive with state-of-the-art tools on several benchmarks.
See slides.
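For intuition, here is the classical Brzozowski derivative on regular expressions, which the talk generalizes from strings to NetKAT policies:

    data Re = Empty | Eps | Chr Char | Alt Re Re | Seq Re Re | Star Re

    -- Does the language of r contain the empty string?
    nullable :: Re -> Bool
    nullable Eps       = True
    nullable (Alt a b) = nullable a || nullable b
    nullable (Seq a b) = nullable a && nullable b
    nullable (Star _)  = True
    nullable _         = False

    -- The derivative of r with respect to c: the residual language
    -- after consuming c.
    deriv :: Char -> Re -> Re
    deriv _ Empty     = Empty
    deriv _ Eps       = Empty
    deriv c (Chr d)   = if c == d then Eps else Empty
    deriv c (Alt a b) = Alt (deriv c a) (deriv c b)
    deriv c (Seq a b)
      | nullable a    = Alt (Seq (deriv c a) b) (deriv c b)
      | otherwise     = Seq (deriv c a) b
    deriv c (Star a)  = Seq (deriv c a) (Star a)

    matches :: Re -> String -> Bool
    matches r = nullable . foldl (flip deriv) r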
John Reppy, Compiling NESL for GPUs (Tue 2014-08-12, 15:30-16:10)
GPUs provide supercomputer performance at commodity prices, but programming them requires targeting a complicated hardware model. Guy Blelloch’s NESL language was designed for programming nested-data-parallel algorithms on wide-vector SIMD architectures. As such, it is a reasonable candidate for a high-level programming language for GPUs. This talk describes some ongoing work to build a second-generation NESL-to-CUDA compiler.
See slides.
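The key NESL idea that makes this feasible (a sketch): nested vectors are flattened into flat data plus a segment descriptor, so a nested parallel map becomes a single wide operation.

    -- A nested vector as flat data plus segment lengths.
    data Seg a = Seg { segLens :: [Int], segData :: [a] } deriving Show

    toSeg :: [[a]] -> Seg a
    toSeg xss = Seg (map length xss) (concat xss)

    fromSeg :: Seg a -> [[a]]
    fromSeg (Seg [] _)      = []
    fromSeg (Seg (l:ls) xs) = take l xs : fromSeg (Seg ls (drop l xs))

    -- A nested map runs as one flat map over all elements, i.e. one
    -- wide SIMD/GPU operation instead of many small ones.
    segMap :: (a -> b) -> Seg a -> Seg b
    segMap f (Seg ls xs) = Seg ls (map f xs)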
Jack Dennis, Programming Models for Massively Parallel Computation (Tue 2014-08-12, 16:10-16:50)
Vendors of processing chips have found that placing multiple processing cores in a single silicon chip is a better way to employ chip area than continuing the quest for greater single thread performance. The prospect is that processing chips with hundreds of processing cores and beyond will become the standard parts of future computer systems. Users of these new chips face the problem of organizing the computations they wish to perform as collections of parallel activities that communicate with one another. For this, programmers often use MPI, the Message Passing Interface, which has become an accepted standard software library for writing high performance application codes. Within the MPI model, one can express an arbitrary collection of components connected by message passing links; however, the MPI library and the goal of achieving high processor utilization restrict use of MPI to computations that match the Bulk Synchronous Parallel (BSP) model.
A new programming model is needed for parallel computing, one that responds to the demand by modern applications for flexible resource management. A programming model defines the interface between application code and the runtime and hardware infrastructure that applies processing and memory resources to execute the computation. To provide flexible resource management, there must be a unit of task scheduling and a unit of memory allocation such that a task may be executed by any processor and access any data block. This requires unique identifiers for codelets (blocks of executable code) and data blocks. Such a set of unique identifiers forms a global virtual memory, which requires the system (runtime and hardware) to map unique identifiers to physical locations. To achieve full support for modular program construction, the system must also support creation and recovery of data blocks, which requires a mechanism for garbage collection.
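A toy rendering of the proposed model (my sketch): unique identifiers name codelets and data blocks, and the system maps identifiers to physical locations.

    import qualified Data.Map as M

    -- Unique identifiers name both codelets and data blocks; the system,
    -- not the program, maps identifiers to physical locations.
    newtype Uid = Uid Int deriving (Eq, Ord, Show)
    data Loc = OnChip Int | InDram Int deriving Show

    type GlobalVM = M.Map Uid Loc

    -- Any processor may run any task: a task is just a codelet identifier
    -- plus the identifiers of the blocks it accesses.
    data Task = Task { codelet :: Uid, inputs :: [Uid] }

    resolve :: GlobalVM -> Task -> Maybe [Loc]
    resolve vm t = mapM (`M.lookup` vm) (codelet t : inputs t)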
Eijiro Sumii, Normalizing Structured Graphs (Wed 2014-08-13, 13:30-14:10)
See slides.
Amal Ahmed, Fully abstract closure conversion in the presence of state and effects (Wed 2014-08-13, 14:10-14:50)
See slides.
Fritz Henglein, Soundness is not sufficient (Wed 2014-08-13, 15:30-16:10)
We propose a number of conceptual quality criteria of static analyses that go beyond soundness. We illustrate them in a decision problem setting and identify some open problems.
See slides.
John Hughes, Finding more bugs and good examples with QuickCheck (Thu 2014-08-14, 09:00-09:40)
This talk addresses two problems that arose in a large industrial testing project using QuickCheck. The first problem is that QuickCheck tends to report the same bug in every run: the "most likely bug" is usually found first, and even when QuickCheck generates a random test case that fails because of another bug, the shrinking search for a minimal failing example often encounters the most likely bug and thus results in the same minimal failing test. So QuickCheck can be used to find ONE bug in a system. When testing code for customers, it's desirable to create test reports that report MANY bugs. Hitherto this required manual work to either fix the first bug, so that a second can be found, or to UNFIX the specification to accept the first bug's behaviour. I explain an automated method to avoid retesting for the same bugs, which has reduced the time typically required to create a good test report from a week to a few hours.
The second problem is that customers rarely understand QuickCheck specifications. One way to explain a specification is via salient examples, and I present work in progress to characterize what "salient" means, along with some initial results in which we are able to generate quite a good set of examples automatically. Interestingly, this problem is NOT the same as generating a good test suite; it seems to be a new and important problem which has hitherto not been identified.
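The "UNFIX the specification" move from the first problem can be pictured as a precondition that masks a known bug's signature (a toy rendering in Haskell; the automated method in the talk is more sophisticated):

    import Data.List (nub, sort)
    import Test.QuickCheck

    -- A deliberately buggy insertion sort: it drops duplicates.
    mySort :: [Int] -> [Int]
    mySort = foldr ins []
      where ins x ys = takeWhile (< x) ys ++ [x] ++ dropWhile (<= x) ys

    -- Signature of the already-reported bug: inputs with duplicates.
    bug1 :: [Int] -> Bool
    bug1 xs = length xs /= length (nub xs)

    -- Masking bug1 lets QuickCheck hunt for the NEXT bug instead of
    -- rediscovering the same minimal counterexample every run.
    prop_sort :: [Int] -> Property
    prop_sort xs = not (bug1 xs) ==> mySort xs == sort xs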
David MacQueen, Early history of ML (Thu 2014-08-14, 09:40-10:20)
A discussion of the early history of ML (1973-1989), covering LCF/ML, Cardelli's VAX ML, Edinburgh ML, and the design of Standard ML (SML '90).
Arvind, Processor Specification without a Memory Model (Thu 2014-08-14, 11:00-11:40)
For a modular proof of correctness of an implementation of a memory model, one needs to specify the processor correctness criteria independently of the memory system. We show how this can be accomplished in implementing Sequential Consistency in shared-memory systems with highly speculative out-of-order processors and memory systems with a hierarchy of caches.
Joint work with Murali Vijayaraghavan and Adam Chlipala.
See slides.
Conal Elliott, From Haskell to Hardware via CCCs (Thu 2014-08-14, 11:40-12:20)
The talk describes a prototype whole-program compiler from Haskell to low-level hardware descriptions (currently in Verilog). The compiler works by monomorphizing, miscellaneous other transformations, and conversion to the vocabulary of cartesian closed categories, as captured in a small collection of Haskell type classes. One instance of those classes provides an interpretation as parallel circuits.
See slides.
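The CCC vocabulary can be rendered as type classes roughly like these (my simplification, not the compiler's actual classes):

    -- Lambda terms compile to this vocabulary; each instance of the
    -- classes gives an interpretation (functions, circuits, ...).
    class Cat k where
      idC   :: k a a
      compC :: k b c -> k a b -> k a c

    class Cat k => Cartesian k where
      exl  :: k (a, b) a
      exr  :: k (a, b) b
      fork :: k a b -> k a c -> k a (b, c)

    class Cartesian k => Closed k where
      applyC :: k (a -> b, a) b
      curryC :: k (a, b) c -> k a (b -> c)

    -- The function instance recovers ordinary evaluation; a circuit
    -- instance would instead emit hardware descriptions.
    instance Cat (->) where
      idC   = id
      compC = (.)

    instance Cartesian (->) where
      exl = fst
      exr = snd
      fork f g x = (f x, g x)

    instance Closed (->) where
      applyC (f, x) = f x
      curryC = curry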
Mark Jones, Optimization using a Monadic Intermediate Language (Thu 2014-08-14, 13:30-14:10)
abstract
See slides.
Lennart Augustsson, A report from the Trenches, again (Thu 2014-08-14, 14:10-14:50)
abstract
See slides.
Benjamin Pierce, The Spider Calculus (Fri 2014-08-15, 09:00-09:40)
We explore a new class of process calculi, collectively called spider calculi, in which processes inhabit the nodes of a directed graph, evolving and communicating by local structural mutations. We identify a kernel spider calculus that is both minimal and expressive. Processes in this kernel calculus can construct arbitrary finite graphs, encode common data structures, and implement the communication primitives of the π-calculus.
See slides.