List of attendees
Technical presentations |
Simon Marlow, The Haxl Project at Facebook. (Mon 2013-10-14, 09:05-09:45)
Writing code in an environment as large and complex as Facebook involves talking to many different back-end services, including databases, search engines, caches, and so forth. Each of these data sources has different characteristics and requires different access patterns to make best use of its resources. Typical front-end code will need to access many of these resources concurrently, but the programmer doesn't want to be concerned with the details of how best to access each resource. Moreover, we want to be able to write our code in a modular way and yet have concurrent accesses to the back-end services automatically batched and overlapped for us.
This talk describes a system that we're building in Haskell that allows the front-end programmer writing business logic to access all the back-end services in a concise and consistent way, while the system handles batching and overlapping of requests to multiple data sources behind the scenes. The programming model is one of implicit concurrency: there is no fork or async operation; all external data access is implicitly performed in parallel where possible. See slides.
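The essence of the approach fits in a few lines. The following is a minimal sketch, not Facebook's actual API: Request, GetUser, and GetScore are made-up placeholders, and the delivery of fetched results (handled with mutable result slots in the real system) is elided. The point is the final case of <*>, where two blocked computations merge their pending requests into one batch.

    {-# LANGUAGE GADTs #-}

    -- Hypothetical requests to back-end services, indexed by result type.
    data Request a where
      GetUser  :: Int -> Request String
      GetScore :: Int -> Request Int

    data SomeRequest where
      Some :: Request a -> SomeRequest

    -- A computation is either finished, or blocked on a batch of pending
    -- requests together with the rest of the computation.
    data Fetch a
      = Done a
      | Blocked [SomeRequest] (Fetch a)

    instance Functor Fetch where
      fmap f (Done a)       = Done (f a)
      fmap f (Blocked bs k) = Blocked bs (fmap f k)

    instance Applicative Fetch where
      pure = Done
      Done f       <*> x              = fmap f x
      Blocked bs k <*> Done a         = Blocked bs (k <*> Done a)
      -- The crucial case: two independently blocked computations combine
      -- their pending requests, so the data sources see one batched round
      -- trip instead of two sequential ones.
      Blocked bs k <*> Blocked bs' k' = Blocked (bs ++ bs') (k <*> k')

Code written with Applicative combinators thus gets batching for free; sequential dependencies (via a Monad instance, omitted here) still cost one round trip each.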
Stephanie Weirich, Programming Up-to Congruence. (Mon 2013-10-14, 09:45-10:30)
When dependently typed languages include potentially non-terminating computation in types, the usual "normalize and compare" algorithm for type checking is unavailable. To prevent the type checker from diverging, we must explicitly mark which expressions should be reduced in types. To keep these annotations from becoming unwieldy, this talk proposes using congruence closure as part of type inference. The talk develops a declarative specification of a type system that uses congruence closure, and discusses the important challenges that must be resolved to show the completeness of the algorithm with respect to this specification. See slides.
Ulf Norell, Fun with Data Kinds. (Mon 2013-10-14, 11:10-11:45)
GHC is steadily moving towards being more and more dependently typed. The recent data kinds extension lets you lift simple datatypes to the kind level, giving you a much nicer type system for type-level programs. In this talk I play with these facilities, implementing a well-typed representation of simply typed lambda terms with polymorphic constants. I then write QuickCheck generators for generating random well-typed terms, requiring among other things well-typed unification. Mostly things work out nicely, but in some places the lack of explicit type application gets a little bit annoying. See code.
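As a flavour of the technique (my own minimal sketch, not the code from the talk): DataKinds promotes a datatype of object-language types to the kind level, and a GADT of terms indexed by context and type then makes ill-typed terms unrepresentable.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

    -- Object-language types, promoted to the kind level by DataKinds.
    data Ty = Nat | Ty :=> Ty

    -- Typed de Bruijn indices: proofs that a type occurs in a context.
    data Ix :: [Ty] -> Ty -> * where
      Zero :: Ix (t ': ctx) t
      Suc  :: Ix ctx t -> Ix (s ': ctx) t

    -- Only well-typed terms can be built: the indices enforce the rules.
    data Term :: [Ty] -> Ty -> * where
      Var :: Ix ctx t -> Term ctx t
      Lam :: Term (s ': ctx) t -> Term ctx (s ':=> t)
      App :: Term ctx (s ':=> t) -> Term ctx s -> Term ctx t

    -- Example: the identity function at any object type.
    identity :: Term ctx (t ':=> t)
    identity = Lam (Var Zero)

A QuickCheck generator for Term must then manufacture the type-level evidence alongside each term, which is where well-typed unification enters the picture.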
Francois Pottier, The design of Mezzo. (Mon 2013-10-14, 11:50-12:35)
Mezzo is a typed programming language in the ML family whose static discipline controls aliasing and ownership. This rules out certain mistakes, including representation exposure and data races, and opens up new opportunities, such as gradual initialization or (more generally, and somewhat speculatively) the definition and enforcement of object protocols.
In this talk, I will explain the basic design of Mezzo and illustrate it with several examples. Although the core static discipline forbids sharing mutable objects, Mezzo offers several mechanisms for working around this restriction. I will discuss two of them, namely locks and adoption & abandon. Both can be viewed as dynamic mechanisms for the control of ownership. (This is joint work with Jonathan Protzenko and Thibaut Balabonski.) See slides, paper, and the Mezzo web site.
Dimitrios Vytiniotis, Wireless programming for hardware dummies. (Mon 2013-10-14, 15:50-16:25)
Software radios have enabled a new wave of research on PHY/MAC design. However, existing frameworks for programming software radios require programmers to heavily hand-optimize their code or to target specialized hardware platforms, making it difficult to modify existing code and experiment with new implementations. In this work we present a new programming language and compiler for low-latency stream processors that improves on this situation. The language includes a collection of monadic combinators designed to naturally distinguish between the control and data paths in a stream computation. We present the language and its compilation, discuss the automatic optimizations it enables, and report preliminary results of a CPU-based 802.11a implementation. See slides.
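A rough sketch of this combinator style, with names of my own choosing rather than the actual language's: a stream computation either returns a control value, takes an item from the input stream, or emits an item to the output stream, and the monad sequences the control path.

    -- A free-monad-style stream computation: Ret is the control path,
    -- Take and Emit are the data path.
    data Comp i o a
      = Ret a                      -- finish, yielding a control value
      | Take (i -> Comp i o a)     -- consume one input item
      | Emit o (Comp i o a)        -- produce one output item

    instance Functor (Comp i o) where
      fmap f (Ret a)    = Ret (f a)
      fmap f (Take k)   = Take (fmap f . k)
      fmap f (Emit o k) = Emit o (fmap f k)

    instance Applicative (Comp i o) where
      pure = Ret
      mf <*> mx = mf >>= \f -> fmap f mx

    instance Monad (Comp i o) where
      Ret a    >>= f = f a
      Take k   >>= f = Take ((>>= f) . k)
      Emit o k >>= f = Emit o (k >>= f)

    -- A transformer that never returns control: apply f to every item.
    mapC :: (i -> o) -> Comp i o a
    mapC f = Take (\x -> Emit (f x) (mapC f))

Keeping the control path (Ret) syntactically apart from the data path (Take/Emit) is what gives a compiler the opportunity to turn the data path into a tight, fused loop.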
Eijiro Sumii, Environmental Bisimulation and Its Open Problems. (Mon 2013-10-14, 16:25-17:00)
See slides.
Steve Zdancewic, Vellvm: Verifying Transformations of the LLVM IR. (Mon 2013-10-14, 17:15-17:55)
The Low-Level Virtual Machine (LLVM) compiler provides a modern, industrial-strength SSA-based intermediate representation (IR) along with infrastructure support for many source languages and target platforms. Much of the LLVM compiler is structured as IR-to-IR translation passes that apply various optimizations and analyses. See slides, paper 1, and paper 2.
In this talk, I will describe the Vellvm project, which seeks to provide a formal framework for developing machine-checkable proofs about LLVM IR programs and translation passes. I'll discuss some of the subtleties of modeling the LLVM IR semantics, including nondeterminism and its use of SSA representation. I'll also describe some of the proof techniques that we have used for reasoning about LLVM IR transformations and give two example applications: (1) formal verification of the SoftBound pass, which hardens C programs against memory safety errors, and (2) the mem2reg transformation, which promotes stack-allocated temporaries to registers and converts "trivial" SSA programs to "minimal, pruned" SSA programs.
Vellvm, which is implemented in the Coq theorem prover, provides facilities for extracting LLVM IR transformation passes and plugging them into the LLVM compiler, thus enabling us to create verified optimization passes for LLVM and evaluate them against their unverified counterparts. Our experimental results show that fully verified and automatically extracted implementations can yield competitive performance.
(This is joint work with Jianzhou Zhao and Milo Martin, both at Penn, and Santosh Nagarakatte, at Rutgers University).
Rich Hickey, Clojure: What Just Happened? (Mon 2013-10-14, 17:55-18:45)
This talk will explore Clojure's adoption and popularity. What factors appeal to its users? See slides.
Nate Foster, NetKAT: Semantic Foundations for Networks. (Tue 2013-10-15, 09:10-09:55)
Recent years have seen growing interest in high-level languages for programming networks. But the design of these languages has been largely ad hoc, driven more by the needs of applications and the capabilities of network hardware than by foundational principles. The lack of a semantic foundation has left language designers with little guidance in determining how to incorporate new features, and programmers without a means to reason precisely about their code. This talk will present NetKAT, a new language for programming networks that is based on a solid mathematical foundation and comes equipped with a sound and complete equational theory. I will describe the design of NetKAT, including primitives for filtering, modifying, and transmitting packets; operators for combining programs in parallel and in sequence; and a Kleene star operator for iteration. I will show that NetKAT is an instance of a canonical and well-studied mathematical structure called a Kleene algebra with tests (KAT) and prove that its equational theory is sound and complete with respect to its denotational semantics. Finally, I will discuss practical applications of the equational theory including syntactic techniques for checking reachability properties, proving the correctness of compilation and optimization algorithms, and establishing a non-interference property that ensures isolation between programs. See slides and technical report.
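For reference, the syntax described in the abstract is tiny; here it is as a Haskell datatype (the constructor names are mine, following the grammar the abstract lists).

    type Field = String   -- e.g. "switch", "port", "dstIP"
    type Value = Int

    -- Predicates: the boolean "tests" of the Kleene algebra with tests.
    data Pred
      = PTrue | PFalse
      | Test Field Value             -- does the packet's field equal the value?
      | PAnd Pred Pred | POr Pred Pred | PNeg Pred

    -- Policies: packet-processing programs.
    data Pol
      = Filter Pred                  -- drop packets failing the predicate
      | Mod Field Value              -- modify a header field
      | Union Pol Pol                -- parallel composition (+)
      | Seq Pol Pol                  -- sequential composition (;)
      | Star Pol                     -- iteration (Kleene star)

The KAT structure is visible directly: predicates form a Boolean algebra, embedded via Filter into a Kleene algebra of policies.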
Iavor Diatchki, Modular Type Checking with Decision Procedures. (Tue 2013-10-15, 09:55-10:35)
In this talk I observe that we may view the type checker for a sophisticated type system as a collection of cooperating decision procedures, each specializing in a single aspect of the type system. Based on this observation, I propose a mechanism for integrating existing decision procedures with the type checker. The method is illustrated with examples in the theory of linear arithmetic. See slides.
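A concrete instance of the kind of goal involved (my example, not the talk's): checking append on length-indexed vectors generates the equality (n + 1) + m ~ (n + m) + 1, which syntactic unification cannot see but a linear-arithmetic procedure settles instantly. The plugin pragma below names one existing realization of the idea, the ghc-typelits-natnormalise solver; stock GHC alone rejects the program.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}
    {-# OPTIONS_GHC -fplugin GHC.TypeLits.Normalise #-}

    import GHC.TypeLits (Nat, type (+))

    data Vec (n :: Nat) a where
      Nil  :: Vec 0 a
      Cons :: a -> Vec n a -> Vec (n + 1) a

    -- The Cons case needs (n + 1) + m ~ (n + m) + 1: a fact of linear
    -- arithmetic, discharged by the solver rather than by unification.
    append :: Vec n a -> Vec m a -> Vec (n + m) a
    append Nil         ys = ys
    append (Cons x xs) ys = Cons x (append xs ys)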
Dan Licata, Git as a HIT. (Tue 2013-10-15, 11:00-11:50)
Homotopy type theory is an extension of the type theories underlying proof assistants such as Agda and Coq, based on interpretations of type theory in homotopy theory. One of the main ideas in homotopy type theory is that proofs of equality may have computational content and influence how a program runs. Higher inductive types (HITs) are a new type former that allows programmers to define types whose proofs of equality have computational content. HITs were originally developed to model basic spaces and constructions in homotopy theory, but we are beginning to find additional applications in other domains. In this talk, I will introduce HITs by way of a simple programming example: modeling a simple notion of "patch" to a document, inspired by version control systems such as Git and Darcs. See slides.
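Haskell has no HITs, but a first-order shadow of the example (my sketch, not the talk's development) shows the data involved: patches are invertible edits, and the equations a HIT would impose as path equalities, such as the groupoid laws or the commutation of independent edits, become program equivalences that must be proved or tested externally.

    type Doc = String

    -- A patch replaces the character at a position, remembering the old
    -- character so that every patch is invertible.
    data Patch
      = Replace Int Char Char      -- position, old value, new value
      | Compose Patch Patch
      | Inverse Patch

    apply :: Patch -> Doc -> Doc
    apply (Replace i _ new) d = take i d ++ [new] ++ drop (i + 1) d
    apply (Compose p q)     d = apply q (apply p d)
    apply (Inverse p)       d = apply (invert p) d

    invert :: Patch -> Patch
    invert (Replace i old new) = Replace i new old
    invert (Compose p q)       = Compose (invert q) (invert p)
    invert (Inverse p)         = p

In the HIT version, a repository state is a point of the type, a patch is a path between points, and laws like apply (Compose p (Inverse p)) = id hold by the structure of paths rather than by external proof.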
Matt Might, Gödel Hashing. (Tue 2013-10-15, 11:50-12:30)
The critical bottleneck in many flow analyzers is a subsumption test between abstract stores. I present a hashing/encoding scheme for compactly representing lattices such that subsumption tests on sparse structures are fast on modern hardware. The scheme is inspired by the encoding of propositions in Gödel's incompleteness theorem, and I generalize it to sets, multisets, tuples, sums, and maps. See slides and paper.
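The basic trick, reconstructed from the abstract in a few hedged lines: assign each lattice atom a distinct prime, hash a set to the product of its members' primes, and subsumption collapses to a divisibility test, a single hardware modulus when the hashes fit in a word.

    import Data.List (foldl', nub)

    primes :: [Integer]
    primes = sieve [2 ..]
      where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- Hash a set of atoms (represented as Ints) to the product of the
    -- corresponding primes; nub guards against duplicate atoms.
    hashSet :: [Int] -> Integer
    hashSet = foldl' (\h a -> h * (primes !! a)) 1 . nub

    -- X is a subset of Y iff hash X divides hash Y.
    subset :: Integer -> Integer -> Bool
    subset hx hy = hy `mod` hx == 0

For example, subset (hashSet [0,1]) (hashSet [0,1,2]) holds because 6 divides 30. Dropping the nub gives the multiset variant, where multiplicity becomes the prime's exponent.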
Alejandro Russo, An information-flow control calculus for non-security experts. (Tue 2013-10-15, 14:00-14:40)
Information-flow security has recently become relevant for preserving the confidentiality of data on popular platforms (e.g. mobile platforms, web services, etc.). A long line of work has been devoted to dynamically controlling information flow using programming-language semantics. Extending such systems with new language constructs is sometimes non-trivial or, worse, error prone. In this light, we present the first steps towards a monadic calculus enforcing the non-interference policy, i.e., that secrets are not leaked to public entities. The calculus is based on a floating-label system and explicitly associates security checks with the read and write effects of a given operation. This makes it possible to synthesize, almost automatically, the security checks for new features (e.g. references, communication channels, etc.) based on their effects. See slides.
(This is work in progress with Pablo Buiras, Deian Stefan, and David Mazières.)
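To make the floating-label idea concrete, here is a small sketch along the lines of the LIO library this work builds on (a two-point lattice and an IORef-based current label of my own devising, not the talk's calculus): unlabeling data is a read effect that raises the current label; creating labeled data is a write effect checked against it.

    import Data.IORef

    data Label = Public | Secret deriving (Eq, Ord, Show)

    -- A computation carrying a mutable current label that floats upward
    -- as more sensitive data is observed.
    newtype LIO a = LIO { runLIO :: IORef Label -> IO a }

    instance Functor LIO where
      fmap f (LIO g) = LIO (fmap f . g)

    instance Applicative LIO where
      pure a = LIO (\_ -> return a)
      LIO f <*> LIO x = LIO (\r -> f r <*> x r)

    instance Monad LIO where
      LIO g >>= f = LIO (\r -> g r >>= \a -> runLIO (f a) r)

    data Labeled a = Labeled Label a

    -- Read effect: observing a value raises the current label.
    unlabel :: Labeled a -> LIO a
    unlabel (Labeled l a) = LIO (\r -> modifyIORef r (max l) >> return a)

    -- Write effect: creating data below the current label is rejected.
    label :: Label -> a -> LIO (Labeled a)
    label l a = LIO (\r -> do
      cur <- readIORef r
      if cur <= l then return (Labeled l a)
                  else ioError (userError "label: would leak"))

The synthesis the abstract mentions then amounts to classifying each new operation's effects as reads or writes and inserting the corresponding label raise or label check.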
Rinus Plasmeijer, Task Oriented Programming. (Tue 2013-10-15, 14:40-15:10)
Task-Oriented Programming (TOP) is a novel programming paradigm for the construction of distributed systems where users work together on the internet. When multiple users collaborate, they need to interact with each other frequently. TOP supports the definition of tasks that react to the progress made by others. With TOP, complex multi-user interactions can be programmed in a declarative style just by defining the tasks that have to be accomplished, eliminating the need to worry about the implementation details that commonly frustrate the development of applications for this domain. TOP builds on four core concepts: tasks, which represent computations or work to do and have an observable value that may change over time; data sharing, which enables tasks to observe each other while the work is in progress; generic, type-driven generation of user interaction; and special combinators for sequential and parallel task composition. See slides and paper.
The iTask system is an implementation of TOP: a domain-specific language embedded in Clean and offered via a combinator library. It can be downloaded here.
Benjamin Pierce, An Architecture for Pervasive Information Flow. (Tue 2013-10-15, 15:50-16:30)
The CRASH/SAFE project is building a network host that is highly resilient to cyber-attack. One pillar of the design is a pervasive mechanism for tracking information flow. At the lowest level, the SAFE hardware offers fine-grained tagging and efficient support for propagating and combining tags on each instruction dispatch. The operating system virtualizes these generic facilities to provide the information-flow abstract machine on which user programs run. In this talk, we'll take a guided tour of (a simplified model of) the SAFE hardware and software and an end-to-end proof of noninterference for this model. See slides.
Roberto Ierusalimschy, An Overview of Lua. (Tue 2013-10-15, 16:35-17:20)
Lua is a programming language developed at the Catholic University in Rio de Janeiro that came to be the leading scripting language in video games. Lua is also used extensively in embedded devices, such as set-top boxes and TVs, and in other applications such as Adobe Lightroom and Wikipedia. In this talk we will see where Lua is being used, why it is being used, and what the language is like. See slides.
Rich Hickey, The Database as a Value. (Tue 2013-10-15, 17:25-18:15)
This talk will be a very brief introduction to Datomic. See slides.
Conor McBride, World enough and time. (Wed 2013-10-16, 08:30-09:20)
This talk proposes to refine the traditional presentation of dependent type theories with a general notion of information flow, by labelling each colon in the typing rules with a "world", so that each world has its own quantifier. Worlds are preordered, with the variable rule ensuring that the usage world is above the binding world. Types are constructed in a maximal world. Every upward-closed set of worlds induces a Church-to-Curry-style erasure. A types-above-values preorder allows us to make a proper separation of ghost from program variables when writing specifications in types. Haskell, meanwhile, has types and values in incomparable worlds: we can consider adding a genuine Pi-type by adding a new world below both. See whiteboard 1, whiteboard 2, whiteboard 3, and transcription (thanks to Francois Pottier).
Olin Shivers, An IR for a lambda-calculus compiler. (Wed 2013-10-16, 09:20-10:00)
See slides and paper.
Oleg Kiselyov, Generators and Pretty Printing. (Wed 2013-10-16, 10:30-11:10)
See slides and paper.
Philip Wadler, Blame calculus for Javascript - where it goes wrong. (Wed 2013-10-16, 11:10-11:55)
We introduce the blame calculus, which adds the notion of blame from Findler and Felleisen's contracts to a system similar to Siek and Taha's gradual types and Flanagan's hybrid types. We characterise where positive and negative blame can arise by decomposing the usual notion of subtype into positive and negative subtyping, and show that these recombine to yield naive subtyping. Naive subtyping has previously appeared in type systems that are unsound, but we believe this is the first time naive subtyping has played a role in establishing type soundness. See slides and paper.
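The positive/negative discipline is easiest to see in a toy monitor (my sketch in the style of Findler and Felleisen's wrappers, over a single universal value type rather than the paper's typed calculus): a function contract checks its result with the original blame labels and its argument with the labels swapped.

    type BlameLabel = String

    -- A single universal value type keeps the sketch self-contained.
    data Val = VI Int | VF (Val -> Val)

    data Contract = Flat (Val -> Bool)     -- a first-order check
                  | Contract :-> Contract  -- a function contract

    -- assert pos neg c v: monitor v against c, blaming pos for failures
    -- of the value itself and neg for failures caused by the context.
    assert :: BlameLabel -> BlameLabel -> Contract -> Val -> Val
    assert pos _ (Flat p) v
      | p v       = v
      | otherwise = error ("blame " ++ pos)
    assert pos neg (dom :-> rng) (VF f) =
      -- Arguments flow in from the context, so the labels swap (negative
      -- position); results keep the original polarity (positive position).
      VF (\x -> assert pos neg rng (f (assert neg pos dom x)))
    assert pos _ (_ :-> _) _ = error ("blame " ++ pos)

The paper's key lemma relates positive subtyping to the impossibility of positive blame, and dually for negative subtyping, which is exactly the label-swapping visible in the function case above.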
Don Syme, Extreme Type Providers. (Wed 2013-10-16, 14:30-15:10)
I introduce F# 3.0 type providers, both conceptually and through demonstration, and discuss some of the themes in data integration into a functional programming language. See slides.
Ralf Hinze, Naturality, but not as you know it. (Wed 2013-10-16, 15:10-15:40)
A function is natural if it commutes with mapping; it is strongly natural if it commutes with filtering. In this talk I show how this property can be put to good use in correctness proofs (key-based sorting and searching). See slides and paper.
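Stated as executable properties for a polymorphic list transformer (my example, not the talk's; reverse happens to satisfy both laws):

    import Test.QuickCheck

    f :: [a] -> [a]
    f = reverse

    -- Naturality: f commutes with mapping (the free theorem).
    prop_natural :: Fun Int Int -> [Int] -> Bool
    prop_natural (Fun _ g) xs = map g (f xs) == f (map g xs)

    -- Strong naturality: f also commutes with filtering.
    prop_strong :: Fun Int Bool -> [Int] -> Bool
    prop_strong (Fun _ p) xs = filter p (f xs) == f (filter p xs)

Every such polymorphic function is natural for free, but not strongly natural: take 1 passes the first property yet fails the second, since filter even (take 1 [1,2]) is empty while take 1 (filter even [1,2]) is [2]. It is the stronger law that drives the sorting and searching proofs.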
John Hughes, Coverage criteria for functional programs. (Wed 2013-10-16, 15:40-16:00)
See slides and paper.
Didier Remy, Ambivalent Types for Principal Type Inference with GADTs. (Thu 2013-10-17, 08:50-09:30)
GADTs have been in use for many years in Haskell, but were only made available to OCaml programmers about a year ago. I will describe the solution that is currently in use in OCaml, which is orthogonal to the previous solutions that had been proposed for type inference with GADTs and could complement them. See slides and website.
Our solution is based on ambivalent types: types that also carry information about which equalities may be used implicitly when assigning them to expressions. An ambivalent type can only be used in a typing context that makes all the equalities it contains valid.
The talk focuses on the orthogonality between our solution and those based on aggressive propagation of type annotations, and on how ambivalent types could benefit Haskell as well. (Joint work with Jacques Garrigue.)
Martin Odersky, The Trouble with Types. (Thu 2013-10-17, 09:30-10:10)
It's hard to find a topic that divides programming language enthusiasts more than the issue of static typing. To about one half of the community, types are a godsend. They provide structure to organize systems, prevent whole classes of runtime errors, make IDEs more helpful, and provide a safety net for refactorings. To the other half, static types are burdensome ceremony or worse. They limit the freedom of expression, are in the way of rapid prototyping, make program code bulkier, and making sense of an opaque type error message is often an exercise in frustration. See slides and paper.
The question is: What needs to happen to make static typing less controversial than it is today? Ideally, a type system should refuse as many erroneous programs as possible, but accept all correct programs one would want to write. It should provide useful documentation about interfaces but get out of the way otherwise. It should be simple to define, with a clear semantics and a guarantee of correctness. But it should also be easy to learn and natural to use. The problem is that each of these criteria can be met in isolation, but addressing them jointly requires many difficult tradeoffs.
In the talk I outline some of the main categories of static type systems as well as some new developments, and discuss the tradeoffs they make.
Amal Ahmed, Verifying Compilers using Multi-Language Semantics. (Thu 2013-10-17, 10:30-11:15)
Existing verified compilers are proved correct under a closed-world assumption, i.e., that the compiler will only be used to compile whole programs. But most software systems today are composed of components, from low-level libraries to code written in different languages compiled to a common target. We are pursuing a new methodology for verifying correct compilation of program components, while formally allowing linking with target code of arbitrary provenance. A key challenge is how to define the right notion of compiler correctness for this setting. We argue that compiler correctness should require that if a source component s compiles to a target component t, then t linked with some arbitrary target code t' should behave the same as s interoperating with t'. To express the latter, we formalize an appropriate semantics of interoperability between source components and target code. We are using this approach to verify a three-pass compiler from System F to TAL. In this talk, I focus on how we have had to extend our TAL to reason about components (and not just individual basic blocks) so we can define appropriate interoperability between assembly and higher-level languages. See slides.
John Launchbury, Accelerating Secure Share Computations. (Thu 2013-10-17, 11:15-11:50)
See slides and paper.
Iavor Diatchki, GHC type-checking demo. (Thu 2013-10-17, 11:50-12:00)
David Mazières, Decentralized Information Flow Control with the LIO library. (Fri 2013-10-18, 09:00-09:45)
See slides. To get the code, use:

    git clone http://www.scs.stanford.edu/~dm/repos/wg28-2013.git
Corky Cartwright, Reconciling inheritance and subtyping. (Fri 2013-10-18, 09:45-10:25)
According to folklore among programming language researchers, the identification of subtyping and inheritance in mainstream OO languages like Java and C# is misguided, despite the fact that simple versions of these type systems have been proved sound relative to operational semantics for these languages. This paper presents a denotational model of OOP, akin to Cardelli's record model, that justifies the typing conventions in mainstream OO languages and breaks typing rules based on record subtyping. In other words, given what we believe is a proper model of mainstream nominal OOP, subtyping is inheritance, and the usual structural typing rules are unsound. See slides and extended abstract.
Fritz Henglein, Regular Expression Parsing, Greedily and Stingily. (Fri 2013-10-18, 11:00-11:40)
We present a new algorithm for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. It executes in time O(mn) for REs of size m and input strings of size n, and outputs a compact bit-coded parse-tree representation. It improves on previous algorithms by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k < m is the number of alternatives and Kleene stars in the RE; and not requiring the input string to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, require substantially more log storage, employ 3 passes where the first consists of reversing the input, or do not, or are not known to, produce a greedy parse. Our empirical evaluation indicates that our unoptimized C-based prototype is surprisingly competitive with RE tools that do not perform full parsing, such as Grep. The development of this work has been driven by a functional regular-expressions-as-types approach. See slides.
(Joint work with Bjørn Grathwohl, Lasse Nielsen and Ulrik Rasmussen.)
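The bit-coded representation is easy to convey in miniature. What follows is a hedged reconstruction: a naive exponential-time backtracking parser standing in for the paper's two-pass O(mn) algorithm, using 0 for a left alternative and for "one more star iteration", 1 for the right alternative and for "stop". The bits record only the choices made, since the RE itself determines everything else.

    data RE  = Chr Char | Seq RE RE | Alt RE RE | Star RE
    data Bit = O | I deriving (Eq, Show)

    -- All parses of a prefix of the input, most greedy first: each result
    -- pairs the bit-code of the consumed prefix with the remaining input.
    parse :: RE -> String -> [([Bit], String)]
    parse (Chr c) (c' : rest) | c == c' = [([], rest)]
    parse (Chr _) _                     = []
    parse (Seq e1 e2) s =
      [ (b1 ++ b2, r2) | (b1, r1) <- parse e1 s, (b2, r2) <- parse e2 r1 ]
    parse (Alt e1 e2) s =
      [ (O : b, r) | (b, r) <- parse e1 s ] ++
      [ (I : b, r) | (b, r) <- parse e2 s ]
    parse (Star e) s =                -- NB: loops if e matches the empty string
      [ (O : b ++ bs, r') | (b, r) <- parse e s, (bs, r') <- parse (Star e) r ]
      ++ [([I], s)]

    -- The greedy parse of the whole string, if any.
    greedy :: RE -> String -> Maybe [Bit]
    greedy e s = case [ b | (b, "") <- parse e s ] of
      b : _ -> Just b
      []    -> Nothing

A bit string of this shape is what the real algorithm streams to its log; decoding the bits against the RE recovers the full parse tree.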
Dave MacQueen, My new/old agenda. (Fri 2013-10-18, 11:40-12:00)
See slides and paper.
Ralf Hinze (email: ralf.hinze@cs.ox.ac.uk)