School of Computer Science, University of St Andrews, April 26th to 28th 2017

LMS Keynote Lecture in Discrete Mathematics

Graph Isomorphism I

We discuss the group-theoretic aspects at the core of recent progress on the Graph Isomorphism problem: a lemma (the “Unaffected Stabilizers Lemma”) and its algorithmic application (the “Local Certificates algorithm”). The latter produces a canonical structure, breaking the symmetry that has been the bottleneck for Luks’s classic (1980) Divide-and-Conquer strategy.

(Seminar) Graph Isomorphism II

We begin with further details of the group theory involved in the Graph Isomorphism algorithm, followed by a discussion of the combinatorial techniques. Classical (binary) and higher (k-ary) coherent configurations will be the central objects of the discussion.

Professor László Babai will also give a talk at the Scottish Combinatorics Meeting on the 25th of April, and has kindly agreed to give an introductory course for students on the 29th of April, organised by SUMS and STACS.

Confirmed Speakers

Perdita Stevens (University of Edinburgh) - Bisimulations, bidirectionality and the future of software engineering

A bidirectional transformation is a means of maintaining consistency between two - or more - data sources. In model driven development, the data sources are models: that is, abstract, usually graphical, representations of some aspects of a system. Model driven development has so far been successful in some contexts, but not others; a difficulty that remains is how to make it sufficiently agile, while retaining the advantages it presents where it works. The vision of future software engineering that I plan to put before you involves groups of people who are experts in something but not necessarily software developers, each working with their own model, and using bidirectional transformations to manage consistency between them. I will explain (something about) why that would be a good thing, what needs to happen for the vision to become reality, and what bisimulations have to do with it.

Conor McBride (University of Strathclyde) - What’s it like?

When we write programs or prove theorems about a programming language, the first thing we often do is write down a datatype for its syntax. But a syntax is not just any old datatype: there are usually notions of “scope” and “sort” which impose structure on terms and drive the implementations of operations such as substitution. In this talk, I’ll adapt the technology of datatype-generic programming to the specific needs of types for syntax, showing how types and operations can be generated from a first-class notion of grammar. Why keep a dog and bark yourself?

Felix Fischer (University of Glasgow) - Truthful Outcomes from Non-Truthful Position Auctions

The area of mechanism design is concerned with the development of algorithms that produce good outcomes given inputs from self-interested individuals. One of the stars of mechanism design theory is the Vickrey-Clarke-Groves (VCG) mechanism, which makes it optimal for each individual to truthfully reveal its input. In the real world, however, the VCG mechanism is used with surprising rarity, and the mechanisms we actually find are non-truthful. An example of this phenomenon is the position auction used by internet search engines like Google and Bing to allocate space for advertisements displayed next to genuine search results. Here only non-truthful mechanisms have ever been used, and what in theory looks like a beginner’s mistake has been a huge success in practice. An obvious advantage of the non-truthful mechanisms is that they use simpler payments, and it has been known for some time why this simplicity does not lead to chaos when participants decide strategically how to bid. We will see that the simplicity of payments also comes with a greater robustness to uncertainty on the part of the search engines regarding the relative values of positions. The talk is based on joint work with Paul Dütting (LSE) and David C. Parkes (Harvard).
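The VCG principle the abstract alludes to can be made concrete: each winner pays the externality it imposes on the others, i.e. the optimal welfare the other bidders would obtain without it, minus the welfare they obtain with it present. The following is a minimal Python sketch for a position auction with click-through rates; the function names and toy numbers are ours for illustration, not from the talk.

```python
def efficient_allocation(values, ctrs):
    """Assign the highest-value bidders to the highest-CTR slots."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    return order[:len(ctrs)]            # winners[j] occupies slot j

def welfare(values, ctrs, winners):
    """Total value generated by an allocation."""
    return sum(ctr * values[i] for ctr, i in zip(ctrs, winners))

def vcg_payments(values, ctrs):
    """VCG payment of each winner = the externality it imposes on the rest."""
    ctrs = sorted(ctrs, reverse=True)
    winners = efficient_allocation(values, ctrs)
    w_all = welfare(values, ctrs, winners)
    payments = {}
    for slot, i in enumerate(winners):
        # Optimal welfare of the other bidders if i were absent ...
        others = [v for m, v in enumerate(values) if m != i]
        w_without = welfare(others, ctrs, efficient_allocation(others, ctrs))
        # ... minus the welfare they actually obtain with i present.
        payments[i] = w_without - (w_all - ctrs[slot] * values[i])
    return payments
```

With a single slot this reduces to the familiar Vickrey second-price auction: `vcg_payments([10, 7, 3], [1.0])` charges the winner 7, the value of the highest losing bid.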

Edwin Brady (University of St Andrews) - State Machines All The Way Down

A useful pattern in dependently typed programming is to define a state transition system, for example the states and operations in a network protocol, as an indexed monad. We index each operation by its input and output states, thus guaranteeing that operations satisfy pre- and post-conditions, by typechecking. However, what if we want to write a program using several systems at once? What if we want to define a high level state transition system, such as a network application protocol, in terms of lower level states, such as network sockets and mutable variables?

In this talk, I will present an architecture for dependently typed applications based on a hierarchy of state transition systems, implemented in a generic data type ST. This is based on a monad indexed by contexts of resources, allowing us to reason about multiple state transition systems in the type of a function. Using ST, we show: how to implement a state transition system as a dependent type, with type level guarantees on its operations; how to account for operations which could fail; how to combine state transition systems into a larger system; and, how to implement larger systems as a hierarchy of state transition systems. I will illustrate the system with a high level network application protocol, implemented in terms of POSIX network sockets.
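Python cannot express the type-level guarantees the abstract describes, but the shape of the pattern can be mimicked at runtime: each operation declares the state it requires and the state it leaves behind. A hypothetical sketch follows (the protocol and its states are invented for illustration; in Idris, the ST construction would reject the bad call at compile time rather than at run time):

```python
class Protocol:
    """Each operation declares a precondition (required state) and a
    postcondition (resulting state), mimicking an indexed monad's
    input/output state indices with runtime checks."""

    def __init__(self):
        self.state = "Closed"

    def _transition(self, pre, post):
        if self.state != pre:
            raise RuntimeError(f"expected state {pre!r}, was {self.state!r}")
        self.state = post

    def open(self):
        self._transition("Closed", "Open")

    def send(self, msg):
        self._transition("Open", "Open")    # send preserves the Open state
        return len(msg)

    def close(self):
        self._transition("Open", "Closed")
```

Here `Protocol().open()` followed by `send` and `close` runs fine, while calling `send` on a closed protocol raises immediately; a dependent typechecker catches the same mistake before the program ever runs.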

Mehrnoosh Sadrzadeh (Queen Mary University of London) - Monoids, Vectors, and Tensors for Natural Language

Formalising different aspects of natural language has kept computational linguists, logicians, and mathematicians busy for decades. On the syntactic side, we have Chomsky and generative grammars from the 1950s, Ajdukiewicz and Bar-Hillel and functional grammars from the 1930s, and Lambek and residuated monoids from the 1950s (to name a few). On the semantic side, we have the seminal work of Montague on lambda calculus and higher-order logics. Recently, distributional semantics, which is based on Firth and Harris’s insights, has become a successful method of representing the meanings of words. This method takes advantage of vectors and linear algebra to reason about the information encoded in large quantities of data, available in the form of text and corpora of documents. I will present the work of my colleagues and me on extending distributional semantics from words to phrases and sentences. In order to realise this passage, we use a notion of grammatical compositionality formalised by Lambek, also employed in other categorial grammars such as CCG. Our model relies on the notion of tensors and operations from multilinear algebra. I will present the theory behind the model and its applications to natural language tasks such as phrase and sentence similarity, paraphrasing, classification, and entailment. On many occasions the predictions of the tensor models outperform those of simple vector addition and multiplication.