My Most Influential Work

John Hughes

This is an overview of my most influential research. Nowadays it's possible for a researcher to find out which papers are influential, thanks to CiteSeer, the computing researcher's equivalent of Google. CiteSeer analyses online articles and the way they refer to each other; a paper which is often cited is clearly influential. Now, CiteSeer's data is far from perfect---for example, I appear under several different names, and not all the J. Hugheses refer to me---but with a bit of work, quite a reliable picture emerges. So here is the result: an overview of my dozen most frequently cited publications, as of May 2005.

My research area is functional programming, a subject I fell in love with in 1975 when I learned LISP. My fundamental goal is to make programming easier---I like my programs to be short, beautiful, and elegant, and I hate drudgery. Functional programming languages take a lot of the drudgery out of programming by their very nature, and moreover provide an excellent framework for programmers to automate even more drudgery. I like to take a difficult programming problem and solve it once, never to worry about it again, which can be done by writing a library that captures the solution so that it can be reused. Designing libraries which are sufficiently flexible that they can be reused widely is thus another of my interests. Finally, programs which are beautiful and elegant aren't always fast! That has led me to work on ways to make them fast---program analysis, transformation, and so on---so that I, the programmer, can have my cake and eat it too: program elegantly, and still end up with fast code.

S. Peyton Jones and J. Hughes. Haskell 98: A Non-strict, Purely Functional Language. February 1999. http://www.haskell.org/onlinereport/
1071 citations
71 citations per year

By far my most frequently cited work is the definition of the programming language Haskell. This is the work of many people, of course, but I have been involved throughout the process. I joined the Haskell committee at its formation in 1987, together with representatives of all the leading research groups in this kind of functional programming. At that time, every research group had its own compiler and its own language, and we saw a great benefit in joining forces. The first language definition was ready in 1990 (and the first compiler, six weeks later), and the "final" report (which I co-edited) was published early in 1999. Clearly, we succeeded wildly---Haskell has had a tremendous influence on the research community, even outside the field of functional programming. The citation count above combines CiteSeer's entries for the several versions of the report, each of which has many citations.

Designing a programming language is a huge job, and the devil is in the details. Every aspect is debated to and fro, alternatives considered, arguments made, and finally, consensus reached---which makes it hard in most cases to identify individual contributions. One aspect which I have influenced, though, is memory use. The memory requirements of functional programs can be quite unpredictable, leading to "space leaks" (a phenomenon I discovered) whereby programs use quite unreasonable amounts of memory. The fix is usually quite simple: a tiny modification at just the right place in the program. There are countless ways, though, in which the language design can hinder the programmer from making just the change which is needed to plug the leak. Haskell's design does not hinder programmers in that way, and I can take credit for that.

John Hughes. Why functional programming matters. Computer Journal, 32(2):98--107, 1989.
213 citations
13 citations per year

This paper shows how two key features of functional languages---higher-order functions (which take other functions as arguments) and lazy evaluation (which defers computations until their results are really needed)---combine to let programmers break algorithms into small, flexible, reusable parts. In other words, it argues that functional languages support the development of reusable software much better than conventional languages do. At the time I wrote it, researchers were more likely to emphasize mathematical reasoning and the absence of side-effects---but it's difficult to sell a programming paradigm on the things it lacks! The perspective in my paper has influenced many researchers since, and can perhaps be said to lie partly behind a focus in recent years on "domain-specific embedded languages". These are flexible libraries of functions that permit programmers to solve problems of a certain class just by combining library functions, in effect working in a special-purpose programming language for that particular kind of task.
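
To give a flavour (in Haskell syntax, which postdates the paper; the code here is my own paraphrase rather than a quotation), lazy evaluation lets one function produce an unbounded list of approximations to a square root, while a separate, reusable higher-order function decides how many of them to consume:

    -- Generate an infinite list of Newton-Raphson approximations to sqrt n.
    approximations :: Double -> [Double]
    approximations n = iterate (\x -> (x + n / x) / 2) 1.0

    -- A reusable selector: take values from any list of approximations
    -- until two successive ones are within eps of each other.
    within :: Double -> [Double] -> Double
    within eps (x:y:rest)
      | abs (x - y) <= eps = y
      | otherwise          = within eps (y:rest)
    within _ xs = head xs   -- unreachable for the infinite lists used here

    -- Glue the parts together: neither part mentions the other's concerns.
    mySqrt :: Double -> Double
    mySqrt n = within 1e-10 (approximations n)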

I wrote the paper in 1984 as a post-doc, but misjudged its significance completely. I thought it would be unpublishable, because it contained no difficult research results, just a manifesto and some nice programming examples. So I circulated it privately to friends, who passed it on to others, and soon I found it turning up in the most unexpected places. Finally, after five years, I was invited to submit it to the Computer Journal.

J. Hughes. Generalizing monads to arrows. Science of Computer Programming, 37:67--111, 2000.
33 citations
6.6 citations per year

This much more recent paper is also concerned with program libraries, or rather their interfaces---although perhaps that isn't evident from the title! It builds on one of the biggest breakthroughs of the 1990s in functional programming, Wadler's introduction of monads for structuring functional programs. Monads have several uses, but one of the most important is as building blocks for program libraries. Libraries built on monads may do quite different things, but they share a part of their interface. That makes it possible to write "meta-libraries" that play well with any such library, thus providing shared functionality that library authors need not duplicate and library users need not relearn for each new library. Drudgery eliminated!
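
As a small illustration of that shared interface (my example, essentially the standard library's sequence function), the definition below is written once against the Monad interface and then works unchanged with every monad-based library, whether it deals with parsing, state, exceptions or I/O:

    -- Run a list of computations in any monad, collecting their results.
    sequenceAll :: Monad m => [m a] -> m [a]
    sequenceAll []         = return []
    sequenceAll (act:acts) = do
      x  <- act
      xs <- sequenceAll acts
      return (x : xs)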

For all their usefulness, monads are limited to a specific kind of library. In this paper, I introduced a richer and more widely applicable interface, the "arrow" interface, drawing inspiration from category theory. I showed via a number of examples that the extra generality is valuable, including a library for programming dynamic web-pages which has inspired further work in that area. Since I wrote the paper, arrows have turned out to be particularly useful for process-like abstractions. They're used, for example, for functional reactive programming, and are now supported via special syntax in Haskell implementations.
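
As a taste of the interface (the example is mine, using the arrow operations as they appear in Haskell's Control.Arrow today), the combinator below runs two arrow computations on the same input and adds their results; because it uses only the arrow operations, it works for plain functions, monadic computations, stream processors, and any other instance:

    import Control.Arrow

    -- Combine two arrow computations over the same input and sum the results.
    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = (f &&& g) >>> arr (uncurry (+))

    -- For ordinary functions, the simplest arrow instance:
    example :: Int
    example = addA (+ 1) (* 2) 10   -- (10 + 1) + (10 * 2) = 31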

Despite the fearsome title, the paper is quite an easy read, and does not assume that the reader knows what a monad is.

J. Hughes, L. Pareto, and A. Sabry. Proving the Correctness of Reactive Systems Using Sized Types. In Conference Record of the 23rd ACM SIGPLAN Symposium on Principles of Programming Languages (POPL '96), pages 410--423, 1996.
54 citations
6 citations per year
J. Hughes and L. Pareto. Recursion and dynamic data-structures in bounded space: Towards embedded ML programming. In Proc. of ICFP'99, pages 70-81. ACM Press, 1999.
34 citations
5.7 citations per year

This pair of papers is concerned with the sizes of dynamic data-structures, such as lists or trees---the very stuff of functional programming. It's in the nature of such data-structures that their sizes are hard to predict in advance, and that makes them unsuitable for use in embedded systems. No-one wants their anti-lock brakes to run out of memory at a critical moment! Today, dynamic data-structures simply aren't used in such applications, but these papers suggest a way to make that possible. They show how the sizes of such data-structures can be recorded in their types, in such a way that the type-checker can ensure that the size at run-time does not exceed the size in the type. There are other ways to achieve the same effect, but our system is quite simple. The first paper concerns the sizes of individual data-structures, while the second shows that the same approach can guarantee a bound on the total memory needed by a program.
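
The type system in these papers is not standard Haskell, but the flavour can be suggested with a modern Haskell analogy (the sketch below is an analogy only, not the sized-type system of the papers): a vector whose length appears in its type, so that size-violating operations are rejected by the type checker rather than failing at run time.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Natural numbers at the type level, used to index vectors by length.
    data Nat = Zero | Succ Nat

    data Vec (n :: Nat) a where
      Nil  :: Vec 'Zero a
      Cons :: a -> Vec n a -> Vec ('Succ n) a

    -- The type guarantees this is never applied to an empty vector.
    safeHead :: Vec ('Succ n) a -> a
    safeHead (Cons x _) = x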

P. Wadler and R. J. M. Hughes. Projections for strictness analysis. In Conf. on Functional Programming and Computer Architecture, LNCS vol. 274. Springer, 1987.
100 citations
5.6 citations per year
R.J.M. Hughes. Backwards analysis of functional programs. In D. Björner, A.P. Ershov, and N.D. Jones, editors, Partial Evaluation and Mixed Computation, pages 187--208. Elsevier Science Publishers B.V. (North-Holland), 1988.
49 citations
2.9 citations per year

Lazy evaluation---deferring computations until their results are really needed---brings benefits to the programmer, as I argued in Why Functional Programming Matters, but it does impose a certain run-time cost, because of the mechanism needed first to delay a computation, and then to discover that it is needed and perform it. There was therefore a lot of interest in the 1980s in strictness analysis, a compile-time analysis to identify computations whose result is always needed, and which can therefore be performed early. To this day, this is an important optimisation in Haskell compilers. The first methods for strictness analysis could identify function parameters that were always needed, but not components of data-structures with the same property, which I saw as a serious deficiency. I wrote a series of papers addressing that problem, by developing a framework for "backwards analysis" of functional programs, that is, by propagating information about the context in which expressions are used. I used a rather clumsy mathematical model, however. Phil Wadler saw that, at least for strictness analysis, my contexts could be modelled by domain projections, and we wrote a paper together developing a strictness analysis method for data-structures based on that model. A strictness analyser based on these principles is used in the Glasgow Haskell compiler today. Projections also turned out to be a useful mathematical model for other analyses involving data-structures, especially those connected with "partial evaluation".
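
A tiny example (mine, not from the papers) of what a strictness analyser discovers:

    -- In f the argument y is needed on every path, so a compiler may safely
    -- evaluate it before the call; in g it is needed only on some paths, so
    -- it must remain a delayed computation.
    f :: Int -> Int -> Int
    f x y = x + y

    g :: Int -> Int -> Int
    g x y = if x > 0 then x else y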

John Hughes. The design of a pretty-printing library. In J. Jeuring and E. Meijer, editors, Advanced Functional Programming, volume 925 of Lecture Notes in Computer Science. Springer-Verlag, 1995.
46 citations
4.6 citations per year

Another paper on library design. In this case, the library addresses the problem of pretty-printing data-structures, especially trees of one sort or another.  The problem arises in any software that generates program code: of course, we want the generated code to have a readable layout. Inside compilers, programs are transformed into a succession of more and more efficient forms, and the compiler developer must be able to print these out with a readable layout, so as to be able to see what the compiler is doing. Even some HTML generators save their HTML with a readable layout, so that a human can understand, and if necessary edit, the HTML that has been generated. Thus choosing a good layout for data is a recurring problem, but a complicated one to solve well. This is an ideal problem to attack via a program library.

My paper develops such a library with a very simple interface: five functions are all that is needed to construct "documents" that can be laid out in a wide variety of readable ways. The programmer who needs to write a pretty-printer need not think about the choice of layout at all, only use these five functions to describe the possible layouts of the data. The library then chooses the best layout, within the constraints of the page width.
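
Today's Haskell pretty-printing library Text.PrettyPrint descends from this design, so a toy example in its style may help (the example is mine, not one from the paper): the programmer only describes the possible layouts of a small expression tree, and the library picks the best one for the page width.

    import Text.PrettyPrint  -- the "pretty" package, a descendant of this library

    data Expr = Num Int | Add Expr Expr

    -- Describe possible layouts; sep lets the library choose between putting
    -- the pieces on one line or stacking them vertically.
    pretty :: Expr -> Doc
    pretty (Num n)   = int n
    pretty (Add l r) = sep [pretty l, text "+", pretty r]

    main :: IO ()
    main = putStrLn (renderStyle (style { lineLength = 20 }) (pretty expr))
      where expr = Add (Num 1) (Add (Num 2) (Num 3))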

This library simplified pretty-printing so much that it was rapidly adopted for all the various pretty-printers in the Haskell compilers. But the paper is also interesting for the way that the library was developed. I began from a simple mathematical specification of how the functions should behave, and then showed two different systematic methods by which implementations of the library could be developed. Thus the library was not only useful, but proven correct. Moreover, the implementations finally derived were so intricate that it is hard to see how they could have been written using only intuition as a guide. The optimisations that make the library efficient also make the code incomprehensible---but in this case, that doesn't matter, because the code is derived, not written.

This paper established pretty-printing as an interesting topic of study in the functional programming community, and has inspired a succession of other authors to improve on my original design and implementation. Moreover, the development method has also been adopted and adapted by others to solve other problems.

John Hughes. Lazy memo-functions. In Jean-Pierre Jouannaud, editor, Functional Programming Languages and Computer Architecture, number 201 in Lecture Notes in Computer Science, pages 129--146, Nancy, France, September 1985. Springer-Verlag.
79 citations
4 citations per year

Memoisation is an old technique for optimising a function, which works by recording all the arguments the function is called with, and all the results it returns, and then reusing results when the function is called with the same argument a second time. If this happens often, then memoisation can dramatically improve the performance of a program. Although memoisation is an old technique, it wasn't obvious that it could be applied at all to lazy functions, because in order to compare a new argument with old ones, it must be evaluated before the function is called---thus undoing the effect of lazy evaluation. To solve that problem, I proposed a new kind of memo-function in which results were reused only for calls with exactly the same argument---the same pointer in memory---rather than for any equal argument (that is, any data-structure with the same shape and contents). That idea brought a number of improvements: it made memo-functions more efficient, because old and new arguments could be compared by a simple pointer-comparison; it made memo-functions work better with lazy evaluation, because at least the components of arguments need not be evaluated to compare their addresses; it made it possible to garbage collect the memo-tables---that is, to discard old arguments and results, in the knowledge that the function can never be called with exactly the same argument again. I also showed that my "lazy" memo-functions, despite not sharing as much computation as the original sort, were still useful, by giving a number of attractive programming examples that the technique made possible.
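
Haskell as we know it did not yet exist when I wrote this paper, but the idea can be sketched in modern terms using GHC's stable names, which provide exactly the pointer-identity test required (the sketch below is mine, not the implementation in the paper, and it omits garbage collection of the memo-table):

    import Data.IORef
    import qualified Data.IntMap.Strict as IM
    import System.Mem.StableName

    -- Results are reused only when the argument is the very same heap object,
    -- detected via stable names, so arguments are never forced or compared
    -- structurally.
    lazyMemo :: (a -> b) -> IO (a -> IO b)
    lazyMemo f = do
      table <- newIORef IM.empty
      return $ \x -> do
        name <- makeStableName x
        let h = hashStableName name
        entries <- readIORef table
        case lookup name (IM.findWithDefault [] h entries) of
          Just y  -> return y                       -- same pointer seen before
          Nothing -> do
            let y = f x                             -- the result stays lazy too
            modifyIORef' table (IM.insertWith (++) h [(name, y)])
            return y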

John Hughes. Type specialisation for the lambda calculus; or, a new paradigm for partial evaluation based on type inference. In Olivier Danvy, Robert Glück, and Peter Thiemann, editors, Partial Evaluation, number 1110 in Lecture Notes in Computer Science, Dagstuhl, Germany, February 1996. Springer-Verlag.
33 citations
3.7 citations per year

This paper is in the area of program specialisation, or partial evaluation as it is also called. A specialiser takes a program as input, along with some, but not all, of its inputs, and performs the computations that now depend only on known inputs, generating a residual program in which these computations no longer appear. Residual programs can be much faster than the original, if a large part of the work can be done given only the known inputs. The approach is most useful when the same residual program is run many times, for different values of the remaining inputs, since then the cost of doing the program specialisation can be amortized over many runs. A typical example might be a generic equation solver, which computes solutions given a set of equations and values for coefficients. If we need to solve the same equations repeatedly, for different coefficient values, then we can specialise the equation solver for that set of equations, and derive a fast residual program which just inputs coefficients and outputs solutions.
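
A hand-worked example may make this concrete (the example is mine, not from the paper): specialising a general power function to the known exponent 3 performs all the recursion at specialisation time, leaving a residual program that only multiplies.

    -- The general program: exponent and base are both inputs.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- The residual program a specialiser could produce for the known input n = 3.
    power3 :: Int -> Int
    power3 x = x * (x * (x * 1))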

Clearly, a specialised program might be able to use specialised types, for example replacing variable-sized data structures such as lists by fixed-size ones such as arrays, or replacing union types by one of the alternatives, if the alternative can be predicted at specialisation time. But prior to my work, types could only be specialised in rather limited and ad hoc ways. In this paper I not only developed a unified framework for type specialisation, but also showed that all specialisation could be phrased in terms of type specialisation, hence the "new paradigm" of the title. My approach was modular, which enabled me to combine individual specialisation features in ways which had not been possible earlier, and I was also able to demonstrate "optimal specialisation of a typed interpreter", an open problem of almost a decade's standing. This last contribution in particular has stimulated other researchers to find other ways to solve the same problem.

K. Claessen and J. Hughes. QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs. In International Conference on Functional Programming, pages 268--279. ACM, 2000. See also www.cs.chalmers.se/~rjmh/QuickCheck.
18 citations
3.6 citations per year

The first of a series of papers Koen and I wrote on our random testing tool, QuickCheck. QuickCheck allows the programmer to specify properties of the code under test, then generates a large number of random test cases on which to check them. The property language is both concise and flexible, giving the programmer control over the distribution of test data, the ability to collect statistics, and the ability to generate test cases satisfying complex invariants. Because QuickCheck enables specifications to be used directly for testing, it has attracted a lot of attention, both inside and outside the functional programming community, from people who like to reason formally about their code.
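
A property in this style looks like the following (my example, not one from the paper): it states that reversing a list twice gives back the original list, and quickCheck tests it on a hundred random lists.

    import Test.QuickCheck

    -- The property: reversing twice is the identity on lists.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice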

John Hughes. A Distributed Garbage Collection Algorithm. In Jean-Pierre Jouannaud, editor, Proc. of ACM Conference on Functional Programming Languages and Computer Architecture, pages 256--272, 1985. Lecture Notes in Computer Science, Vol. 201.
66 citations
3.3 citations per year

Garbage collection algorithms work by following pointers from live program variables: any data structure which can be reached by following any chain of pointers from a live variable may potentially be accessed by the program in the future; data structures which cannot be reached in this way can never be used again, and the storage they occupy can be recycled and used for other purposes. Garbage collection is more difficult in a distributed system, if pointers are allowed to point at data-structures on other computing nodes, because following pointer chains now requires a distributed algorithm. Moreover, data structures cannot be deleted until all pointer chains have been followed: until then, it could be that following one more pointer will suddenly reveal that the data structure in question is accessible, and thus still needed. This seems to require that all the computing nodes in a distributed system agree that following pointer chains is complete.

I was the first to link this problem to the problem of distributed termination detection---how the nodes of a distributed system can discover that the system as a whole has completed some task. This is complicated because a single node still working on the task might start up all of the others again, by spreading part of its work to them. Nevertheless, there are good algorithms for establishing when distributed termination has occurred. My idea was to use one of these algorithms for garbage collection. Imagine that each computing node, every time it touches a data structure, labels it "needed at time t", where t is the current time. These time-stamps then propagate along pointer chains, so that eventually every data structure which was accessible at time t is labelled with such a time-stamp. By using a distributed termination algorithm to detect that propagation of time-stamp t has terminated, we can tell when data with older time-stamps can be recycled. I showed how to do this, in such a way that the flow of time-stamps was accomplished by ordinary local garbage collections, and an adapted distributed termination algorithm detected termination for all time-stamps at the same time.

Linking distributed termination to garbage collection was a fruitful idea, which many others have adopted since.

R.J.M. Hughes, The design and implementation of programming languages, DPhil thesis, Oxford University Computing Laboratory, July 1983.
69 citations
3.1 citations per year
R.J.M. Hughes, Super-combinators: A new implementation method for applicative languages, ACM Symposium on Lisp and Functional Programming, Pittsburgh, Pennsylvania, ACM, August 1982, pp. 1--10.
50 citations
2.2 citations per year

My doctoral thesis, and my first published paper. In the early days of lazy functional programming, I was inspired by an implementation technique due to David Turner, which led to "self-optimising" code, automatically lifting loop-invariant expressions out of loops, for example. Turner's technique was based on compiling to a fixed set of "combinators", tiny functions which were the "instructions" of his abstract machine. But compiling to such tiny functions involved a large overhead, so I showed that the same self-optimising properties could be obtained by compiling to much larger functions, the super-combinators of my paper title, tailored for each program being compiled. This was an idea whose time had come: I wasn't the only one to suggest it. Nevertheless, super-combinators are now used in today's Haskell compilers, and the name I coined for the self-optimising property, full laziness, has become a standard term in the field.
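
A small example (in Haskell rather than the languages of the time) of the self-optimising behaviour that full laziness provides: in the function below, the expensive expression depends only on n, not on x, so a fully lazy implementation computes it at most once per application of scale n, rather than once per list element.

    -- expensive n is free in the lambda, so full laziness lifts it out and
    -- shares its value across all the elements of the list.
    scale :: Int -> [Int] -> [Int]
    scale n = map (\x -> x + expensive n)
      where
        expensive m = sum [1 .. m * 1000]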

Apart from this, my doctoral thesis contained a second influential piece of work: I was the first to observe the phenomenon of space leaks caused by lazy evaluation. Under lazy evaluation, a value may be stored in either evaluated or unevaluated form, and the sizes of these two can be dramatically different. For example, an expression computing the length of a very long list contains a pointer to the list, which will cause the entire list to be live data until the expression is evaluated---at which point, the list may be deleted by the garbage collector, and the call of length replaced by an integer. In this case, evaluating the length of the list earlier could save a great deal of memory. In other circumstances, a small unevaluated expression may generate a large structure when it is evaluated, in which case delaying evaluation as long as possible will save memory. I observed that effects like these could cause programs to use very much more memory than a programmer would expect---to "leak space". Space leaks bedevilled lazy functional programming for many years, until we learned how to build profiling tools to diagnose them, and programming methods to fix them. One popular method is the function "seq", which forces early evaluation: I suggested this in my thesis, and it is used in Haskell to this day. I also showed that seq alone was not enough to fix all space leaks, and proposed a form of parallel evaluation to fix some of the others. My proposed solution is not used today, but the problem I observed has been tackled by many others, culminating in Sparud's more-or-less complete solution.
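
A sketch of the kind of fix involved (my example, not one from the thesis): without the seq, the accumulator below grows into a long chain of suspended additions before anything is ever added up; forcing it at each step keeps the sum in constant space.

    -- Summing a list with an accumulator, forcing the accumulator at each step.
    sumList :: [Integer] -> Integer
    sumList = go 0
      where
        go acc []     = acc
        go acc (x:xs) = let acc' = acc + x
                        in acc' `seq` go acc' xs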