Standard ML language

From Scholarpedia
Mads Tofte (2009), Scholarpedia, 4(2):7515. doi:10.4249/scholarpedia.7515 revision #152339 [link to/cite this article]

Curator: Mads Tofte

The programming language Standard ML, also known as SML, is inspired by certain fundamental concepts of Computer Science, making them directly available to the programmer. In particular:

  • Trees and other recursive datatypes may be declared and used without mention of pointers;
  • Functions are values in Standard ML; functions can take functions as arguments and return functions as results;
  • Type polymorphism (Milner, 1978) makes it possible to declare functions whose type depends on the context. For example, the same function may be used for reversing a list of integers and a list of strings.
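
These points can be illustrated in a few lines. The sketch below is not part of this article's running example; it uses only the standard top-level functions rev and map: the same polymorphic function rev reverses a list of integers and a list of strings, and map takes a function as its argument.

 val xs = rev [1, 2, 3]                    (* [3, 2, 1] : int list *)
 val ys = rev ["a", "b", "c"]              (* ["c", "b", "a"] : string list *)
 val zs = map (fn x => x * x) [1, 2, 3]    (* [1, 4, 9] : int list; map is applied to a function *)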

Standard ML emphasises safety in programming. Some aspects of safety are ensured by the soundness of the Standard ML type discipline. Other aspects of safety are ensured by the fact that Standard ML implementations use automatic memory management, which precludes, for example, premature de-allocation of memory.

Standard ML has references, arrays, input-output, a modules system, libraries and highly sophisticated compilers, all in order to give it the convenience and efficiency required for large-scale applications.

Overview of Standard ML

Standard ML is a declarative language; a Standard ML program consists of a sequence of declarations of types, values (including functions) and exceptions, possibly embedded in modules. A basic module is called a structure in Standard ML, module interfaces are called signatures, and parameterised modules are called functors.

Standard ML is a statically typed language. The execution of a Standard ML program is factored into elaboration at compile-time and evaluation at run-time. Elaboration is concerned with, among other things, the declaration of types, the checking of type rules and the matching of structures against signatures. Evaluation is concerned with input-output, computation of values, binding of values to identifiers and the raising and handling of exceptions.

Here is an example of a complete Standard ML program:

 signature TREE=
 sig
   datatype 'a tree = Lf | Node of 'a * 'a tree * 'a tree
   val size: 'a tree -> int
 end
 
 structure T:> TREE =
 struct
   datatype 'a tree = Lf | Node of 'a * 'a tree * 'a tree
   fun size Lf = 0
     | size(Node(_,t1, t2)) = 1 + size t1 + size t2
 end
 
 val n = let val t = T.Node(7, T.Node(1, T.Lf, T.Lf), 
                               T.Node(9, T.Lf, T.Lf))
         in T.size t
         end;

The signature TREE specifies a recursive datatype 'a tree of binary trees with node values of type 'a. Variables that start with a prime (') are type variables, i.e., range over types. An 'a tree is either a leaf Lf or it takes the form Node\((v, t_1, t_2)\ ,\) where \(v\) is a value of type 'a and the left and right sub-trees \(t_1\) and \(t_2\) both are of type 'a tree. Node is an example of a value constructor. Next, signature TREE specifies a function, size; the specification requires size to be applicable to all binary trees, irrespective of the type of the node values.

The declaration of the structure T starts with a signature constraint, T:> TREE. This constraint indicates that what comes next is an implementation of the signature TREE. The structure is delimited by the keywords struct and end. The structure first declares the datatype that was already specified in the signature. It then declares a function, size, which, when applied to a binary tree, counts the number of nodes of the tree. Note that the vertical bar (|) separates the cases of value construction in the datatype, and also separates the cases of value analysis in a function declaration. Further, note that size is a recursive function, i.e., it calls itself. Written out in full, the type of function size is \(\forall\)'a. 'a tree\(\to\)int. As a consequence, function size may be applied in one context to an int tree, say, and in another context to a string tree.
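
For instance (a small sketch that is not part of the program above), the same function T.size may be applied to trees with different node types:

 val numbers = T.Node(7, T.Node(1, T.Lf, T.Lf), T.Lf)
 val words   = T.Node("cat", T.Node("dog", T.Lf, T.Lf), T.Lf)
 val a = T.size numbers    (* size used at type int tree;    a = 2 *)
 val b = T.size words      (* size used at type string tree; b = 2 *)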

Finally, the program contains a value declaration (val n = \(\ldots\)) which uses the components of the structure T via long identifiers, e.g., T.Node, T.Lf and T.size. The (run-time) evaluation of the expression T.Node(7, T.Node(1, T.Lf, T.Lf), T.Node(9, T.Lf, T.Lf)) results in a tree value, \(t\ ,\) of type int tree, which we may draw thus, omitting the leaves:

                    7
                  /  \
                 1    9

The evaluation of T.size t subsequently traverses \(t\ .\) The net outcome of the declaration of n is that the value identifier n is bound to the integer value 3. The memory which was temporarily allocated for holding the binary tree \(t\) is automatically freed.

What is Standard ML used for?

Teaching

Standard ML is or has been taught to undergraduate and graduate Computer Science students at the University of Edinburgh, the University of Cambridge, Carnegie Mellon University, Princeton University, and the University of Copenhagen, among others.

Research

Standard ML is used as a tool in research on theorem proving, compiler technology and program analysis. For example, the HOL theorem prover from Cambridge University is written in Standard ML.

Other uses

The IT University of Copenhagen has around 100,000 lines of SML in web-based self-service systems for students and staff, including the personnel roster, a course evaluation system and work-flow systems for student project administration.

Language definition

The Definition of Standard ML by Milner, Tofte, Harper and MacQueen (1997) defines the syntax and semantics of Standard ML using operational semantics. In operational semantics, the meaning of language constructs is defined in terms of inference rules. In the case of Standard ML, there is one set of inference rules describing the static semantics of the language, and a separate set of inference rules describing the dynamic semantics of the language. The inference rules of the static semantics are called elaboration rules. Here is an example of an elaboration rule: \[ \frac{C\vdash {\it exp}\Rightarrow\tau'\to\tau\qquad C\vdash {\it atexp} \Rightarrow\tau'}{C\vdash {\it exp}\;{\it atexp} \Rightarrow \tau} \]

This rule is read as follows: if expression \({\it exp}\) elaborates to function type \(\tau'\to\tau\) in the context \(C\) and atomic expression \({\it atexp}\) elaborates to type \(\tau'\) in \(C\) then the function application expression \({\it exp}\;{\it atexp}\) elaborates to type \(\tau\) in \(C\ .\)
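
As a concrete illustration (a small sketch, not part of the definition itself), consider the declarations below. In a context where double has the function type int -> int, i.e. \(\tau'\to\tau\) with \(\tau'=\tau=\) int, and the atomic expression 21 has type int, the rule gives the application double 21 the type int.

 fun double x = 2 * x    (* double : int -> int *)
 val y = double 21       (* by the application rule above, double 21 : int *)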

The inference rules of the dynamic semantics are called evaluation rules. Here is an example of an evaluation rule, the evaluation rule for local declarations: \[ \frac{E\vdash {\it dec}\Rightarrow E'\qquad E+E'\vdash {\it exp} \Rightarrow v}{E\vdash \mathtt{let}\;{\it dec}\;\mathtt{in}\;{\it exp}\;\mathtt{end} \Rightarrow v} \]

Here \(E\) and \(E'\) are environments, which bind values to identifiers. The rule is read thus: given environment \(E\ ,\) if the declaration \({\it dec}\) evaluates to environment \(E'\) and if in environment \(E+E'\) the expression \({\it exp}\) evaluates to value \(v\ ,\) then the expression \(\mathtt{let}\;{\it dec}\;\mathtt{in}\;{\it exp}\;\mathtt{end}\) evaluates to \(v\ .\) Here \(E+E'\) is the environment which maps an identifier \({\it id}\) to \(E'({\it id})\ ,\) if \({\it id}\) is in the domain of \(E'\ ,\) and to \(E({\it id})\ ,\) otherwise. So the \(+\) operator on environments reflects a scope rule of the language, namely that a local declaration of an identifier overrides all previous declarations of that identifier.
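
A small sketch of this scope rule in Standard ML source code: in the declarations below, the local x overrides the outer x while the body of the let expression is evaluated, so z is bound to 11; the outer x remains bound to 1.

 val x = 1
 val z = let val x = 10 in x + 1 end    (* the local x overrides the outer x; z = 11 *)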

Compiler technology

Tens of man-years of research and development have gone into developing mature compilation technology for Standard ML. The resulting compilers include Standard ML of New Jersey, Moscow ML, MLWorks, SML.NET, SML Server and the ML Kit with Regions.

Type checking

A Standard ML compiler contains a type checker, which checks whether the source program can be elaborated using the elaboration rules. The static semantics only says which inferences are legal; it is not an algorithm for deciding whether a given source program complies with the inference rules. The type checker, however, is such an algorithm. It employs type unification to infer types from the source program (Damas and Milner, 1982).

Continuing our previous example, consider the subexpression

 Node(1,Lf, Lf)

where, for brevity, we have shortened T.Node to Node etc. The type checker infers this type for the operator:

 Node:    'a * 'a tree * 'a tree  -> 'a tree

and this type for the argument triple:

 (1, Lf, Lf):  int * 'b tree * 'c tree

Unification of the two types 'a * 'a tree * 'a tree and int * 'b tree * 'c tree results in the substitution which maps 'a, 'b and 'c to int, so the type checker concludes that Node(1,Lf, Lf) has type int tree.
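
As a further sketch (again shortening T.Node to Node and T.Lf to Lf), the same unification machinery leaves a type variable in place when the context does not constrain it, and rejects an application whose types cannot be unified:

 fun single v = Node(v, Lf, Lf)    (* inferred type: single : 'a -> 'a tree *)
 val ok = single 1                 (* ok : int tree *)
 (* val bad = Node(1, Node("x", Lf, Lf), Lf)   rejected: int and string do not unify *)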

The type checker always terminates, either having inferred that the program complies with the static semantics or having produced a type error.

Code generation

A Standard ML compiler also generates code which, when executed, will give the result prescribed by the dynamic semantics of the language definition. Some compilers generate byte code, others native code. Most Standard ML compilers perform extensive program analysis and program transformations in order to achieve performance that can compete with what is obtained in languages like C.

All Standard ML compilers can compile source programs into stand-alone programs. The compiled programs can be invoked from a command line or as a web service.
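
As a sketch, the following complete program (the file name greet.sml is only illustrative) can be compiled to a stand-alone executable; it reads its first command-line argument via the Basis Library structure CommandLine.

 (* greet.sml: a minimal stand-alone program *)
 val () =
   case CommandLine.arguments () of
       []        => print "Hello, world!\n"
     | name :: _ => print ("Hello, " ^ name ^ "!\n")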

Run-time memory management

All Standard ML implementations provide for automatic recycling of memory. Standard ML of New Jersey and Moscow ML use generational garbage collection. The ML Kit with Regions and SML Server instead employ a static analysis, called region inference (Tofte and Talpin, 1994), which predicts allocation at compile-time and inserts explicit de-allocation of memory at safe points in the generated code.

Separate compilation, libraries and tools

All Standard ML Compilers allow for separate compilation, to deal with large programs.

The Standard ML Basis Library (Gansner and Reppy, 2004) consists of a comprehensive collection of Standard ML modules. Some of these give the programmer access to efficient implementations of text-book data structures and algorithms. Other modules provide support for advanced input-output. Still others give access to the operating system level, so that one can do systems programming in Standard ML.
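
For instance, the following small sketch uses the Basis Library structures List, String, Int and TextIO to print the squares of the numbers 0 to 4:

 val squares = List.tabulate (5, fn i => i * i)    (* [0, 1, 4, 9, 16] *)
 val () = TextIO.output (TextIO.stdOut,
            String.concatWith " " (List.map Int.toString squares) ^ "\n")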

Tools include generators of lexical analysers and parsers.

History of Standard ML

"ML" stands for meta language. ML, the predecessor of Standard ML, was devised by Robin Milner and his co-workers at Edinburgh University in the 1970’s, as part of their theorem prover LCF (Milner, Gordon and Wadsworth, 1979). Other early influences were the applicative languages already in use in Artificial Intelligence, principally LISP, ISWIM, POP2 and HOPE. During the 1980’s and first half of the 1990’s, ML inspired much programming language research internationally. MacQueen, extending ideas from HOPE and CLEAR, proposed the Standard ML modules system (MacQueen, 1984). Other major advances were mature compilers (Appel and MacQueen, 1991), a library (Gansner and Reppy, 2004), type-theoretic insight (Harper and Lillibridge, 1994) and a formal definition of the language (Milner, Tofte, Harper and MacQueen, 1997). Further information on the history of Standard ML may be found in the language definition (Milner et al., 1997).

Related languages

Standard ML is a member of a family of programming languages that originate from ML. Other members of that family are CAML, Caml Light and OCaml from INRIA, France, as well as F# from Microsoft Research UK. Close cousins are the purely functional programming languages, such as Miranda and Haskell.

References

  • Milner, R (1978). A theory of type polymorphism in programming. Journal of Computer and System Sciences 17(3): 348-375.
  • Milner, R; Gordon, M and Wadsworth, C (1979). Edinburgh LCF: A Mechanized Logic of Computation. Lecture Notes in Computer Science, Volume 78. Springer-Verlag, Berlin.
  • Damas, L and Milner, R (1982). Principal type schemes for functional programs. Proceedings of the 9th Annual ACM Symposium on Principles of Programming Languages: 207-212.
  • MacQueen, D B (1984). Modules for Standard ML. Conference Record of the 1984 ACM Symposium on LISP and Functional Programming: 198-207.
  • Appel, A W and MacQueen, D B (1991). Standard ML of New Jersey. In Proceedings of Programming Language Implementation and Logic Programming, Lecture Notes in Computer Science 528: 1-26. doi:10.1007/3-540-54444-5_83.
  • Harper, R and Lillibridge, M (1994). A type-theoretic approach to higher-order modules with sharing. In Conference Record of POPL 1994: 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages: 123-137.
  • Tofte, M and Talpin, J-P (1994). Implementation of the Typed Call-by-Value \(\lambda\)-calculus using a Stack of Regions. In Conference Record of POPL 1994: 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages: 188-201.
  • Milner, R; Tofte, M; Harper, R and MacQueen, D (1997). The Definition of Standard ML (Revised). MIT Press, Cambridge, MA.
  • Gansner, E R and Reppy, J H (2004). The Standard ML Basis Library. Cambridge University Press, Cambridge.

Further reading

For tutorials, books and research papers on Standard ML, please refer to www.smlnj.org.

External links

The following Standard ML systems are available:

See also

C (programming language), CAML (programming language), Objective CAML (programming language), Haskell (programming language)
