Computational paradigms

Goal

This note reviews the main notions of computation and the notations that have arisen to express them.

Imperative programming

Knox: “Forgive me for asking a crass and naive question –
but what is the point of devising a machine that cannot be built
in order to prove that there are certain mathematical statements
that cannot be proved?
Is there any practical value in all this?”

Turing: “The possibilities are boundless.”

Breaking the Code

In the Von Neumann architecture, both programs and data are stored in a memory. Every computational step is a command that imperatively maps one state of this memory to another. A computation is a series of commands. It may yield an error state, a final state, or it may diverge (or run for a longer time than we care to wait for). Notationally, the syntactic units are commands, these commands specify state changes, and the corresponding programming languages are said to be imperative.

For example, imagine a program that works over two memory locations. The first one (call it x) is initialized with 5, and the second (call it a) with 1. The program repeatedly checks whether the content of x is 0, and stops if it is so. Otherwise, it multiplies the content of x by the content of a and stores the result at location a, it decrements the content of x, and it iterates. Let us follow the computation step by step:

  • Initially, the content of x is 5 and the content of a is 1.
  • Since the content of x is 5, it is not 0. We therefore multiply 5 by the content of a, which is 1. The result of this multiplication is 5, which we store at location a. We then decrement the content of x (i.e., we subtract 1 from the content of x and we store the result at location x), and we iterate.
  • So at this point of the computation, the content of x is 4 and the content of a is 5.
  • Since the content of x is 4, it is not 0. We therefore multiply 4 by the content of a, which is 5. The result of this multiplication is 20, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 3 and the content of a is 20.
  • Since the content of x is 3, it is not 0. We therefore multiply 3 by the content of a, which is 20. The result of this multiplication is 60, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 2 and the content of a is 60.
  • Since the content of x is 2, it is not 0. We therefore multiply 2 by the content of a, which is 60. The result of this multiplication is 120, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 1 and the content of a is 120.
  • Since the content of x is 1, it is not 0. We therefore multiply 1 by the content of a, which is 120. The result of this multiplication is 120, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 0 and thus we stop the computation.

The result of this program is at location a: it is the factorial of 5, i.e., 5 * 4 * 3 * 2 * 1 * 1, i.e., 120. Semantically, the computation was a series of state changes over the memory, and syntactically, the program was a list of statements (a.k.a. “commands”) triggering each of these state changes.
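The walkthrough above can be sketched as a loop over two mutable variables. Here is a minimal illustration in Python (the names x and a follow the text):

```python
# Imperative factorial: two memory locations, x and a, updated in place.
x = 5  # the counter, initialized with 5
a = 1  # the accumulator, initialized with 1

while x != 0:    # check whether the content of x is 0; stop if so
    a = x * a    # multiply the content of x by the content of a, store at a
    x = x - 1    # decrement the content of x, and iterate

# The result of this program is at location a.
print(a)
```

Each iteration of the loop body is one of the state changes traced in the bullet points above.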

Harald: This is not a pipe, right?
Loki (piping in): I wouldn’t know, I don’t smoke.

Note, in the description above, the distinction between a location and the content of the memory at that location. It is the same distinction as the one between your home and the people living there.

Functional programming

The world of functional programming is, in principle, stateless: one expresses how to map some input to some output with a combination of function calls. More about that in the next lecture notes.

Notationally, the syntactic units of functional programs are expressions. Interpreting an expression, if that process terminates and does not raise any error, yields a value. For this reason, an interpreter for functional programs is often called an evaluator.
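For comparison with the imperative version, here is factorial expressed functionally, as an illustration in Python: the body of the function is a single expression, no memory location is overwritten, and evaluating the expression yields a value.

```python
# Functional factorial: a mapping from input to output, with no mutation.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)
```

Evaluating factorial(5) yields 120, as a composition of function calls rather than as a series of state changes.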

Logic programming

The world of logic programming is one of relations. Having specified relations between things, one then issues a query to check whether some things are related. This query can then be

  • invalidated: the things cannot be shown to be related,
  • validated: the things can be shown to be related in at least one way, or
  • left unresolved: the attempt may diverge (or run for a longer time than we care to wait for).

For example, let us define a relation, Factorial, between two numbers, x and a so that the judgment (Factorial x a) is satisfied whenever a is the factorial of x. This judgment is defined with the following proof rules:

FACTORIAL_ZERO ---------------
               (Factorial 0 1)

               (Factorial x a)
FACTORIAL_SUCC -----------------   where x' = x + 1 and a' = x' * a
               (Factorial x' a')

In words:

  • The first rule is unconditional. It says that 0 and 1 are in the factorial relation.
  • The second rule is conditional. It says that whenever x and a are in the factorial relation, then [the result of evaluating] x + 1 and [the result of evaluating] (x + 1) * a are also in the factorial relation.

The programmer may then issue the query as to whether 5 and 120 are in the factorial relation, i.e., whether 120 is the factorial of 5:

(Factorial 5 120)

Using the two proof rules, the run-time system internally constructs the following proof tree, instantiating x, x', a and a' as it goes:

FACTORIAL_ZERO ---------------
               (Factorial 0 1)
FACTORIAL_SUCC ---------------
               (Factorial 1 1)
FACTORIAL_SUCC ---------------
               (Factorial 2 2)
FACTORIAL_SUCC ---------------
               (Factorial 3 6)
FACTORIAL_SUCC ---------------
               (Factorial 4 24)
FACTORIAL_SUCC -----------------
               (Factorial 5 120)

Having successfully constructed this tree with the two rules as building blocks, the run-time system returns the positive answer that 5 and 120 are in the factorial relation.

Here are three other examples of queries.

  • Given a query as to whether 0 and 0 are in the factorial relation:

    (Factorial 0 0)
    

    the run-time system would fail to construct a proof tree, and would answer that the query is not satisfied.

  • Given a query as to whether there exists an a such that 5 and a are in the factorial relation:

    (Factorial 5 a)
    

    the run-time system would successfully construct a proof tree, and would answer that the query is satisfied whenever a is 120.

  • And given a query as to whether there exists an n such that n and 120 are in the factorial relation:

    (Factorial n 120)
    

    the run-time system would successfully construct a proof tree, and would answer that the query is satisfied whenever n is 5.
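A real logic-programming system answers all three kinds of queries with the same two rules. To get a rough feel for what the run-time system does, here is a toy proof-search sketch in Python (the names holds, solve_a, and solve_x, and the search bound, are made up for this illustration):

```python
# A toy "run-time system" for the Factorial relation.
# FACTORIAL_ZERO: (Factorial 0 1) holds.
# FACTORIAL_SUCC: from (Factorial x a), infer (Factorial x+1 (x+1)*a).

def holds(x, a):
    """Can a proof tree for (Factorial x a) be constructed?"""
    if x == 0:
        return a == 1                              # only FACTORIAL_ZERO applies
    return a % x == 0 and holds(x - 1, a // x)     # peel off one FACTORIAL_SUCC

def solve_a(x):
    """Query (Factorial x a): find the a that satisfies the query."""
    a = 1
    for i in range(1, x + 1):                      # build the proof tree bottom-up
        a = i * a
    return a

def solve_x(a, bound=100):
    """Query (Factorial n a): search for an n, up to a bound."""
    for n in range(bound):
        if holds(n, a):
            return n
    return None                                    # no proof tree found
```

For example, holds(5, 120) succeeds by unstacking the six rule applications of the proof tree above, holds(0, 0) fails, solve_a(5) returns 120, and solve_x(120) returns 5.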

Harald: So the exercises about I- and T-diagrams, last week...
Brynja: Yes, they were actually about logic programming.
Alfrothul: So each diagram is like an inference rule...
Harald: ...and putting them together is like constructing a proof tree.
Mimer: Yup. And if you can’t construct such a tree, like in Exercise 7, it means that there are no solutions.
Alfrothul: And if we can construct several proof trees?
Mimer: It means that there are several solutions, like in Exercise 1.
Halcyon: By the way, constructing these proof trees felt like playing with LEGO bricks.
Mimer: Playing is the right word, since the name LEGO is a contraction of “leg godt”, which is Danish for “play well”.
Loki: Meet Mimer – always trying to teach us something.
Mimer: Well, learn now, know later.
Harald (whispering to Alfrothul): I think Mimer is referring to the computing proverb “Think first, program later”.
Alfrothul (whispering back): And to the carpentry proverb “Measure twice, cut once”.
Loki (interjecting): Well, he has a head for that.

Rewriting

In the world of rewriting, a program is a term. After it is initially written, this term is rewritten and rewritten, based on given rewriting rules and according to some strategy, until it can no longer be rewritten, at which point it is the result of the computation. For example, multiplication is specified with a rewriting rule saying that whenever each of the terms x and y is a number (for example 5 and 11), the term x * y should be rewritten into the product of these numbers (here: 55). This particular instance of the rewriting rule is written as:

5 * 11 -> 55

Here are the rewriting rules for the factorial program:

0! -> 1
n! -> n * n'!    -- when n = n' + 1

So for example, a term such as 5! is rewritten step by step as follows:

5!
-> 5 * 4!                          (second factorial rule, where n is 5 and n' is 4)
-> 5 * (4 * 3!)                    (second factorial rule, where n is 4 and n' is 3)
-> 5 * (4 * (3 * 2!))              (second factorial rule)
-> 5 * (4 * (3 * (2 * 1!)))        (second factorial rule)
-> 5 * (4 * (3 * (2 * (1 * 0!))))  (second factorial rule)
-> 5 * (4 * (3 * (2 * (1 * 1))))   (first factorial rule)
-> 5 * (4 * (3 * (2 * 1)))         (multiplication rule)
-> 5 * (4 * (3 * 2))               (multiplication rule)
-> 5 * (4 * 6)                     (multiplication rule)
-> 5 * 24                          (multiplication rule)
-> 120                             (multiplication rule)

The term 120 can no longer be rewritten: it is the result of the computation.
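This rewriting process can be mimicked with a small interpreter. In the following Python sketch (the term representation and the function names are made up for this illustration), a term is either a number, ("!", t) for t!, or ("*", t1, t2) for t1 * t2; step looks for a place where a rule applies, and normalize iterates step until no rule applies:

```python
# A toy rewriting system for the factorial program.

def step(t):
    """Try to apply one rewriting rule in t; return (True, t') or (False, t)."""
    if isinstance(t, int):
        return False, t                          # numbers cannot be rewritten
    if t[0] == "!":
        n = t[1]
        if n == 0:
            return True, 1                       # 0! -> 1
        if isinstance(n, int):
            return True, ("*", n, ("!", n - 1))  # n! -> n * n'!  when n = n' + 1
        changed, n2 = step(n)                    # otherwise rewrite inside n
        return changed, ("!", n2)
    a, b = t[1], t[2]                            # t is a product
    if isinstance(a, int) and isinstance(b, int):
        return True, a * b                       # e.g., 5 * 11 -> 55
    changed, a2 = step(a)                        # otherwise rewrite inside a...
    if changed:
        return True, ("*", a2, b)
    changed, b2 = step(b)                        # ...or inside b
    return changed, ("*", a, b2)

def normalize(t):
    """Rewrite t until no rule applies; the final term is the result."""
    changed = True
    while changed:
        changed, t = step(t)
    return t
```

With this strategy, normalize(("!", 5)) goes through the same intermediate terms as the trace above and yields 120.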

Rewriting is an active topic of research and has obvious applications, e.g., for the theory and practice of macros.

Other computational paradigms

Purely imperative programming, purely functional programming, pure logic programming, and rewriting are, in some sense, extreme computational paradigms. Common programming practice is more mixed: for example, one can both express pure functions and write macros in C, to say nothing of object-oriented programming. (Later in the course, we shall see how to represent objects in OCaml.)

Version

Created [21 Jan 2019]