Computational paradigms

Goal

This note reviews the main notions of computation and how each gives rise to a corresponding notation for expressing computations.

Imperative programming

Knox: “Forgive me for asking a crass and naive question –
but what is the point of devising a machine that cannot be built
in order to prove that there are certain mathematical statements
that cannot be proved?
Is there any practical value in all this?”

Turing: The possibilities are boundless.

Breaking the Code

In the von Neumann architecture, both programs and data are stored in the same memory. Every computational step is a command that imperatively maps one state of this memory to another. A computation is a series of such commands. It may yield an error state, a final state, or it may diverge (or run for a longer time than we care to wait for). Notationally, the syntactic units are commands, these commands specify state changes, and the corresponding programming languages are said to be imperative.

In Week 01 (Chapter Background and general introduction), we saw how a program is the recipe a computer follows to process data. To quote:

A cooking recipe is a notation that conveys how to cook something. It specifies data (the ingredients, e.g., eggs, butter, salt, pepper, thyme), resources and tools (e.g., a stove and a pan), and an algorithm (a method to operate on the data, e.g., to beat the eggs towards cooking an omelet). To make a dish, a cook can then operate over the ingredients according to the algorithm.

In cookbooks, the algorithmic part of recipes is often stated in an imperative fashion: a series of commands issued to the cook. For example, here is a recipe for water pudding, due to the stand-up philosopher Pierre Dac, and translated from French:

  • Take a liter of ordinary water and carefully boil it.
  • When it has boiled to your satisfaction, take a second liter of water and heat it up in a bain-marie.
  • That being done, add a liter of fresh water to the warm water, drop by drop, to smooth it out.
  • Let it thicken slightly at low heat.
  • Meanwhile, whip up one and a half liters of water and incorporate it in your first preparation.
  • If your sauce feels a bit firm, thin it with some lightly warmed up water so that it doesn’t stick.
  • Put it in the oven at high temperature for 40 minutes.
  • Turn it out, and to make it clearer, pour it into a liter of water.

(The term “stand-up philosopher” is due to Mel Brooks.)

For a more computational example, imagine a program that works over two memory locations. The first one (call it x) is initialized with 5, and the second (call it a) with 1. The program repeatedly checks whether the content of x is 0, and stops if it is so, with a containing the result. Otherwise, the program multiplies the content of x by the content of a and stores the result at location a, it decrements the content of x, and it iterates. Let us follow the computation step by step:

  • Initially, the content of x is 5 and the content of a is 1.
  • Since the content of x is 5, it is not 0. We therefore multiply 5 by the content of a, which is 1. The result of this multiplication is 5, which we store at location a. We then decrement the content of x (i.e., we subtract 1 from the content of x and we store the result at location x), and we iterate.
  • So at this point of the computation, the content of x is 4 and the content of a is 5.
  • Since the content of x is 4, it is not 0. We therefore multiply 4 by the content of a, which is 5. The result of this multiplication is 20, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 3 and the content of a is 20.
  • Since the content of x is 3, it is not 0. We therefore multiply 3 by the content of a, which is 20. The result of this multiplication is 60, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 2 and the content of a is 60.
  • Since the content of x is 2, it is not 0. We therefore multiply 2 by the content of a, which is 60. The result of this multiplication is 120, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 1 and the content of a is 120.
  • Since the content of x is 1, it is not 0. We therefore multiply 1 by the content of a, which is 120. The result of this multiplication is 120, which we store at location a. We then decrement the content of x, and we iterate.
  • So at this point of the computation, the content of x is 0 and thus we stop the computation with a containing 120, which is the result.

The result of this program is at location a: it is the factorial of 5, i.e., 5 * 4 * 3 * 2 * 1 * 1, i.e., 120. Semantically, the computation was a series of state changes over the memory, and syntactically, the program was a list of statements (a.k.a. “commands”) triggering each of these state changes.

(Later in the course (namely in Week 12), we shall see how to express this very imperative program in OCaml.)
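
As a small preview, here is one way such a program might be sketched in OCaml, modeling the two memory locations with mutable references. This is only an illustrative sketch, not necessarily the formulation used in Week 12.

(* An illustrative sketch: the two memory locations are modeled with
   mutable references, and the loop mirrors the step-by-step trace above. *)
let factorial_imperative () =
  let x = ref 5 and a = ref 1 in
  while !x <> 0 do
    a := !x * !a;   (* multiply the content of x by the content of a, store it at a *)
    x := !x - 1     (* decrement the content of x *)
  done;
  !a                (* the content of a is the result, here 120 *)

let () = assert (factorial_imperative () = 120)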

Anton: This is not a pipe, right?
Loki (piping in): I wouldn’t know, I don’t smoke.
Magritte: I did.
Anton: So this was a pipe?
Mimer (hurriedly): Mr. Magritte, thanks for stopping by!

Note, in the description above, the distinction between a location and the content of the memory at that location. It is the same distinction as the one between one’s home and the people living there.
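
In OCaml (to anticipate a bit; again, this is only an illustrative sketch), this is the distinction between a reference and the value it currently holds:

let () =
  let r = ref 10 in   (* r is a location whose initial content is 10 *)
  r := !r + 1;        (* overwrite the content of r with 11; r itself is unchanged *)
  assert (!r = 11)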

Functional programming

The world of functional programming is, in principle, stateless: one expresses how to map some input to some output with a combination of function calls. More about that in the next lecture notes.

Notationally, the syntactic units of functional programs are expressions. Interpreting an expression, if that process terminates and does not raise any error, yields a value. For this reason, an interpreter for functional programs is often called an evaluator.
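
For example, here is a minimal sketch of the factorial computation above as a pure OCaml function: the program is an expression that is evaluated to a value, without any memory location being overwritten.

(* A minimal sketch: factorial as a pure function; evaluating
   (factorial 5) yields the value 120 without any state change. *)
let rec factorial n =
  if n = 0 then 1 else n * factorial (n - 1)

let () = assert (factorial 5 = 120)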

Logic programming

The world of logic programming is one of relations. Having specified relations between things, one then issues a query to check whether some things are related. This query can then be

  • invalidated: the things cannot be shown to be related,
  • validated: the things can be shown to be related in at least one way, or
  • left undecided: the attempt to answer the query diverges (or runs for a longer time than we care to wait for).

For example, let us define a relation, Factorial, between two numbers, x and a, so that the judgment (Factorial x a) is satisfied whenever a is the factorial of x. This judgment is defined with the following proof rules:

FACTORIAL_ZERO ---------------
               (Factorial 0 1)

               (Factorial x a)
FACTORIAL_SUCC -----------------   where x' = x + 1 and a' = x' * a
               (Factorial x' a')

In words:

  • The first rule is unconditional. It says that 0 and 1 are in the factorial relation.

  • The second rule is conditional. It says that whenever x and a are in the factorial relation, then

    • the result of evaluating x + 1 and
    • the result of evaluating (x + 1) * a

    are also in the factorial relation.

The programmer may then issue the query as to whether 5 and 120 are in the factorial relation, i.e., whether 120 is the factorial of 5:

(Factorial 5 120)

Using the two proof rules, the run-time system internally constructs the following proof tree, instantiating x, x', a and a' as it goes:

  • The goal (Factorial 5 120) is the conclusion of the conditional rule FACTORIAL_SUCC when x' denotes 5 and a' denotes 120. Then x denotes 4 and a denotes 24. So the proof tree begins as follows. It is an instance of FACTORIAL_SUCC where 4 replaces x, 24 replaces a, 5 replaces x', and 120 replaces a':

                   (Factorial 4 24)
    FACTORIAL_SUCC -----------------
                   (Factorial 5 120)
    

    In this depiction, (Factorial 4 24) is a new goal.

  • The goal (Factorial 4 24) is the conclusion of the conditional rule FACTORIAL_SUCC when x' denotes 4 and a' denotes 24. Then x denotes 3 and a denotes 6. So the proof tree continues as follows. This extension is an instance of FACTORIAL_SUCC where 3 replaces x, 6 replaces a, 4 replaces x', and 24 replaces a':

                   (Factorial 3 6)
    FACTORIAL_SUCC ----------------
                   (Factorial 4 24)
    FACTORIAL_SUCC -----------------
                   (Factorial 5 120)
    

    In this depiction, (Factorial 3 6) is a new goal.

  • The goal (Factorial 3 6) is the conclusion of the conditional rule FACTORIAL_SUCC when x' denotes 3 and a' denotes 6. Then x denotes 2 and a denotes 2. So the proof tree continues as follows. This extension is an instance of FACTORIAL_SUCC where 2 replaces x, 2 replaces a, 3 replaces x', and 6 replaces a':

                   (Factorial 2 2)
    FACTORIAL_SUCC ---------------
                   (Factorial 3 6)
    FACTORIAL_SUCC ----------------
                   (Factorial 4 24)
    FACTORIAL_SUCC -----------------
                   (Factorial 5 120)
    

    In this depiction, (Factorial 2 2) is a new goal.

  • The goal (Factorial 2 2) is the conclusion of the conditional rule FACTORIAL_SUCC when x' denotes 2 and a' denotes 2. Then x denotes 1 and a denotes 1. So the proof tree continues as follows. This extension is an instance of FACTORIAL_SUCC where 1 replaces x, 1 replaces a, 2 replaces x', and 2 replaces a':

                   (Factorial 1 1)
    FACTORIAL_SUCC ---------------
                   (Factorial 2 2)
    FACTORIAL_SUCC ---------------
                   (Factorial 3 6)
    FACTORIAL_SUCC ----------------
                   (Factorial 4 24)
    FACTORIAL_SUCC -----------------
                   (Factorial 5 120)
    

    In this depiction, (Factorial 1 1) is a new goal.

  • The goal (Factorial 1 1) is the conclusion of the conditional rule FACTORIAL_SUCC when x' denotes 1 and a' denotes 1. Then x denotes 0 and a denotes 1. So the proof tree continues as follows. This extension is an instance of FACTORIAL_SUCC where 0 replaces x, 1 replaces a, 1 replaces x', and 1 replaces a':

                   (Factorial 0 1)
    FACTORIAL_SUCC ---------------
                   (Factorial 1 1)
    FACTORIAL_SUCC ---------------
                   (Factorial 2 2)
    FACTORIAL_SUCC ---------------
                   (Factorial 3 6)
    FACTORIAL_SUCC ----------------
                   (Factorial 4 24)
    FACTORIAL_SUCC -----------------
                   (Factorial 5 120)
    

    In this depiction, (Factorial 0 1) is a new goal.

  • The goal (Factorial 0 1) is the conclusion of the unconditional rule FACTORIAL_ZERO. So the proof tree is completed as follows. This extension is an instance of FACTORIAL_ZERO:

    FACTORIAL_ZERO ---------------
                   (Factorial 0 1)
    FACTORIAL_SUCC ---------------
                   (Factorial 1 1)
    FACTORIAL_SUCC ---------------
                   (Factorial 2 2)
    FACTORIAL_SUCC ---------------
                   (Factorial 3 6)
    FACTORIAL_SUCC ----------------
                   (Factorial 4 24)
    FACTORIAL_SUCC -----------------
                   (Factorial 5 120)
    

    In this depiction, there is no new goal: the proof tree is complete.

Having successfully constructed this tree with the two rules as building blocks, the run-time system returns the positive answer that 5 and 120 are in the factorial relation.
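
To give a flavor of this process, here is a minimal sketch, in OCaml, of how a fully instantiated query (Factorial x a) might be checked (an assumption about one possible implementation, not the actual run-time system): FACTORIAL_ZERO applies when x is 0 and a is 1; otherwise, reading FACTORIAL_SUCC backwards, the new goal is (Factorial (x - 1) (a / x)), which only makes sense when x is positive and divides a.

(* A minimal sketch of proof search for a query (Factorial x a)
   where both x and a are known numbers. *)
let rec provable_factorial x a =
  (x = 0 && a = 1)                              (* FACTORIAL_ZERO *)
  || (x > 0 && a mod x = 0
      && provable_factorial (x - 1) (a / x))    (* FACTORIAL_SUCC, premise as new goal *)

let () = assert (provable_factorial 5 120)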

Here are three other examples of queries; a sketch of how the last two might be handled follows this list.

  • Given a query as to whether 0 and 0 are in the factorial relation:

    (Factorial 0 0)
    

    the run-time system would fail to construct a proof tree, and would answer that the query is not satisfied.

  • Given a query as to whether there exists an a such that 5 and a are in the factorial relation:

    (Factorial 5 a)
    

    the run-time system would successfully construct a proof tree, and would answer that the query is satisfied whenever a denotes 120.

  • And given a query as to whether there exists an n such that n and 120 are in the factorial relation:

    (Factorial n 120)
    

    the run-time system would successfully construct a proof tree, and would answer that the query is satisfied whenever n denotes 5.
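
As announced above, here is a minimal sketch, in OCaml, of these last two query modes (again an assumption, not the actual run-time system): answering (Factorial 5 a) amounts to computing the factorial of 5, and answering (Factorial n 120) amounts to searching upwards from (Factorial 0 1) until 120 is reached or overshot, an option type standing for a possibly negative answer.

(* The mode (Factorial x a) where x is known: compute a. *)
let rec factorial_of x =
  if x = 0 then 1 else x * factorial_of (x - 1)

(* The mode (Factorial n a) where a is known: search for n.
   Since factorials only grow, the search can safely give up
   once the accumulator exceeds a. *)
let inverse_factorial a =
  let rec search n acc =
    if acc = a then Some n           (* a solution is found (for a = 1, both 0 and 1 fit; this returns the first) *)
    else if acc > a then None        (* no solution *)
    else search (n + 1) ((n + 1) * acc)
  in search 0 1

let () =
  assert (factorial_of 5 = 120);              (* (Factorial 5 a) is satisfied when a denotes 120 *)
  assert (inverse_factorial 120 = Some 5)     (* (Factorial n 120) is satisfied when n denotes 5 *)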

Anton: So the exercises about I- and T-diagrams, in Week 01...
Dana: Yes, they were actually about logic programming.
Alfrothul: So each diagram is like a proof rule...
Anton: ...and putting them together is like constructing a proof tree.
Mimer: Yup. And if you can’t construct such a tree, like in Exercise 17, it means that there are no solutions.
Alfrothul: And if we can construct several proof trees?
Mimer: It means that there are several solutions, like in Exercise 11.
Halcyon: By the way, constructing these proof trees felt like playing with LEGO bricks.
Mimer: Playing is the right word, since the name LEGO is a contraction of “leg godt”, Danish for “play well”.
Loki: Meet Mimer – always trying to teach us something.
Mimer: Well, learn now, know later.
Anton (whispering to Alfrothul): I think Mimer is referring to the computing proverb “Think first, program later”.
Alfrothul (whispering back): And to the carpentry proverb “Measure twice, cut once”.
Loki (interjecting): Well, he has a head for that.

Rewriting

In the world of rewriting, a program is a term that, after it is initially written, is rewritten and rewritten, based on given rewriting rules and according to some strategy, until it can no longer be rewritten, at which point it is the result of the computation. For example, multiplication is specified with a rewriting rule saying that whenever each of the terms x and y is a number (for example 5 and 11), the term x * y should be rewritten into the product of these numbers (here: 55). For 5 and 11, this particular rewriting step is written as:

5 * 11 -> 55

Here are the rewriting rules for the factorial program:

0! -> 1
n! -> n * n'!    -- when n = n' + 1

So, for example, a term such as 5! is rewritten step by step as follows:

5!
-> 5 * 4!                            (using the second rewriting rule of the factorial program, where n is 5 and n' is 4)
-> 5 * (4 * 3!)                      (using the second rewriting rule, where n is 4 and n' is 3)
-> 5 * (4 * (3 * 2!))
-> 5 * (4 * (3 * (2 * 1!)))
-> 5 * (4 * (3 * (2 * (1 * 0!))))
-> 5 * (4 * (3 * (2 * (1 * 1))))     (using the first rewriting rule of the factorial program)
-> 5 * (4 * (3 * (2 * 1)))           (using the rewriting rule for multiplication)
-> 5 * (4 * (3 * 2))                 (using the rewriting rule for multiplication)
-> 5 * (4 * 6)
-> 5 * 24
-> 120

The final term, 120, can no longer be rewritten: it is the result.
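
This process can be sketched in OCaml (an illustrative assumption, not a formalization used in the course): terms are represented as a data type, and the function rewrite applies the rules according to one particular strategy. This strategy normalizes subterms first, so its intermediate terms differ from the chain above, but its rules and its final result are the same.

(* Terms: number literals, products, and factorials. *)
type term =
  | Num of int
  | Mul of term * term
  | Fac of term

(* One particular rewriting strategy: normalize subterms, then apply
   a rule at the top if one matches; otherwise the term is left as is. *)
let rec rewrite t =
  match t with
  | Num n -> Num n
  | Mul (t1, t2) ->
     (match rewrite t1, rewrite t2 with
      | Num x, Num y -> Num (x * y)                    (* e.g., 5 * 11 -> 55 *)
      | t1', t2' -> Mul (t1', t2'))
  | Fac t' ->
     (match rewrite t' with
      | Num 0 -> Num 1                                 (* 0! -> 1 *)
      | Num n when n > 0 ->
         rewrite (Mul (Num n, Fac (Num (n - 1))))      (* n! -> n * n'! when n = n' + 1 *)
      | t'' -> Fac t'')

let () = assert (rewrite (Fac (Num 5)) = Num 120)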

Rewriting is an active topic of research and has obvious applications, e.g., for the theory and practice of macros.

Other computational paradigms

Purely imperative programming, purely functional programming, purely logic programming and rewriting are, in some sense, extreme computational paradigms. Common programming practice is more mixed: for example, one can both express pure functions and write macros in C, to say nothing of object-oriented programming. (Later in the course, we shall see how to represent objects in OCaml.)

Version

Polished the proof trees in the section about logic programming [20 Jan 2023]

Created [10 Jan 2023]