This lecture note illustrates the computational cost of layering interpreters on top of each other, using a self-interpreter.
The goal of this exercise is to measure the cost of a tower of interpreters, i.e., of interpreters interpreting each other.
To this end, Brynja used a self-interpreter written in Scheme, running on top of a Scheme interpreter, Petite Chez Scheme. Interaction with Petite Chez Scheme is carried out through a read-eval-print loop: Petite Chez Scheme reads what the user types, evaluates it, and (assuming no error has been raised and evaluation terminates) prints the result of this evaluation.
The self-interpreter is loaded by typing:
(load "self-interpreter.scm")
and started by typing:
(start-the-interpreter "Say what? ")
where "Say what? " is the desired prompt that invites the user to type something to evaluate. Once started, the self-interpreter displays There we go again. and enters its own read-eval-print loop. It is stopped by typing (exit).
In the transcript below, Brynja repeatedly adds 1 and 2, using the time special form to measure the resources (time and space) spent performing this addition.
Concretely:
Petite Chez Scheme Version 8.4
Copyright (c) 1985-2011 Cadence Research Systems
> (time (+ 1 2))
(time (+ 1 ...))
no collections
0 ms elapsed cpu time
0 ms elapsed real time
0 bytes allocated
3
> (load "self-interpreter.scm")
> (start-the-interpreter ">> ")
There we go again.
>> (time (+ 1 2))
(time (_eval (car es) ...))
no collections
0 ms elapsed cpu time
0 ms elapsed real time
304 bytes allocated
3
>> (load "self-interpreter.scm")
>> (start-the-interpreter ">>> ")
There we go again.
>>> (time (+ 1 2))
(time (_eval (car es) ...))
no collections
5 ms elapsed cpu time
5 ms elapsed real time
154128 bytes allocated
3
>>> (load "self-interpreter.scm")
>>> (start-the-interpreter ">>>> ")
There we go again.
>>>> (time (+ 1 2))
(time (_eval (car es) ...))
22 collections
2843 ms elapsed cpu time, including 3 ms collecting
2844 ms elapsed real time, including 4 ms collecting
91117200 bytes allocated, including 92718448 bytes reclaimed
3
>>>> (load "self-interpreter.scm")
>>>> (start-the-interpreter ">>>>> ")
There we go again.
>>>>> (time (+ 1 2))
(time (_eval (car es) ...))
12118 collections
1616068 ms elapsed cpu time, including 1550 ms collecting
1625359 ms elapsed real time, including 1519 ms collecting
51071280472 bytes allocated, including 51071992424 bytes reclaimed
3
>>>>> (exit)
"So long."
>>>> (exit)
"So long."
>>> (exit)
"So long."
>> (exit)
"So long."
> (exit)
Process Petite finished
The number of > in the prompt indicates the height of the tower of self-interpreters.
That said, one can use a natural number instead:
Petite Chez Scheme Version 8.4
Copyright (c) 1985-2011 Cadence Research Systems
> (load "self-interpreter.scm")
> (start-the-interpreter "1> ")
There we go again.
1> (load "self-interpreter.scm")
1> (start-the-interpreter "2> ")
There we go again.
2> (load "self-interpreter.scm")
2> (start-the-interpreter "3> ")
There we go again.
3> (load "self-interpreter.scm")
3> (start-the-interpreter "4> ")
There we go again.
4> (exit)
"So long."
3> (exit)
"So long."
2> (exit)
"So long."
1> (exit)
"So long."
> (exit)
Process Petite finished
And now for the exercise proper:
Using your favorite plotting software, draw two curves from the numbers above:
        ^
  time  |
        |
        |
        |
        |
        |
        |
        |                              height of the tower
        +----+----+----+----+------------------------>
        0    1    2    3    4          of self-interpreters

        ^
 space  |
        |
        |
        |
        |
        |
        |
        |                              height of the tower
        +----+----+----+----+------------------------>
        0    1    2    3    4          of self-interpreters
Practical hint: the numbers go up rather quickly, so use a logarithmic scale for time and space.
Characterize the slowdown in time and in space: is it constant? logarithmic? linear? quadratic? polynomial? exponential?
Based on your characterization, can you predict how much time and how much space it would take to add one more layer of interpretation to the tower?
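Should you want to sanity-check your prediction numerically, here is a small sketch in Scheme. The cpu times (in milliseconds) come from the transcript above, and the assumption that the per-layer slowdown factor stays roughly the same is precisely what your characterization should confirm or refute:
;; cpu times (in ms) measured at heights 2, 3, and 4 of the tower:
(define cpu-time-at-2 5)
(define cpu-time-at-3 2843)
(define cpu-time-at-4 1616068)

;; slowdown factors between successive layers:
(exact->inexact (/ cpu-time-at-3 cpu-time-at-2))   ;; about 569
(exact->inexact (/ cpu-time-at-4 cpu-time-at-3))   ;; about 568

;; if that factor stayed roughly constant, a fifth layer would take about
(exact->inexact (* cpu-time-at-4 (/ cpu-time-at-4 cpu-time-at-3)))
;; i.e., on the order of 9e8 ms, which is more than a week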
Express, in your own words, what this exercise illustrates.
(Should you need a hint, here is the beginning of a train of thoughts: Choo choo.)
Created [21 Jan 2019]