Midterm project
The goal of this midterm project is to give you a chance to reflect on
what you have been exposed to in this course since the beginning of the
semester, conceptually, practically, and methodologically.
Expectations
Vigfus: To solve and to program.
Halcyon: This is so cool!
You are expected:
- to work in groups, as you did for the weekly handins, to solve and to program,
- to write an individual report from scratch and using your own words,
and
- to upload individually and anonymously your .ml files and your
.pdf file on Canvas.
The project consists of mandatory tasks and optional tasks.
If the group carries out the mandatory tasks and one of the members of the
group independently carries out an optional task, this optional task
should be clearly signaled in the corresponding extended .ml file and in the .pdf file.
The individual report
Your report should include
- an anonymous front page with title and date,
- a second page with a table of contents, and
- from the third page and onward,
- an introduction,
- a series of sections and subsections reflecting the structure of the
project, and
- a conclusion where you assess what you did, reflect on how you did it,
and present the perspective your assessment and reflection provide.
Pages should be numbered, and the narrative should be spell checked.
An inspiring (and not necessarily humorous, just on topic) quote (e.g.,
from Dijkstra) or three would be welcome.
Throughout, remember to embrace the structure where the computation is
described informally (textually), where it is accounted for with a
unit-test function, where it is specified inductively, where this
inductive specification is mirrored into a structurally recursive
function, and where the implementation is verified to pass the unit
tests – unit tests whose significance (e.g., code coverage)
and limitations (e.g., fake functions) should be scrutinized.
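For concreteness, this structure can be sketched as follows for a small invented example (the function, its name, and its tests are illustrative assumptions, not part of the project):

```ocaml
(* Informal description: triple maps a natural number n to 3 * n. *)

(* A unit-test function accounting for this description: *)
let test_triple candidate =
  (candidate 0 = 0) && (candidate 1 = 3) && (candidate 5 = 15)

(* Inductive specification:
     triple 0 = 0
     triple (succ n') = succ (succ (succ (triple n')))
   Structurally recursive function mirroring this specification: *)
let rec triple n =
  if n = 0
  then 0
  else 3 + triple (n - 1)

(* Verification that the implementation passes the unit tests: *)
let () = assert (test_triple triple)
```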
About describing your programs: paraphrasing them is not explaining them.
Instead, do as in the lecture notes and describe what your program
does before describing how it achieves that. The midterm project will be
graded based on Boileau’s tenet from Week 01 – what one understands well, one explains
clearly – as well as on its counterpoint – what isn’t explained clearly
isn’t understood. Concretely,
- if you found the answer to a question,
you are in a position to explain it and you will get credit for it;
- if someone else in your group found the answer to a question
and you can explain it clearly, this explanation demonstrates understanding
and you will get credit for it; and
- if someone else in your group found the answer to a question
and you cannot explain it in your report,
e.g., because you only paraphrase the program,
you will get no credit for it.
Anonymity
The .ml files and the .pdf file will be perused by a grader, and
therefore they must be anonymous.
However, each group should be identified by a name (be it generic, e.g.,
“Dragon Army” or “Knights of the lambda-calculus”, or specific, e.g.,
“Thanks for stopping by”, “Halcyon rules”, or some imaginative such).
So, for a slightly more serious example, the files should be identified by
the name of a group, a nom de plume,
and an enumeration:
- group: Kingfishers in Asia for the world
- author: Foo the Barbar
- extras: ...
where the extras list what Foo the Barbar has added to the joint work of the Kingfishers in Asia for the world.
The grader’s reading grid
The .ml files, the form:
- are they anonymous?
- do they load without hiccup?
- does loading them only emit signatures, i.e., no traces and no intermediate results
(that is to say: does each .ml file only contain global declarations, i.e., let
... = ...;;)?
- are they indented in the standard way?
- do they contain comments, and if so of which nature (indicative /
analytical)?
- are the tasks singled out, e.g., with let task_1 = "mandatory";;?
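For instance, a file that only contains global declarations makes the toplevel emit only signatures when it is loaded (a minimal sketch, reusing the declaration named in the grid above):

```ocaml
(* A global declaration: loading it makes the toplevel emit only the signature
   val task_1 : string *)
let task_1 = "mandatory";;

(* By contrast, a top-level expression such as
     1 + 1;;
   would make loading the file emit an intermediate result. *)
```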
The .ml files, the content:
- are there unit-test functions before all the implementations?
- are the unit-test functions the same as the ones in the given resource file, or are they expanded?
- are all the implementations tested?
- are all mandatory tasks carried out?
- are some optional tasks carried out, and if so which ones?
- are there any extras?
- is there at least one fake function that nevertheless passes some unit tests in the project?
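A fake function is one that passes the unit tests without implementing the intended computation, thereby exposing their incompleteness. A hypothetical illustration (the function names are invented here):

```ocaml
(* These unit tests only probe the inputs 0 and 2: *)
let test_double candidate =
  (candidate 0 = 0) && (candidate 2 = 4)

(* The intended implementation: *)
let double n = 2 * n

(* A fake function: it also passes test_double (0 * 0 = 0 and 2 * 2 = 4),
   yet it does not compute the double of its argument in general. *)
let fake_double n = n * n
```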
The .pdf file, the form:
- is it anonymous?
- is it original, or does it share narratives with the other reports in the group?
- front page: incomplete / standard / clever
- page numbers: yes/no
- table of contents: present or absent / uninformative / standard / clever and telling
- introduction: minimalistic / telling / amazing
- each section:
- with an introduction and a conclusion / without
- with just code / with a narrative interspersed with code
- with everything, torrentially / with brevity, measure, or even concision
- with no narrative / an ordinary narrative / an outstanding narrative
- with no quotes / with fitting quotes / with ill-fitting quotes
- conclusion: minimalistic / telling / amazing
- remarkably short / medium size / enormous size
- incomplete / OK / with pluses (be they the optional tasks or any extra)
- was the report spell-checked?
- was Dijkstra quoted in relation to fake functions enabled by incomplete testing?
- was the Practical OCaml Programmer quoted in relation to complete testing?
More functional abstraction
- Task:
implementing a maker of boolean functions
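To fix ideas (a hypothetical illustration only, not necessarily what the task asks for), a maker of Boolean functions could map the desired outputs for true and for false to the corresponding function of type bool -> bool:

```ocaml
(* Hypothetical maker: given the desired output for true and the desired
   output for false, return the corresponding Boolean function. *)
let make_boolean_function output_for_true output_for_false =
  fun b -> if b then output_for_true else output_for_false

(* For example, negation maps true to false and false to true: *)
let negation = make_boolean_function false true
```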
A miscellany of recursive programs from Week 06
Resource:
The underlying determinism of OCaml
- Question 02:
in which order are a function and an argument evaluated in an application?
- Question 03:
in which order are the components of a tuple evaluated when the tuple is constructed?
- Question 04:
applying a curried function vs. applying an uncurried function
- Question 05:
function applications vs. let expressions
- Question 06:
in which order are the definienses evaluated in a let expression?
- Question 07:
function applications vs. let expressions, continued
- Question 08:
in which order are conjuncts evaluated in a Boolean conjunction?
- Question 09:
determining the validity of a handful of simplifications
- Question 10:
determining the equivalence of let expressions involving impure expressions
- Question 11:
determining the equivalence of let expressions involving pure expressions
- Question 12:
strict vs. non-strict functions
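One common way to investigate such questions is to wrap subexpressions with a printing function and observe when each is evaluated (the tracing helper below is invented for illustration; the order in which the names are printed is precisely what the questions ask you to determine, so no output is asserted here):

```ocaml
(* Hypothetical tracing helper: print a name, then return the value unchanged. *)
let trace name value =
  print_string (name ^ " ");
  value

(* In this application, the order in which "function" and "argument" are
   printed reveals which of the two is evaluated first: *)
let result =
  (trace "function" (fun n -> n + 1)) (trace "argument" 41)
```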
Palindromes, string concatenation, and string reversal
- Question 01:
implementing string concatenation using String.init
- Question 03:
implementing String.map with a simulated for-loop
- Question 04 – optional:
implementing String.mapi with a simulated for-loop
- Question 05:
implementing string reversal
- Question 06:
relating string concatenation and string reversal
- Question 07:
implementing a palindrome generator
- Question 08:
implementing palindrome detectors
- Question 09 – not optional:
implementing the reversal of a palindrome
- Question 10 – optional:
defining String.map using String.mapi
- Question 11 – optional:
defining String.mapi using String.init
- Question 12:
implementing string_andmap and string_ormap
- Question 13 – optional:
implementing string_concatmap and show_string
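Several of these questions build on String.init, which, given a length n and a function f, returns the string of length n whose character at index i is f i. A small reminder (the example function is invented and unrelated to the questions themselves):

```ocaml
(* String.init n f builds the string f 0, f 1, ..., f (n - 1).
   Invented example: the first n lowercase letters of the alphabet. *)
let alphabet_prefix n =
  String.init n (fun i -> Char.chr (Char.code 'a' + i))

(* alphabet_prefix 5 evaluates to "abcde" *)
```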
Cracking polynomial functions
Version
Expanded the narrative and added index entries
[29 Mar 2022]
Added the forgotten Exercise 03
[02 Mar 2022]
Created
[27 Feb 2022]