Chapter 1, Building Abstractions with Procedures
Computational processes are abstract beings that inhabit computers. These processes manipulate other abstract beings called data. A program is a pattern of rules that directs these processes.
A programming language is more than just a means for instructing computers to perform tasks. The language also serves as a framework within which we organize our ideas about processes. Thus a language provides us the means to combine simple ideas into complex ideas. To accomplish this, a language provides these features:
- Primitive expressions, the simplest entities of the language. Eg: numbers and arithmetic operations.
- Means of combination, to build compound elements from simpler ones. Eg: nesting of operations like (+ (- 2 3) 5). Here + is a primitive procedure and the syntax allows nesting of primitive expressions.
- Means of abstraction, by which compound elements can be named and manipulated as units. Eg: defining/associating names with values or procedures, like (define x 3) or (define (square x) (* x x)).
Procedure Application:
There are two ways:
- Substitution Model - replace the procedure invocation with the procedure body, substituting the actual arguments for the formal parameters.
- Method call
There are two ways of evaluating the procedure arguments:
- Fully expand and then reduce - normal order evaluation. I think this is the same idea as lazy evaluation.
- Evaluate the arguments and then apply - applicative order evaluation.
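A small experiment, adapted from the book's exercise 1.5, makes the difference between the two orders visible (the names p and test are just for illustration):

```scheme
; (p) loops forever if it is ever evaluated
(define (p) (p))

; test ignores its second argument when x is 0
(define (test x y)
  (if (= x 0)
      0
      y))

; Applicative order (standard Scheme): the argument (p) is
; evaluated first, so the call below would never return.
; Normal order: (p) is never needed, and the result is 0.
; (test 0 (p))
```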
Conditional Expressions and predicates:
- cond:

(cond
  ( (predicate-1) (expression-1) )
  ( (predicate-2) (expression-2) )
  …
  ( (predicate-n) (expression-n) )
)
- if:
(if (predicate)
    (consequence)
    (alternative)
)
Note that apart from primitive predicates like <, >, and =, we also have compound predicates: (and exp-1 exp-2 ... exp-n), (or exp-1 ... exp-n), and (not exp).
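For example, a predicate for 5 < x < 10 can be composed from the primitives (the name in-range? is just illustrative):

```scheme
; true when x lies strictly between 5 and 10
(define (in-range? x)
  (and (> x 5) (< x 10)))

(in-range? 7)  ; #t
(in-range? 12) ; #f
```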
Procedures
Procedures are decomposed analogously to the way we decompose our problem. It is not simply a matter of dividing the procedure into parts - each procedure is decomposed into identifiable tasks which can be used as modules in other procedures. A procedure is an abstraction of the task it performs, and the procedures which use it do not need to know how that task is accomplished.
Bound and Free Variables
The formal parameters used in a procedure are called bound variables, and consistently renaming them does not change the meaning of the procedure. The free variables are the ones used in a procedure which come from the environment/outer scope. We cannot rename them without changing the meaning of the procedure. Note that this definition of bound and free variables is analogous to the way we use variables while defining sets in mathematics.
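A minimal illustration of the distinction:

```scheme
; pi is free inside circumference: it refers to a name from the
; enclosing environment, and renaming it changes the meaning.
(define pi 3.14159)

; x is bound: renaming it everywhere (say, to y) changes nothing.
(define (circumference x)
  (* 2 pi x))

(circumference 1) ; 6.28318
```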
Block Structure
There are situations where a procedure is used by only one other procedure and is not needed in any other part of the program. In such a case, it is better to nest the procedure inside the procedure in which it is used. Apart from hiding this procedure from other parts of the program, this nesting also gives the inner procedure access to the variables of the outer procedure. In other words, the variables available in the outer procedure become free variables in the inner procedure. This is also called lexical scoping.
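The book's square-root program is the canonical example; the helpers are hidden inside sqrt, and x is a free variable in each of them:

```scheme
(define (sqrt x)
  ; the nested definitions are invisible to the rest of the
  ; program; x is free inside them (lexical scoping)
  (define (good-enough? guess)
    (< (abs (- (* guess guess) x)) 0.001))
  (define (improve guess)
    (/ (+ guess (/ x guess)) 2))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
```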
Procedures and the Processes they generate
There are different types of processes generated by a procedure. Consider the following two procedures and the corresponding processes:
; procedure
(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))
; process generated
(+ 4 5)
(inc (+ 3 5))
(inc (inc (+ 2 5)))
(inc (inc (inc (+ 1 5))))
(inc (inc (inc (inc (+ 0 5)))))
(inc (inc (inc (inc 5))))
(inc (inc (inc 6)))
(inc (inc 7))
(inc 8)
9
; procedure
(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))
; process generated
(+ 4 5)
(+ 3 6)
(+ 2 7)
(+ 1 8)
(+ 0 9)
9
Consider the first process. The substitution model reveals a shape of expansion followed by contraction. The expansion occurs as the process builds up a chain of deferred operations. The contraction occurs as the operations are actually performed. This type of process, characterized by a chain of deferred operations, is called a recursive process. Carrying out this process requires that the interpreter keep track of the operations to be performed later on. The length of the chain of deferred operations, and hence the amount of information needed to keep track of it, grows linearly with “a”. Such a process is called a linear recursive process.
By contrast, the second process does not grow and shrink. At each step, we need to keep track of only a fixed number of values. We call this an iterative process. In general, an iterative process is one whose state can be summarized by a fixed number of state variables, together with a fixed rule that describes how the state variables should be updated as the process moves from state to state, and an (optional) end test that specifies conditions under which the process should terminate. In computing (+ a b), the number of steps required grows linearly with “a”. Such a process is called a linear iterative process. Note that there are no deferred operations here like in the recursive process. Also, note that a process can be iterative and not linear.
The contrast between the two processes can be seen in another way. In the iterative case, the program variables provide a complete description of the state of the process at any point. If we stopped the computation between steps, all we would need to do to resume the computation is to supply the interpreter with the values of the program variables. Not so with the recursive process. In this case there is some additional hidden information, maintained by the interpreter and not contained in the program variables, which indicates where the process is in negotiating the chain of deferred operations. The longer the chain, the more information must be maintained.
In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive process with the notion of a recursive procedure. When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written. It may seem disturbing that we refer to a recursive procedure such as the second version of + above as generating an iterative process. However, the process really is iterative: its state is captured completely by its state variables, and an interpreter need keep track of only those variables in order to execute the process.
One reason that the distinction between process and procedure may be confusing is that most implementations of common languages (including Ada, Pascal, and C) are designed in such a way that the interpretation of any recursive procedure consumes an amount of memory that grows with the number of procedure calls, even when the process described is, in principle, iterative. As a consequence, these languages can describe iterative processes only by resorting to special-purpose looping constructs such as do, repeat, until, for, and while. The implementation of Scheme we shall consider in chapter 5 does not share this defect. It will execute an iterative process in constant space, even if the iterative process is described by a recursive procedure. An implementation with this property is called tail-recursive. With a tail-recursive implementation, iteration can be expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful only as syntactic sugar.
Another type of recursive process is the tree-recursive process, like the tree generated by the Fibonacci procedure. The Fibonacci procedure can also be written so that it generates a linear iterative process.
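Both versions, essentially as in the book, for comparison; the first generates a tree-recursive process, the second a linear iterative one:

```scheme
; tree-recursive: the calls branch out like a tree,
; recomputing the same subproblems many times
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

; linear iterative: the state is just the pair (a b)
; plus a counter, updated by a fixed rule
(define (fib-iter n)
  (define (iter a b count)
    (if (= count 0)
        b
        (iter (+ a b) a (- count 1))))
  (iter 1 0 n))

(fib 10)      ; 55
(fib-iter 10) ; 55
```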
Higher Order Procedures
Procedures that manipulate procedures are called higher-order procedures. Thus we have procedures that can accept procedures as arguments or that return procedures as values.
Example: Procedure as argument(s)
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))
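For instance, the sum of cubes from a to b falls out of sum directly (the definition is repeated here so the snippet is self-contained):

```scheme
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))

(define (cube x) (* x x x))
(define (inc x) (+ x 1))

(define (sum-cubes a b)
  (sum cube a inc b))

(sum-cubes 1 10) ; 3025
```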
Lambda
Procedures without names. Example: (lambda (x) (+ x 4)). These can be very handy when we need a procedure only once, or for a very trivial task like checking if a number is even.
let
Sometimes we need to define a few local variables to reuse. We can use lambda for this, but the language provides a special construct for the same task, let:
; using lambda
(define (f x y)
  ((lambda (a b)
     (+ (* x (square a))
        (* y b)
        (* a b))) ; end of lambda defn.
   (+ 1 (* x y)) ;; arguments to lambda
   (- 1 y)))     ;; arguments to lambda

; using let
(define (f x y)
  (let ((a (+ 1 (* x y)))
        (b (- 1 y)))
    (+ (* x (square a))
       (* y b)
       (* a b))))
Thus a let expression is simply syntactic sugar, as no new mechanism is required in the interpreter.
Procedures - first class elements
In general, programming languages impose restrictions on the ways in which computational elements can be manipulated. Elements with the fewest restrictions are said to have first-class status. Some of the rights and privileges of first-class elements are:
- They may be named by variables.
- They may be passed as arguments to procedures.
- They may be returned as the results of procedures.
- They may be included in data structures.
Lisp, unlike other common programming languages, awards procedures full first-class status. This poses challenges for efficient implementation, but the resulting gain in expressive power is enormous.
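Returning a procedure is just as natural as passing one in; compose (exercise 1.42 in the book) is a compact example:

```scheme
; compose returns a brand-new procedure built from f and g
(define (compose f g)
  (lambda (x) (f (g x))))

(define (square x) (* x x))
(define (inc x) (+ x 1))

((compose square inc) 6) ; 49
```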
Interesting/Conceptual Exercises:
Fibonacci: 1.13, 1.19
Order of growth: 1.22, 1.23, 1.25, 1.26 - a bit tedious, but it's interesting to see in practice how time/space grows compared to our analysed order of growth.
Higher Order procedures - accumulate and filter: 1.32, 1.33, 1.37, 1.42, 1.43, 1.46