Researched and written by Shaun Scott
Purpose: to bring artificial intelligence awareness to the community through peer-reviewed journals
Literature Review
Significance of Study:
Today, two of the most common AI (Artificial Intelligence) programming languages are LISP and Prolog. They are designed around two distinct programming paradigms: LISP is a functional language, whereas Prolog is a formal logic language. The main difference between them is that LISP is a model of computation based on the theory of recursive functions, while Prolog is built from a set of formal logic specifications that employ first-order predicate calculus.
John McCarthy's algebraic list-processing language for artificial intelligence work on the IBM 704 computer arose in the summer of 1956 during the Dartmouth Summer Research Project on Artificial Intelligence, which was the first organized study of AI. During this meeting, Newell, Shaw, and Simon described IPL 2, a list-processing language for the Rand Corporation's JOHNNIAC computer, in which they implemented their Logic Theorist program. There was little temptation to copy IPL, because its form was based on a JOHNNIAC loader that happened to be available to them, and because the FORTRAN idea of writing programs algebraically was attractive. It was immediately apparent that arbitrary subexpressions of symbolic expressions could be obtained by composing the functions that extract immediate subexpressions, and this seemed reason enough to go to an algebraic language.
(In the late 1950s, neat output and convenient input notation were not generally considered important. Programs to do the kind of input and output customary today wouldn't even fit in the memories available at that time. Moreover, keypunches and printers with adequate character sets didn't exist.)
Lisp is one of the oldest programming languages still in widespread use today. There have been many versions of Lisp, each sharing basic features but differing in detail. Here we use the version called Common Lisp, which is the most widely accepted standard. Lisp has been chosen for three reasons. First, Lisp is the most popular language for AI programming, particularly in the United States. If you're going to learn a language, it might as well be one with a growing literature, rather than a dead tongue. Second, Lisp makes it easy to capture relevant generalizations in defining new objects. In particular, Lisp makes it easy to define new languages especially targeted to the problem at hand. This is especially handy in AI applications, which often manipulate complex information that is most easily represented in some novel form. Lisp is one of the few languages that allows full flexibility in defining and manipulating programs as well as data. All programming languages, by definition, provide a means of defining programs, but many other languages limit the ways in which a program can be used, limit the range of programs that can be defined, or require the programmer to state irrelevant details explicitly. Third, Lisp makes it very easy to develop a working program fast. Lisp programs are concise and are uncluttered by low-level detail. Common Lisp offers an unusually large number of useful predefined objects, including over 700 functions. The programming environments that surround Lisp systems (debugging tools, incremental compilers, integrated editors, and interfaces to window systems) are usually very good. And the dynamic, interactive nature of Lisp makes it easy to experiment with and change a program while it is being developed.
As is well known, the name "Prolog" was invented in Marseilles in 1972. Philippe Roussel chose the name as an abbreviation for "Programming in Logic" to refer to the software tool designed to implement a man-machine communication system in natural language. It can be said that Prolog was the offspring of a successful marriage between natural language processing and automated theorem proving.
The other parts are of a more technical nature. They are devoted to three programming languages that rapidly succeeded one another: Q-systems, conceived for machine translation; the preliminary version of Prolog, created at the same time as its application; and the definitive version of Prolog, created independently of any application.
THE BIRTH OF PROLOG
The entire team then developed a primitive natural-language communication system [Colmerauer 1971]. The interfaces between the logical formulae and French consisted of 50 Q-system rules for the input and 17 Q-system rules for the output. The reasoning part was implemented through one of Philippe’s provers. It was thus possible to have the following conversation with the computer:
User: "Who is Tom?"
Computer: "Tom is a cat."
User: "What does Tom eat?"
Computer: "Tom eats mice."
User: "Who is the mouse?"
Computer: "The mouse's name is Jerry."
The logical formulae created made use of:
(i) constants representing elements: Tom, Jerry, Cat, Mouse;
(ii) constants representing sets: Cats, Mice;
(iii) the three symbols The, Subset, True.
A term of the form The(a) was taken to represent the set consisting only of the element a. A formula of the form Subset(x, y) expressed the inclusion of set x in set y, and a formula of the form True(r, x, y) expressed that the sets x and y were in the relation r. To the clauses that encoded the sentences, Jean Trudel added four clauses relating the three symbols The, Subset, True:
(∀x) [Subset(x, x)],
(∀x)(∀y)(∀z) [Subset(x, y) ∧ Subset(y, z) → Subset(x, z)],
(∀a)(∀b) [Subset(The(a), The(b)) → Subset(The(b), The(a))],
(∀r)(∀x)(∀y)(∀x′)(∀y′) [True(r, x, y) ∧ Subset(x, x′) ∧ Subset(y, y′) → True(r, x′, y′)].
Prolog is a declarative logic programming language. It is an attempt to make a programming language that enables the expression of logic instead of carefully specified instructions to the computer. Prolog has a superb pattern-matching mechanism as well as good memory management, and it is very commonly used in artificial intelligence applications.
Sometimes there can be several answers that are true in a given circumstance. Prolog can report all of them because the program does not terminate as soon as the first answer is found, but keeps going until the entire tree of possibilities has been checked.
Prolog is a conversational language, meaning the user and the computer have a 'conversation'. First, the user specifies the objects and the relationships that exist between these objects; Prolog can then answer questions about this set of data.
Prolog has a very compact syntax. From the developer's point of view, the difficulty of programming in Prolog lies in the vagueness of human thinking; the coding itself is made easier by the short, precise syntax.
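The conversational style described above can be sketched as a tiny knowledge base. This is a hypothetical reconstruction in modern Prolog syntax, not the original Marseilles system; the predicate names is_a/2 and eats/2 are my own illustrative choices.

```prolog
% Facts: objects and the relationships between them (illustrative names).
is_a(tom, cat).
is_a(jerry, mouse).
eats(cat, mice).

% Queries mirror the dialogue with the computer:
%   ?- is_a(tom, X).      gives X = cat    ("Tom is a cat")
%   ?- eats(cat, What).   gives What = mice ("Tom eats mice")
```

Because Prolog keeps searching after the first answer, a query such as ?- is_a(Who, _). would enumerate both tom and jerry on backtracking.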
A FORERUNNER OF PROLOG, THE Q-SYSTEMS
The history of the birth of Prolog thus comes to a halt at the end of 1975. We now turn to more technical aspects and, first of all, describe the Q-systems, the result of a first gamble: to develop a very high-level programming language, even if the execution times it entailed might seem bewildering [Colmerauer 1970b]. That gamble, and the experience acquired in implementing the Q-systems, was determinative for the second gamble: Prolog.
1.1 One-Way Unification
A Q-system consists of a set of rewriting rules dealing with sequences of complex symbols separated by the sign +. Each rule is of the form e1 + e2 + … + em → f1 + f2 + … + fn and means: in the sequence of trees we are manipulating, any subsequence of the form e1 + e2 + … + em can be replaced by the sequence f1 + f2 + … + fn.
The ei's and the fi's are parenthesized expressions representing trees, with a strong resemblance to present Prolog terms but using three types of variables. Depending on whether the variable starts with a letter in the set {A,B,C,D,E,F}, {I,J,K,L,M,N}, or {U,V,W,X,Y,Z}, it denotes either a label, a tree, or a (possibly empty) sequence of trees separated by commas.
For example, the rule P + A*(X*,I*,Y*) → I* + A*(X*,Y*) (variables are followed by an asterisk) applied to the sequence P + Q(R,S,T) + P produces three possible sequences: R + Q(S,T) + P, S + Q(R,T) + P, and T + Q(R,S) + P.
The concept of unification was therefore already present, but it was one-way only; the variables appeared in the rules but never in the sequence of trees being transformed. However, unification took account of the associativity of concatenation and, as in the preceding example, could produce several results.
To describe this multitude of groupings in an economical way, I use an oriented graph in which each arrow is labeled by a parenthesized expression representing a tree. A Q-system is nothing more than a set of rules allowing such a graph to be transformed into another graph. This transformation may correspond to an analysis, to a sentence synthesis, or to a formal manipulation of this type. For example, the sequence A + A + B + B + C + C is represented by a linear graph whose successive arrows are labeled A, A, B, B, C, C, and the application of the four rules
A + B + C → S
A + S + X + C → S
X + C → C + X
B + B → B + X
produces a new graph. [The figure showing the resulting graph is not reproducible here.]
THE PRELIMINARY PROLOG
In other words, the logical formulation was to serve not only for the different modules of the natural language dialogue system but also for the data exchanged between them, including the data exchanged with the user.
The choice having settled on first-order logic (in clausal form) and not on a higher-order logic, it then seemed that we were faced with an insurmountable difficulty in wanting programs to be able to manipulate other programs. Of course, this problem was solved using nonlogical mechanisms.
Future of Natural Language Processing:
Natural language processing (NLP) is a subfield of Artificial Intelligence (AI).
What is NLP?
Natural language processing, or NLP, combines computational linguistics—rule-based modeling of human language—with statistical and machine learning models to enable computers and digital devices to recognize, understand and generate text and speech.
A branch of artificial intelligence (AI), NLP lies at the heart of applications and devices that can
- translate text from one language to another
- respond to typed or spoken commands
- recognize or authenticate users based on voice
- summarize large volumes of text
- assess the intent or sentiment of text or speech
- generate text or graphics or other content on demand
A text-to-speech system (or “engine”) is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word and divides and marks the text into prosodic units, like phrases, clauses, and sentences.
The back-end — often referred to as the synthesizer — then converts the symbolic linguistic representation into sound.
There are different ways to perform speech synthesis: concatenative TTS, formant synthesis, parametric TTS, and hybrid approaches. The choice depends on the task at hand.
Text-to-speech (TTS) assistive technology uses artificial intelligence to convert information written in human-readable form into audio, voice, or speech with a human accent. Such systems take text as input and turn it into audio or speech output using AI-driven algorithms. TTS is also referred to as "read aloud" technology because it reads the text aloud.
Speechify is a text-to-speech (TTS) app that utilizes OCR technology to convert PDFs to audio files. While not a traditional OCR-to-PDF converter, it offers a unique approach by transforming scanned PDFs into spoken content. Speechify uses advanced algorithms and machine learning to recognize and extract text from scanned documents or images. It then converts the extracted text into high-quality speech, allowing users to listen to their PDFs rather than reading them.
Text to Speech
A text-to-speech (TTS) synthesizer is a computer-based system that can read text aloud automatically, regardless of whether the text is introduced by a computer input stream or a scanned input submitted to an optical character recognition (OCR) engine. A speech synthesizer can be implemented in both hardware and software. Synthesized speech is often based on the concatenation of natural speech, i.e. units taken from natural speech are put together to form a word or sentence.
1.2 Write-up on Tech Geek History: LISP Interpreters and the Lambda Function Explained
• Prolog – Logic Programming
– Based on predicate logic
– Working with predicates
– Computation is reasoning, initiated by a query
– Popular in natural language processing
• LISP – Functional Programming
– Based on lambda calculus
– Working with functions
– Computation is evaluation
– Used in artificial intelligence
– Theory based on the lambda calculus of Alonzo Church (1930)
– λ-calculus: a theory of functions as formulas
– Easier manipulation of functions using expressions
LISP interpreter: An interactive environment, always evaluating input
The heart of the Lisp interpreter is the “read-eval-print” loop. That is, the interpreter does the following three jobs over and over:
- read an input expression
- evaluate the expression
- print the results
This loop can be written in Lisp itself very simply:
(loop (print (eval (read))))
Of course, this doesn’t take into account error handling, multiple values, and so on, but much of what Lisp does, including some things that seem unintuitive, can be understood by reasoning about the above three step process.
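Since the bare loop above unwinds on the first error, a slightly hardened sketch shows how a real top level keeps the conversation going. This is my own illustration using the standard handler-case macro, not code from the original text:

```lisp
;; An illustrative REPL sketch: trap errors and keep looping
;; instead of letting them unwind out of the loop.
(defun simple-repl ()
  (loop
    (format t "~&> ")
    (handler-case
        (print (eval (read)))
      (error (c)
        (format t "~&Error: ~a" c)))))
```

The three-step read/eval/print pipeline is unchanged; handler-case merely wraps the eval-and-print step so an error prints a message instead of ending the session.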
The Lisp Reader
The input to the Lisp reader is a sequence of characters. The output is a Lisp expression. The job of the reader is to convert objects in character world to objects in expression world.
In character world, there are no pointers, no lists, no symbols, just characters like A, b, `, and (. That is, (A B (C D) E) is not a list, it is a sequence of parentheses, spaces, and letters. “abc” is not a string, it is a sequence of letters.
In expression world, there are no parentheses, no spaces, no quote marks, just atoms and lists. That is, (A B (C D) E) has no parentheses or spaces. It is a list of three atoms and a list. “abc” has no quote marks. It is a string with three characters.
The Common Lisp reader is quite complex (Steele has a full description), but it works roughly like this:
- If the reader sees a character with a read-macro function, it calls that function, and returns whatever expression that function returns. For example, ' has a pre-defined read-macro function that basically says (list (quote quote) (read)). Thus, 'abc becomes (quote abc).
- If the reader sees a ", it collects characters until it sees another ", and creates a string object with the characters found between the string quotes. Thus, the 5 characters "abc" become the 3-character string with the characters a, b, and c.
- If the reader sees a digit (0 – 9), it collects the digits that follow, and creates the number those digits represent (base 10). Thus, the three characters 123 become the single number 123.
- If the reader sees an alphabetic character, it collects characters until it sees a space or parenthesis, and then gets the symbol (using Lisp’s intern function) that has those characters (converted to upper case) as its name. Thus the three characters abc become the single unique symbol ABC.
- If the reader sees a left parenthesis, it reads more characters, turning them into expressions, using all these rules, until it sees a right parenthesis. Then it returns the list of expressions read. Thus, the 7 characters (A B C) become the 3 element list (A B C).
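These rules are easy to check interactively with read-from-string. The forms below are my own illustrations; read-from-string also returns a second value, the index where reading stopped, which is ignored here:

```lisp
(read-from-string "'abc")      ; => (QUOTE ABC)
(read-from-string "\"abc\"")   ; => the 3-character string "abc"
(read-from-string "123")       ; => the number 123
(read-from-string "(A B C)")   ; => the 3-element list (A B C)
```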
The Lisp Evaluator
The Lisp evaluator takes an expression and returns an expression. The expression returned is called the value of the expression evaluated. [I’ll ignore expressions that return multiple values here.]
The rules for evaluation are surprisingly simple. Graphically, it looks like this:
Figure 1.1

First, there are two rules for atoms:
- The value of a symbol is looked up in the current lexical environment. If none is found, symbol-value is called. If that fails, there's an error (unbound variable).
- The value of any other kind of atom is the atom itself.
There are three rules for lists that start with a symbol. If they don’t work, there’s an error. The rules are:
- If the symbol names a special function, use the rules for evaluation stored with the special function.
- If the symbol has a macro function attached, apply the macro function to the rest of the list, and evaluate the result.
- If the symbol has a normal function attached, evaluate the other elements in the list, and apply the function to a list of the values returned. The value the function returns is the final value.
There is one other case for lists, but it appears rarely in modern code:
- If the list has the form ((lambda (vars) exps) args), then evaluate the arguments, and apply the lambda function to the list of results. The value the lambda returns is the final value.
Anything that can be done with a lambda can also be done with either let or destructuring-bind.
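The equivalence can be seen side by side (my own example):

```lisp
;; The rarely-seen lambda form...
((lambda (x y) (+ x y)) 1 2)                  ; => 3

;; ...is equivalent to a let binding the same names to the same values...
(let ((x 1) (y 2))
  (+ x y))                                    ; => 3

;; ...or to destructuring-bind over a list of the arguments.
(destructuring-bind (x y) (list 1 2)
  (+ x y))                                    ; => 3
```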
The Lisp Printer
The input to the Lisp printer is a Lisp expression. The output is a sequence of characters. The job of the printer is to convert objects in expression world to objects in character world.
The printer’s rules are pretty simple. They all involve taking an expression and printing a sequence of characters:
- If the expression is a string, print a ", then the characters in the string, then another ".
- If the expression is a symbol, print the characters in the symbol's name.
- If the expression is a number, print the digits representing that number (base 10).
- If the expression is a list, print a left parenthesis, then print each element in the list, with a space after each element but the last, then print a right parenthesis.
There are also rules for printing structures, special characters, and so on.
The purpose of the above rules is to make the following as true as possible:
(equal exp (read-from-string (write-to-string exp)))
Often, however, that is not what we want. For example, we probably want the string “Pick an option:” to appear on the screen as
Pick an option:
not
“Pick an option:”
The latter is correct in so far as, if Lisp read it, it would see a string, but we want people to read this message, not Lisp.
Use functions like write-string, princ, and the ~A format command to print things “for people” rather than for Lisp. However, don’t use these functions for printing debugging information. The information these functions hide for readability may be exactly what you need for debugging.
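The two styles of printing can be compared with prin1-to-string (readable, "for Lisp") and princ-to-string (human-oriented, "for people"), which correspond to the ~S and ~A format directives. The forms below are my own illustration:

```lisp
(prin1-to-string "Pick an option:") ; => "\"Pick an option:\"" (quotes kept)
(princ-to-string "Pick an option:") ; => "Pick an option:"      (quotes dropped)
(format nil "~S" "Pick an option:") ; same result as prin1-to-string
(format nil "~A" "Pick an option:") ; same result as princ-to-string
```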
Closures
Closure-Oriented Programming
One of the conclusions that we reached was that the “object” need not be a primitive notion in a programming language; one can build objects and their behaviour from little more than assignable value cells and good old lambda expressions. —Guy Steele on the design of Scheme
Sometimes it’s called a closure, other times a saved lexical environment. Or, as some of us like to say, let over lambda. Whatever terminology you use, mastering this concept of a closure is the first step to becoming a professional lisp programmer. In fact, this skill is vital for the proper use of many modern programming languages, even ones that don’t explicitly contain let or lambda, such as Perl or Javascript.
Closures are one of those few curious concepts that are paradoxically difficult because they are so simple. Once a programmer becomes used to a complex solution to a problem, simple solutions to the same problem feel incomplete and uncomfortable. But, as we will soon see, closures can be a simpler, more direct solution to the problem of how to organise data and code than objects. Even more important than their simplicity, closures represent a better abstraction to use when constructing macros—the topic of this book.
The fact that we can build objects and classes with our closure primitives doesn't mean that object systems are useless to lisp programmers. Far from it. In fact, COMMON LISP includes one of the most powerful object systems ever devised: CLOS, the COMMON LISP Object System. Although I am very impressed with the flexibility and features of CLOS, I seldom find a need to use its more advanced features, thanks to assignable value cells and good old lambda expressions.
Environments and Extent
What Steele means by assignable value cells is an environment for storing pointers to data where the environment is subject to something called indefinite extent. This is a fancy way of saying that we can continue to refer to such an environment at any time in the future. Once we allocate this environment, it and its references are there to stay as long as we need them. Consider this C function:
#include <stdlib.h>
int *environment_with_indefinite_extent(int input) {
int *a = malloc(sizeof(int));
*a = input;
return a;
}
After we call this function and receive the pointer it returns, we can continue to refer to the allocated memory indefinitely. In C, new environments are created when invoking a function, but C programmers know to malloc() the required memory when returning it for use outside the function.
By contrast, the example below is flawed. C programmers consider a to be automatically collected when the function returns because the environment is allocated on the stack. In other words, according to lisp programmers, a is allocated with temporary extent.
int *environment_with_temporary_extent(int input) {
int a = input;
return &a;
}
The difference between C environments and lisp environments is that unless you explicitly tell lisp otherwise it always assumes you mean to use indefinite extent. In other words, lisp always assumes you mean to call malloc() as above. It can be argued that this is inherently less efficient than using temporary extent, but the benefits almost always exceed the marginal performance costs. What’s more, lisp can often determine when data can safely be allocated on the stack and will do so automatically. You can even use declarations to tell lisp to do this explicitly.
But because of lisp’s dynamic nature, it doesn’t have explicit pointer values or types like C. This can be confusing if you, as a C programmer, are used to casting pointers and values to indicate types. Lisp thinks about all this slightly differently. In lisp, a handy mantra is the following:
Variables don’t have types. Only values have types.
Still, we have to return something to hold pointers. In lisp there are many data structures that can store pointers. One of the most favoured by lisp programmers is a simple structure: the cons cell. Each cons cell holds exactly two pointers, affectionately called car and cdr. When environment-with-indefinite-extent is invoked, a cons cell will be returned with the car pointing to whatever was passed as input and the cdr pointing to nil. And, most importantly, this cons cell (and with it the pointer to input) has indefinite extent so we can continue to refer to it as long as we need to:
(defun environment-with-indefinite-extent (input)
(cons input nil))
The efficiency disadvantages of indefinite extent are approaching irrelevance as the state of the art in lisp compilation technology improves. Environments and extent are closely related to closures and more will be said about them throughout this chapter.
Lexical and Dynamic Scope
The technical term for where to consider a variable reference valid is scope. The most common type of scope in modern languages is called lexical scope. When a fragment of code is surrounded by the lexical binding of a variable, that variable is said to be in the lexical scope of the binding. The let form, which is one of the most common ways to create bindings, can introduce these lexically scoped variables:
* (let ((x 2))
x)
2
The x inside the body of the let form was accessed through lexical scope. Similarly, arguments to functions defined by lambda or defun are also lexically bound variables inside the text of the function definition. Lexical variables are variables that can only be accessed by code appearing inside the context of, for instance, the above let form. Because lexical scoping is such an intuitive way to limit the scope of access to a variable, it can appear to be the only way. Are there any other possibilities for scoping?
As useful as the combination of indefinite extent and lexical scoping turns out to be, it has until recently not been used to its fullest extent in mainstream programming languages. The first implementation was by Steve Russell for Lisp 1.5[HISTORY-OF-LISP] and was subsequently designed directly into languages like Algol-60, Scheme, and COMMON LISP. Despite this long and fruitful history, the numerous advantages of lexical scoping are only slowly being taken up by many Blubs.
Although the scoping methods provided by C-like languages are limited, C programmers need to program across different environments too. To do so, they often use an imprecisely defined scoping known as pointer scope. Pointer scope is famous for its difficulty to debug, numerous security risks, and, somewhat artificially, its efficiency. The idea behind pointer scoping is to define a domain specific language for controlling the registers and memory of a Von Neumann machine similar to most modern CPUs[PAIP-PIX], then to use this language to access and manipulate data-structures with fairly direct commands to the CPU running the program. Pointer scoping was necessary for performance reasons before decent lisp compilers were invented but is now regarded as a problem with, rather than a feature of, modern programming languages.
Even though lisp programmers seldom think in terms of pointers, the understanding of pointer scoping is very valuable in the construction of efficient lisp code.
#include <stdio.h>
void pointer_scope_test() {
int a;
scanf("%d", &a);
}
In the above function we use the C & operator to give the actual address in memory of our local variable a to the scanf function so it knows where to write the data it scans. Lexical scoping in lisp forbids us from implementing this directly. In lisp, we would likely pass an anonymous function to a hypothetical lisp scanf function, allowing it to set our lexical variable a even though scanf is defined outside our lexical scope:
(let (a)
  (scanf "%d" (lambda (v) (setf a v))))
Lexical scope is the enabling feature for closures. In fact, closures are so related to this concept of lexical scope that they are often referred to more specifically as lexical closures to distinguish them from other types of closures. Unless otherwise noted, all closures in this book are lexical.
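A minimal lexical closure, in the spirit of this chapter (my own example): a counter whose private state lives in a captured let environment with indefinite extent:

```lisp
;; make-counter returns a closure over the lexical variable n.
;; Each call to make-counter creates a fresh, private n.
(defun make-counter ()
  (let ((n 0))
    (lambda () (incf n))))
```

Calling the same closure repeatedly returns 1, 2, 3, and so on, while a second counter made by another call to make-counter keeps its own independent count, because each invocation of make-counter allocates a new environment.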
In addition to lexical scope, COMMON LISP provides dynamic scope. This is lisp slang for the combination of temporary extent and global scope. Dynamic scoping is a type of scoping that is unique to lisp in that it offers a very different behaviour but shares an identical syntax with lexical scope. In COMMON LISP we deliberately choose to call attention to variables accessed with dynamic scope by calling them special variables. These special variables can be defined with defvar. Some programmers follow a convention of prefixing and postfixing special variable names with asterisks, like *temp-special*. This is called the earmuff convention.
Special variable declarations look like this:
(defvar temp-special)
When defined like this, temp-special will be designated special but will not be initialised with a value. In this state, a special variable is said to be unbound. Only special variables can be unbound; lexical variables are always bound and thus always have values. Another way of thinking of this is that by default all symbols represent lexically unbound variables. Just as with lexical variables, we can assign a value to special variables with setq or setf. Some lisps, like Scheme, do not have dynamic scope. Others, like EuLisp[SMALL-PIECES-P46], use different syntax for accessing lexical versus special variables. But in COMMON LISP the syntax is shared. Many lispers consider this a feature. Here we assign a value to our special variable temp-special:
(setq temp-special 1)
So far, this special variable doesn’t seem that special. It seems to be just another variable, bound in some sort of global namespace. This is because we have only bound it once—its default special global binding. Special variables are most interesting when they are re-bound, or shadowed, by new environments. If we define a function that simply evaluates and returns temp-special:
(defun temp-special-returner ()
temp-special)
This function can be used to examine the value that lisp evaluates temp-special to be at the moment in time when it was called:
* (temp-special-returner)
1
This is sometimes referred to as evaluating the form in a null lexical environment. The null lexical environment obviously doesn't contain any lexical bindings. Here the value of temp-special returned is that of its global special value, 1. But if we evaluate it in a non-null lexical environment, one that contains a binding for our special variable, the specialness of temp-special reveals itself:
* (let ((temp-special 2))
(temp-special-returner))
2
Notice that the value 2 was returned, meaning that the temp-special value was taken from our let environment, not its global special value. If this still does not seem interesting, see how this cannot be done in most other conventional programming languages as exemplified by this piece of Blub pseudo-code:
int global_var = 0;
function whatever() {
int global_var = 1;
do_stuff_that_uses_global_var();
}
function do_stuff_that_uses_global_var() {
// global_var is 0
}
While the memory locations or register assignments for lexical bindings are known at compile-time, special variable bindings are determined at run-time, in a sense. Thanks to a clever trick, special variables aren't as inefficient as they seem. A special variable actually always does refer to the same location in memory. When you use let to bind a special variable, you are actually compiling in code that will store a copy of the variable, over-write the memory location with a new value, evaluate the forms in the let body, and, finally, restore the original value from the copy.
Special variables are perpetually associated with the symbol used to name them. The location in memory referred to by a special variable is called the symbol-value cell of a symbol. This is in direct contrast to lexical variables. Lexical variables are only indicated with symbols at compile-time. Because lexical variables can only be accessed from inside the lexical scope of their bindings, the compiler has no reason to even remember the symbols that were used to reference lexical variables so it will remove them from compiled code.
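The symbol-value cell can be examined directly (a small illustration of my own; the variable name *earmuffed* is arbitrary):

```lisp
;; A special variable's value lives in the symbol-value cell of its symbol:
(defvar *earmuffed* 42)
(symbol-value '*earmuffed*)   ; => 42

;; A lexical variable leaves no such trace: after compilation the
;; symbol x below need not survive in the code at all, and
;; (symbol-value 'x) would signal an unbound-variable error.
(let ((x 1)) x)               ; => 1
```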
Although COMMON LISP does offer the invaluable feature of dynamic scope, lexical variables are the most common. Dynamic scoping used to be a defining feature of lisp but has, since COMMON LISP, been almost completely replaced by lexical scope. Since lexical scoping enables things like lexical closures (which we examine shortly), as well as more effective compiler optimisations, the superseding of dynamic scope is mostly seen as a good thing. However, the designers of COMMON LISP have left us a very transparent window into the world of dynamic scoping, now acknowledged for what it really is: special.
Let It Be Lambda
Let is a lisp special form for creating an environment with names (bindings) initialised to the results of evaluating corresponding forms. These names are available to the code inside the let body while its forms are evaluated consecutively, returning the result of the final form. Although what let does is unambiguous, how it does it is deliberately left unspecified. What let does is separated from how it does it. Somehow, let needs to provide a data structure for storing pointers to values.
Cons cells are undeniably useful for holding pointers, as we saw above, but there are numerous structures that can be used. One of the best ways to store pointers in lisp is to let lisp take care of it for you with the let form. With let you only have to name (bind) these pointers and lisp will figure out how best to store them for you. Sometimes we can help the compiler make this more efficient by giving it extra bits of information in the form of declarations:
(defun register-allocated-fixnum ()
(declare (optimize (speed 3) (safety 0)))
(let ((acc 0))
(loop for i from 1 to 100 do
(incf (the fixnum acc)
(the fixnum i)))
acc))
For example, in register-allocated-fixnum we provide some hints to the compiler that allow it to sum the integers from 1 to 100 very efficiently. When compiled, this function will allocate the data in registers, eliminating the need for pointers altogether. Even though it seems we’ve asked lisp to create an indefinite extent environment to hold acc and i, a lisp compiler will be able to optimise this function by storing the values solely in CPU registers. The result might be this machine code:
; 090CEB52: 31C9 XOR ECX, ECX
; 54: B804000000 MOV EAX, 4
; 59: EB05 JMP L1
; 5B: L0: 01C1 ADD ECX, EAX
; 5D: 83C004 ADD EAX, 4
; 60: L1: 3D90010000 CMP EAX, 400
; 65: 7EF4 JLE L0
Notice that 4 represents 1 and 400 represents 100 because fixnums are shifted by two bits in compiled code. This has to do with tagging, a way to pretend that something is a pointer but actually store data inside it. Our lisp compiler's tagging scheme has the nice benefit that no shifting needs to occur to index word-aligned memory.
But if lisp determines that you might want to refer to this environment later on it will have to use something less transient than a register. A common structure for storing pointers in environments is an array. If each environment has an array and all the variable references enclosed in that environment are just references into this array, we have an efficient environment with potentially indefinite extent.
As mentioned above, let will return the evaluation of the last form in its body. This is common for many lisp special forms and macros, so common that this pattern is often referred to as an implicit progn, after the progn special form designed to do nothing but this. Sometimes the most valuable thing to have a let form return is an anonymous function which takes advantage of the lexical environment supplied by the let form. To create these functions in lisp we use lambda.
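The implicit progn is easy to sketch at the REPL: every form in the let body is evaluated in order, but only the value of the last form is returned:

```lisp
* (let ((x 1))
    (format t "evaluated for side effect~%")
    (+ x 41))
evaluated for side effect
42
```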
Lambda is a simple concept that can be intimidating because of its flexibility and importance. The lambda of lisp and scheme owes its roots to Alonzo Church's logic system but has evolved and adapted into a construct that is altogether lisp's own. Lambda is a concise way to repeatably assign temporary names (bindings) to values for a specific lexical context, and it underlies lisp's concept of a function. A lisp function is very different from the mathematical function description that Church had in mind. This is because lambda has evolved as a powerful, practical tool at the hands of generations of lispers, stretching and extending it much further than early logicians could have foreseen.
Despite the reverence lisp programmers have for lambda, there is nothing inherently special about the notation. As we will see, lambda is just one of many possible ways to express this sort of variable naming. In particular, we will see that macros allow us to customise the renaming of variables in ways that are effectively impossible in other programming languages. But after exploring this, we will return to lambda and discover that it is very close to the optimal notation for expressing such naming. This is no accident. Church, as dated and irrelevant as he might seem to our modern programming environment, really was on to something. His mathematical notation, along with its numerous enhancements in the hands of generations of lisp professionals, has evolved into a flexible, general tool.
Lambda is so useful that, like many of lisp’s features, most modern languages are beginning to import the idea from lisp into their own systems. Some language designers feel that lambda is too lengthy, instead using fn or some other abbreviation. On the other hand, some regard lambda as a concept so fundamental that obscuring it with a lesser name is next to heresy. In this book, although we will describe and explore many variations on lambda, we happily call it lambda, just as generations of lisp programmers before us.
But what is lisp’s lambda? First off, as with all names in lisp, lambda is a symbol. We can quote it, compare it, and store it in lists. Lambda only has a special meaning when it appears as the first element of a list. When it appears there, the list is referred to as a lambda form or as a function designator. But this form is not a function. This form is a list data structure that can be converted into a function using the function special form:
* (function '(lambda (x) (+ 1 x)))
#<Interpreted Function>
COMMON LISP provides us a convenient shortcut for this with the #' (sharp-quote) read macro. Instead of writing function as above, for the same effect we can take advantage of this shortcut:
* #'(lambda (x) (+ 1 x))
#<Interpreted Function>
As a further convenience, lambda is also defined as a macro that expands into a call to the function special form above. The COMMON LISP ANSI standard requires [ANSI-CL-ISO-COMPATIBILITY] a lambda macro defined like so:
(defmacro lambda (&whole form &rest body)
  (declare (ignore body))
  `#',form)
Ignore the ignore declaration for now. This macro is just a simple way to automatically apply the function special form to your function designators. This macro allows us to evaluate function designators to create functions because they are expanded into sharp-quoted forms:
* (lambda (x) (+ 1 x))
#<Interpreted Function>
There are few good reasons to prefix your lambda forms with #' thanks to the lambda macro. Because this book makes no effort to support pre-ANSI COMMON LISP environments, backwards-compatibility reasons are easily rejected. But what about stylistic objections? Paul Graham, in ANSI COMMON LISP [GRAHAM-ANSI-CL], considers this macro, along with its brevity benefits, a "specious sort of elegance at best". Graham's objection seems to be that since you still need to sharp-quote functions referenced by symbols, the system seems asymmetric. However, not sharp-quoting lambda forms is actually a stylistic improvement because it highlights that asymmetry: sharp-quoting a symbol refers to the second (function) namespace, whereas functions created by lambda forms are, of course, nameless.
Without even invoking the lambda macro, we can use lambda forms as the first argument in a function call. Just like when a symbol is found in this position and lisp assumes we are referencing the symbol-function cell of the symbol, if a lambda form is found, it is assumed to represent an anonymous function:
* ((lambda (x) (+ 1 x)) 2)
3
But note that just as you can’t call a function to dynamically return the symbol to be used in a regular function call, you can’t call a function to return a lambda form in the function position. For both of these tasks, use either funcall or apply.
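For those dynamic cases, funcall and apply do the job. A quick sketch:

```lisp
* (funcall (lambda (x) (+ 1 x)) 2)      ; call a function object directly
3
* (apply (lambda (x y) (+ x y)) '(1 2)) ; arguments supplied as a list
3
```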
A benefit of lambda expressions that is largely foreign to functions in C and other languages is that lisp compilers can often optimise them out of existence completely. For example, although compiler-test looks like it applies an increment function to the number 2 and returns the result, a decent compiler will be smart enough to know that this function always returns the value 3 and will simply return that number directly, invoking no functions in the process. This is called lambda folding:
(defun compiler-test ()
  (funcall
    (lambda (x) (+ 1 x))
    2))
An important efficiency observation is that a compiled lambda form is a constant form. This means that after your program is compiled, all references to that function are simply pointers to a chunk of machine code. This pointer can be returned from functions and embedded in new environments, all with no function creation overhead. The overhead was absorbed when the program was compiled. In other words, a function that returns another function will simply be a constant time pointer return function:
(defun lambda-returner ()
  (lambda (x) (+ 1 x)))
This is in direct contrast to the let form, which is designed to create a new environment at run-time and as such is usually not a constant operation because of the garbage collection overhead implied by lexical closures, which are of indefinite extent.
(defun let-over-lambda-returner ()
  (let ((y 1))
    (lambda (x)
      (incf y x))))
Every time let-over-lambda-returner is invoked, it must create a new environment, embed the constant pointer to the code represented by the lambda form into this new environment, then return the resulting closure. We can use time to see just how small this environment is:
* (progn
    (compile 'let-over-lambda-returner)
    (time (let-over-lambda-returner)))
; Evaluation took:
; …
; 24 bytes consed.
;
#<Closure Over Function>
If you try to call compile on a closure, you will get an error saying you can't compile functions defined in non-null lexical environments [CLTL2-P677]. You can't compile closures, only the functions that create closures. When you compile a function that creates closures, the closures it creates will also be compiled [ON-LISP-P25].
The use of a let enclosing a lambda above is so important that we will spend the remainder of this chapter discussing the pattern and variations on it.
Consider first what happens when we wrap yet another lambda around a let over lambda:
(lambda ()
  (let ((counter 0))
    (lambda () (incf counter))))
When the lambda over let over lambda is invoked, a new closure containing a counter binding will be created and returned. Remember that lambda expressions are constants: mere pointers to machine code. This expression is a simple bit of code that creates new environments to close over the inner lambda expression (which is itself a constant, compiled form), just as we were doing at the REPL.
With object systems, a piece of code that creates objects is called a class. But lambda over let over lambda is subtly different from the classes of many languages. While most languages require classes to be named, this pattern avoids naming altogether. Lambda over let over lambda forms can be called anonymous classes.
Although anonymous classes are often useful, we usually do name classes. The easiest way to give them names is to recognise that such classes are regular functions. How do we normally name functions? With the defun form, of course. After naming, the above anonymous class becomes:
(defun counter-class ()
  (let ((counter 0))
    (lambda () (incf counter))))
Where did the first lambda go? Defun supplies an implicit lambda around the forms in its body. When you write regular functions with defun they are still lambda forms underneath but this fact is hidden beneath the surface of the defun syntax.
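A quick REPL sketch shows counter-class in action: each invocation creates a fresh closure with its own counter binding, and each closure remembers its state between calls:

```lisp
* (defvar my-counter (counter-class))
MY-COUNTER
* (funcall my-counter)
1
* (funcall my-counter)
2
* (funcall (counter-class))   ; a brand-new closure counts from its own 0
1
```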
BLOCK-SCANNER
(defun block-scanner (trigger-string)
  (let* ((trig (coerce trigger-string 'list))
         (curr trig))
    (lambda (data-string)
      (let ((data (coerce data-string 'list)))
        (dolist (c data)
          (if curr
            (setq curr
              (if (char= (car curr) c)
                (cdr curr) ; next char
                trig))))   ; start over
        (not curr)))))     ; return t if found
In order to motivate the use of closures, a realistic example is presented: block-scanner. The problem block-scanner solves is that for some forms of data transfer the data is delivered in groups (blocks) of uncertain sizes. These sizes are generally convenient for the underlying system but not for the application programmer, often being determined by things like operating system buffers, hard drive blocks, or network packets. Scanning a stream of data for a specific sequence requires more than just scanning each block as it comes in with a regular, stateless procedure. We need to keep state between the scanning of each block because it is possible that the sequence we are scanning for will be split between two (or more) blocks.
The most straightforward, natural way to implement this stored state in modern languages is with a closure. An initial sketch of a closure-based block scanner is given as block-scanner. Like all lisp development, creating closures is an iterative process. We might start off with code given in block-scanner and decide to improve its efficiency by avoiding coercion of strings to lists, or possibly improve the information gathered by counting the number of occurrences of the sequence.
Although block-scanner is an initial implementation waiting to be improved, it is still a good demonstration of the use of lambda over let over lambda. Here is a demonstration of its use, pretending to be some sort of communications tap watching out for a specific black-listed word, jihad:
* (defvar scanner
    (block-scanner "jihad"))
SCANNER
* (funcall scanner "We will start ")
NIL
* (funcall scanner "the ji")
NIL
* (funcall scanner "had tomorrow.")
T
A let can also enclose defun forms, so that several named functions share the same binding. In this variation, every counter created by counter-class shares a direction that can be toggled globally:

(let ((direction 'up))
  (defun toggle-counter-direction ()
    (setq direction
          (if (eq direction 'up)
            'down
            'up)))
  (defun counter-class ()
    (let ((counter 0))
      (lambda ()
        (if (eq direction 'up)
          (incf counter)
          (decf counter))))))
An Example: Compiling a Simple Function. One of the important consequences of the view of LAMBDA and function calls presented above is that programs can be written in a style based on the lambda-calculus-theoretic models of higher-level constructs such as DO loops.
Thus, compiler temporaries and simple user variables are treated on a completely equal basis. This idea was used in [Johnsson 75] but without any explanation of why such equal treatment is justified. Here we have some indication that there is conceptually no difference between a user variable and a compiler-generated temporary.
In any case, names are eliminated at compile time, and so by run time the distinction between user names and the compiler's generated names has been lost. Thus, at the low level, we may view LAMBDA as a renaming operation which has more to do with the internal workings of the compiler (or the interpreter), and with a notation for indicating where quantities are referred to, than with the semantics as such of the computation to be performed by the program.
Furthermore, the 'access depth' of a lexical variable is equal to the number of closures which contain it, and in typical programs this depth is small (less than 5). In an optimizing compiler for lexically scoped LISP it would not be necessary to create environment structures in a standard form. Local variables could be kept in any available registers if desired. It would not be necessary to interface these environment structures to the interpreter. Because the scoping would be strictly lexical, a reference to a variable in a compiled environment structure must occur in compiled code appearing within the LAMBDA that bound the variable, and so no interpreted reference could refer to such a variable. Similarly, no compiled variable reference could refer to an environment structure created by the interpreter.
(This problem can be fixed in any case if the compiler outputs an appropriate map of variable locations for use by the debugger.)
Basic Issues. The compiler will need to perform a large amount of global data flow and data type analysis; much of this can be based on the approach used by Wulf in the BLISS-11 compiler [Wulf 75], augmented by some general flow-graph analysis scheme. (The BLISS-11 flow analysis is not sufficiently general in that it only analyzes single functions, and within a function the only control constructs are conditionals, loops, and block exits.)
1.3 Write up on Tech Geek History: An Extensive Summary of Prolog
Logic Programming is the name of a programming paradigm which was developed in the 70s. Rather than viewing a computer program as a step-by-step description of an algorithm, the program is conceived as a logical theory, and a procedure call is viewed as a theorem of which the truth needs to be established. Thus, executing a program means searching for a proof. In traditional (imperative) programming languages, the program is a procedural specification of how a problem needs to be solved. In contrast, a logic program concentrates on a declarative specification of what the problem is. Readers familiar with imperative programming will find that Logic Programming requires quite a different way of thinking. Indeed, their knowledge of the imperative paradigm will be partly incompatible with the logic paradigm.
This is certainly true with regard to the concept of a program variable. In imperative languages, a variable is a name for a memory location which can store data of certain types. While the contents of the location may vary over time, the variable always points to the same location. In fact, the term ‘variable’ is a bit of a misnomer here, since it refers to a value that is well-defined at every moment. In contrast, a variable in a logic program is a variable in the mathematical sense, i.e. a placeholder that can take on any value. In this respect, Logic Programming is therefore much closer to mathematical intuition than imperative programming.
Imperative programming and Logic Programming also differ with respect to the machine model they assume. A machine model is an abstraction of the computer on which programs are executed. The imperative paradigm assumes a dynamic, state-based machine model, where the state of the computer is given by the contents of its memory. The effect of a program statement is a transition from one state to another. Logic Programming does not assume such a dynamic machine model. Computer plus program represent a certain amount of knowledge about the world, which is used to answer queries.
https://book.simply-logical.space/src/simply-logical.html
The Prolog programmer straightforwardly describes the grandfather concept in explicit, natural terms: a grandfather is a father of a parent. Here is the Prolog notation: grandfather(X, Z) :- father(X, Y), parent(Y, Z).
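To make the rule concrete, here is a minimal sketch of a knowledge base (the names and family facts are invented purely for illustration):

```prolog
% facts: an invented family
father(peter, john).
father(john, mary).
parent(john, mary).
parent(mary, ann).

% a grandfather is a father of a parent
grandfather(X, Z) :- father(X, Y), parent(Y, Z).
```

The query ?- grandfather(peter, mary). succeeds because father(peter, john) and parent(john, mary) both hold, while ?- grandfather(Who, ann). binds Who = john.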
Prolog is a programming language centered around a small set of basic mechanisms, including pattern matching, tree-based data structuring, and automatic backtracking. This small set constitutes a surprisingly powerful and flexible programming framework. Prolog is especially well suited for problems that involve objects – in particular, structured objects – and relations between them. For example, it is an easy exercise in Prolog to express spatial relationships between objects, such as: the top sphere is behind the left one. It is also easy to state a more general rule: if
X is closer to the observer than Y and Y is closer than Z,
then X must be closer than Z.
Prolog can now reason about the spatial relations and their consistency with respect to the general rule. Features like this make Prolog a powerful language for Artificial Intelligence and non-numerical programming in general. Prolog stands for programming in logic – an idea that emerged in the early 1970s to use logic as a programming language. The early developers of this idea included Robert Kowalski at Edinburgh (on the theoretical side), Maarten van Emden at Edinburgh (experimental demonstration), and Alain Colmerauer at Marseilles (implementation). The present popularity of Prolog is largely due to David Warren's efficient implementation at Edinburgh in the mid 1970s.
Since Prolog has its roots in mathematical logic it is often introduced through logic. However, such a mathematically intensive introduction is not very useful if the aim is to teach Prolog as a practical programming tool. Whereas conventional languages are procedurally oriented, Prolog introduces the descriptive, or declarative, view.
This greatly alters the way of thinking about problems and makes learning to program in Prolog an exciting intellectual challenge.
Fundamental AI techniques are introduced and developed in depth towards their implementation in Prolog, resulting in complete programs. These can be used as building blocks for sophisticated applications. Techniques to handle important data structures, such as trees and graphs, are also included, although they do not strictly belong to AI. These techniques are often used in AI programs, and implementing them helps to learn the general skills of Prolog programming.
Throughout, the emphasis is on the clarity of programs; efficiency tricks that rely on implementation-dependent features are avoided. The material combines Prolog and Artificial Intelligence: it can be used in a Prolog course or in an AI course in which the principles of AI are brought to life through Prolog.
Characteristics of PROLOG
There are a number of reasons why Prolog is so suitable for the development of advanced software systems:
- Logic programming. Prolog is a logical language; the meaning of a significant fraction of the Prolog language can be completely described and understood in terms of the Horn subset of first-order predicate logic (Horn-clause logic). This is the mathematical foundation of the language, explaining its expressiveness.
- A single data structure as the foundation of the language. Prolog offers the term as the basic data structure to implement any other data structure (lists, arrays, trees, records, queues, etc.). The entire language is tailored to the manipulation of terms.
- Simple syntax. The syntax of Prolog is much simpler than that of imperative programming languages. From a syntactical point of view, a Prolog program is simply a sequence of terms.
For example, the following Prolog program p(X) :- q(X). corresponds to the term :-(p(X),q(X)). Whereas the definition of the formal syntax of a language such as Java or Pascal requires many pages, Prolog’s syntax can be described in a few lines. Finally, the syntax of data is exactly the same as the syntax of programs!
Backtracking is a mechanism in Prolog that systematically searches for solutions of a problem specified in terms of Prolog clauses. A particular Prolog specification may have more than one solution, and Prolog normally tries to find one of them. When it is not possible to prove a subgoal given an already partially determined solution to a problem, Prolog undoes all substitutions which were made to reach this subgoal, and alternative substitutions are tried. The search space that is examined by the Prolog system has the form of a tree. More insight into the structure of this space is obtained from a Prolog program that itself defines a tree data structure:

/1/ branch(a,b).
/2/ branch(a,c).
/3/ branch(c,d).
/4/ branch(c,e).
/5/ path(X,X).
/6/ path(X,Y) :- branch(X,Z), path(Z,Y).
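Posing a query with an unbound variable shows the backtracking at work: each time the user asks for another answer with `;`, Prolog undoes its last choice of branch clause and tries the next one. A sketch of the expected session against the program above:

```prolog
?- path(a, Y).
Y = a ;    % clause /5/: the empty path
Y = b ;    % clause /6/ via branch(a,b), then /5/
Y = c ;    % backtrack: branch(a,c), then /5/
Y = d ;    % deeper: branch(c,d)
Y = e.     % and finally branch(c,e)
```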
Prolog was initially designed to process natural languages. Its application in various problem-solving areas has brought out its qualities, but has also made clear its limits. Some of these limitations have been overcome as a result of more and more efficient implementations and ever richer environments. The fact remains, however, that the core of Prolog, namely Alan Robinson's unification algorithm [22], has not fundamentally changed since the time of the first Prolog implementations, and is becoming less and less significant compared to the ever-increasing number of external procedures, such as the procedures used for numerical processing. These external procedures are not easy to use. Their invocation requires that certain parameters are completely known, and this is not in line with the general Prolog philosophy that it should be possible anywhere and at any time to talk about an unknown object x. To improve this state of affairs, Colmerauer and colleagues fundamentally reshaped Prolog by integrating, at the unification level: (1) a refined manipulation of trees, including infinite trees, together with a specific treatment of lists, (2) a complete treatment of two-valued Boolean algebra, (3) a treatment of the operations of addition, subtraction, multiplication by a constant and of the order relations, and (4) the general processing of the disequality relation. By doing so they replaced the very concept of unification by the concept of constraint solving in a chosen mathematical structure. By mathematical structure we mean here a domain equipped with operations and relations, the operations being not necessarily defined everywhere. The result of incorporating the above features into Prolog is the new programming language Prolog III, whose foundations and capabilities the paper establishes and illustrates using representative examples.
These foundations, which in fact apply to a whole family of "Prolog III like" programming languages, are presented by means of simple mathematical concepts without explicit recourse to first-order logic.
https://cs.bme.hu/~szeredi/nlp/Colmerauer-Prolog-III.pdf
There are only three basic constructs in Prolog: facts, rules, and queries.
A collection of facts and rules is called a knowledge base (or a database) and Prolog programming is all about writing knowledge bases. That is, Prolog programs simply are knowledge bases, collections of facts and rules which describe some collection of relationships that we find interesting.
So how do we use a Prolog program? By posing queries. That is, by asking questions about the information stored in the knowledge base. The computer will automatically find the answer (either True or False) to our queries.
Prolog is a language built around the Logical Paradigm: a declarative approach to problem-solving.
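A minimal sketch of this fact/rule/query workflow (the predicates and names are invented purely for illustration):

```prolog
% facts
likes(mary, wine).
likes(john, wine).
likes(john, mary).

% a rule
happy(john) :- likes(john, wine), likes(john, mary).
```

The query ?- likes(john, wine). is answered true directly from a fact; ?- happy(john). is answered true via the rule; and a query such as ?- likes(wine, john). matches nothing in the knowledge base, so Prolog answers false.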
https://www.101computing.net/prolog-family-tree/
Prolog under a database perspective. Prolog is interesting for students and researchers in the database area for a number of reasons:
- Prolog itself may serve as a prototype database language, as relational algebra can be expressed directly, including tabular relations, views, and integrity constraints.
- Prolog is a programming language that is very easy to learn and in which it is fairly straightforward to implement and experiment with different database functionalities.
- Prolog is based on first-order logic, so that programs have a clear semantics that can refer to a large body of knowledge and tradition.
- It represents live logic in a way that is much more appealing than a myriad of Greek letters and strange symbols on paper, and it is a good supplement to, or a starting point for, anyone who wants to dig into the database literature, which is inherently loaded with logic.
We do not, in general, advocate using Prolog for storing really large amounts of data, as traditional database systems are much better for this purpose. The real advantage of Prolog in this context is as an experimental tool, which can give a clearer understanding of underlying concepts and serve as a workbench for developing new methods in the shape of running prototypes. It is much easier to play with different variations of some advanced database functionality in a Prolog program than by modifying the source code of a big relational database system! In what follows, we introduce Prolog to a level which is intended to make it possible for the reader to write interesting and working Prolog programs, to see the similarity with databases on the one side and logic on the other, and to focus on those aspects of Prolog that are useful for the understanding of databases and their semantics in general.
Prolog is a programming language based on a subset of first-order logic and the syntactic and
semantic notions of Prolog are more or less taken over from logic. There are, however, a few (and
occasionally quite unfortunate) clashes in usage between the two worlds that will be noticed in the
sequel, mostly as footnotes.
In the following we introduce the basic elements of the Prolog language, which coincide with
the subset called Datalog;
Datalog, like Prolog, is a logic programming language. However, the semantics of Datalog differ from the
semantics of Prolog. Syntactically, Datalog is a subset of Prolog.
In a Datalog program, the order of clauses is not important: a Datalog program can be thought of
as a set of clauses, rather than a list. Moreover, unlike Prolog queries, Datalog queries (on finite sets) are
guaranteed to terminate. Indeed, evaluation of Datalog programs is polynomial in the size of the program.
More specifically, it is possible to enumerate all the ground terms implied by a Datalog program (i.e., a set
of clauses) in time polynomial in the number of clauses in the program. (The degree of the polynomial is
essentially the maximum number of variables that appear in any rule.)
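A small example, using the standard transitive-closure program, makes the guarantee concrete:

```prolog
edge(a, b).
edge(b, c).

reachable(X, Y) :- edge(X, Y).
reachable(X, Z) :- edge(X, Y), reachable(Y, Z).
```

The implied ground terms are exactly edge(a,b), edge(b,c), reachable(a,b), reachable(b,c), and reachable(a,c). Enumerating them is polynomial in the size of the program, with the degree bounded by the three variables appearing in the second rule; the clauses could be given in any order without changing the result.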
In order to achieve decidability, Datalog is less expressive than Prolog. In particular, Datalog makes the
following restrictions.
- Datalog does not allow functions with arity greater than 0. That is, the syntax for terms permits only variables and constants:

  Terms s, t ::= X | f

  So, for example, whereas append(cons(alice, []), []) is a syntactically valid Prolog atom, it is not syntactically valid Datalog.
- Datalog requires that any variable that appears in the head of a clause also appears in the body of the clause. For example, the clause foo(X, Y) :- bar(X). is not allowed in Datalog, since the variable Y appears in the head of the clause, but not the body.
(There are also some additional restrictions when we allow negation.)
https://groups.seas.harvard.edu/courses/cs152/2015sp/lectures/lec23-prolog.pdf
Conclusion:
LISP and Prolog are indeed forerunners of Artificial Intelligence languages. Yet LISP presents significant differences centered on compiler-versus-interpreter topics, and its lambda calculus functionality gives LISP its trademark in functional programming, just as first-order logic gives Prolog its trademark in logic programming. These were the key factors discussed in this study.
Reference:
https://groups.seas.harvard.edu/courses/cs152/2015sp/lectures/lec23-prolog.pdf