Write-up on How Discrete Mathematics Topics (Logic, Trees, and Lambda Expressions) Could Be Utilized in a High School Computer Science Curriculum

Discrete Mathematics

Significance of the Study

The key aspect of rooted trees — which is both their greatest advantage and greatest limitation — is that every node has one and only one path to the root. This behavior is inherited from free trees: as we noted, every node has only one path to every other node.

Trees have a myriad of applications. Think of the files and folders on your hard drive: at the top is the root of the filesystem (perhaps “/” on Linux/Mac or “C:\\” on Windows) and underneath that are named folders. Each folder can contain files as well as other named folders, and so on down the hierarchy. The result is that each file has one, and only one, distinct path to it from the top of the filesystem. The file can be stored, and later retrieved, in exactly one way.

An “org chart” is like this: the CEO is at the top, then underneath her are the VP’s, the Directors, the Managers, and finally the rank-and-file employees. So is a military organization: the Commander in Chief directs generals, who command colonels, who command majors, who command captains, who command lieutenants, who command sergeants, who command privates.

The human body is even a rooted tree of sorts: it contains skeletal, cardiovascular, digestive, and other systems, each of which is comprised of organs, then tissues, then cells, molecules, and atoms. In fact, anything that has this sort of part-whole containment hierarchy is just asking to be represented as a tree.

In computer programming, the applications are too numerous to name. Compilers scan code and build a “parse tree” of its underlying meaning. HTML is a way of structuring plain text into a tree-like hierarchy of displayable elements. AI chess programs build trees representing their possible future moves and their opponent’s probable responses, in order to “see many moves ahead” and evaluate their best options. Object-oriented designs involve “inheritance hierarchies” of classes, each one specialized from a specific other. Etc. Other than a simple sequence (like an array), trees are probably the most common data structure in all of computer science.

Rooted tree terminology

root.

The node at the top of the tree, which is A in our example. Note that unlike trees in the real world, computer science trees have their root at the top and grow down. Every tree has a root except the empty tree, which is the “tree” that has no nodes at all in it. (It’s kind of weird thinking of “nothing” as a tree, but it’s kind of like the empty set ∅, which is still a set.)

parent.

Every node except the root has one parent: the node immediately above it. D’s parent is C, C’s parent is B, F’s parent is A, and A has no parent.

child.

Some nodes have children, which are nodes connected directly below it. A’s children are F and B, C’s are D and E, B’s only child is C, and E has no children.

sibling.

A node with the same parent. E’s sibling is D, B’s is F, and none of the other nodes have siblings.

ancestor.

Your parent, grandparent, great-grandparent, etc., all the way back to the root. B’s only ancestor is A, while E’s ancestors are C, B, and A. Note that F is not C’s ancestor, even though it’s above it on the diagram: there’s no connection from C to F, except back through the root (which doesn’t count).

descendent.

Your children, grandchildren, great-grandchildren, etc., all the way to the leaves. B’s descendents are C, D, and E, while A’s are F, B, C, D, and E.

leaf.

A node with no children. F, D, and E are leaves. Note that in a (very) small tree, the root could itself be a leaf.

internal node.

Any node that’s not a leaf. A, B, and C are the internal nodes in our example.

depth (of a node).

A node’s depth is the distance (in number of edges, or “hops”) from it to the root. The root itself has depth zero. In our example, B is of depth 1, E is of depth 3, and A is of depth 0.

height (of a tree).

A rooted tree’s height is the maximum depth of any of its nodes; i.e., the maximum distance from the root to any node. Our example has a height of 3, since the “deepest” nodes are D and E, each with a depth of 3. A tree with just one node is considered to have a height of 0. Bizarrely, but to be consistent, we’ll say that the empty tree has height -1! Strange, but what else could it be? To say it has height 0 seems inconsistent with a one-node tree also having height 0. At any rate, this won’t come up much.
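The depth and height definitions above can be sketched in Python. The dictionary encoding of the example tree here (A at the root, with children F and B, and so on) is an assumption for illustration, not the text’s notation:

```python
# Assumed encoding of the example tree: each node maps to its list of children.
children = {"A": ["F", "B"], "F": [], "B": ["C"], "C": ["D", "E"], "D": [], "E": []}
parent = {c: p for p, kids in children.items() for c in kids}

def depth(node):
    # Distance (in edges) from the node up to the root; the root has depth 0.
    return 0 if node not in parent else 1 + depth(parent[node])

def height():
    # Maximum depth of any node; by convention an empty tree would have height -1.
    return max(depth(n) for n in children) if children else -1

print(depth("A"), depth("B"), depth("E"), height())  # 0 1 3 3
```

Running this confirms the worked values in the text: B has depth 1, E has depth 3, and the tree’s height is 3 (the depth of D and E).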

level.

All the nodes with the same depth are considered on the same “level.” B and F are on level 1, and D and E are on level 3. Nodes on the same level are not necessarily siblings. If F had a child named G in the example diagram, then G and C would be on the same level (2), but would not be siblings because they have different parents. (We might call them “cousins” to continue the family analogy.)

subtree.

Finally, much of what gives trees their expressive power is their recursive nature. This means that a tree is made up of other (smaller) trees. Consider our example. It is a tree with a root of A. But the two children of A are each trees in their own right! F itself is a tree with only one node. B and its descendents make another tree with four nodes. We consider these two trees to be subtrees of the original tree. The notion of “root” shifts somewhat as we consider subtrees — A is the root of the original tree, but B is the root of the second subtree. When we consider B’s children, we see that there is yet another subtree, which is rooted at C. And so on. It’s easy to see that any subtree fulfills all the properties of trees, and so everything we’ve said above applies also to it.

Binary trees (BT’s)

The nodes in a rooted tree can have any number of children. There’s a special type of rooted tree, though, called a binary tree which we restrict by simply saying that each node can have at most two children. Furthermore, we’ll label each of these two children as the “left child” and “right child.” (Note that a particular node might well have only a left child, or only a right child, but it’s still important to know which direction that child is.)

The left half of the figure is a binary tree, but the right half is not (C has three children). A larger binary tree (of height 4) is also shown.

Traversing binary trees

There were two ways of traversing a graph: breadth-first and depth-first. Curiously, there are three ways of traversing a tree: pre-order, post-order, and in-order. All three begin at the root, and all three consider each of the root’s children as subtrees. The difference is in the order of visitation.
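The three orders can be sketched in Python. The tuple encoding of a binary tree below, and the tiny three-node example, are assumptions for illustration:

```python
# Each binary-tree node is a tuple (value, left, right); None is the empty tree.
def pre_order(t):
    if t is None:
        return []
    value, left, right = t
    return [value] + pre_order(left) + pre_order(right)    # root first

def in_order(t):
    if t is None:
        return []
    value, left, right = t
    return in_order(left) + [value] + in_order(right)      # root between subtrees

def post_order(t):
    if t is None:
        return []
    value, left, right = t
    return post_order(left) + post_order(right) + [value]  # root last

# A small assumed example: B at the root, with left child A and right child C.
tree = ("B", ("A", None, None), ("C", None, None))
print(pre_order(tree))   # ['B', 'A', 'C']
print(in_order(tree))    # ['A', 'B', 'C']
print(post_order(tree))  # ['A', 'C', 'B']
```

Notice that the only difference between the three functions is where the root’s value is placed relative to the two recursive calls.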

Figure 1-2

Rules in Prolog

Using rules, we can build relationships among facts.

% Define what a mother, father, and a grandparent is

mother(M, C) :- parent(M, C), female(M).

father(F, C) :- parent(F, C), male(F).

grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

The rule named mother defines the meaning of the mother relationship: If M is a parent of C, and if M is female, then M is a mother.

Similarly, the father rule defines the meaning of the father relationship: If F is a parent of C, and if F is male, then F is a father.

Lastly, the grandparent rule defines the meaning of the grandparent relationship: if X is a parent of Y, and Y is a parent of Z, then X is a grandparent of Z.
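The same three rules can be sketched in Python as functions over a fact base. The specific names in the facts (alice, bob, carol) are made up for illustration:

```python
# Hypothetical facts, mirroring the Prolog parent/2, female/1, and male/1 predicates.
parents = {("alice", "bob"), ("bob", "carol")}   # (parent, child) pairs
female = {"alice", "carol"}
male = {"bob"}
people = female | male

def mother(m, c):
    # mother(M, C) :- parent(M, C), female(M).
    return (m, c) in parents and m in female

def father(f, c):
    # father(F, C) :- parent(F, C), male(F).
    return (f, c) in parents and f in male

def grandparent(x, z):
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    return any((x, y) in parents and (y, z) in parents for y in people)

print(mother("alice", "bob"), father("bob", "carol"), grandparent("alice", "carol"))
# True True True
```

The Python version has to search explicitly for the intermediate person Y; Prolog’s resolution engine performs that search for you, which is exactly what makes logic programming appealing here.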

Logical Programming

Logic is the discipline concerned with unassailably valid reasoning. By valid we mean that if we start with true statements and from them deduce new statements, following the given logical laws of deduction, we will always end up with new statements that are also true. Logics are thus systems of symbols and rules for manipulating them that have the property that syntactic deduction, involving mechanical manipulation of strings, is always a semantically valid operation, involving the truths of derived statements.

There is not just one logic. There are many. First-order predicate logic with equality is central to everyday mathematics. Propositional logic is equivalent to the language of Boolean expressions as found in conditional expressions in most programming languages. Temporal logics provide ways to reason about what statements remain true in evolving worlds. Dependent type theory is a logic, a richer form of predicate logic, in which propositions are formalized as types and proofs are essentially programs and data structures written in pure, typed, functional programming languages, and so can be type checked for correctness.

Logic is a pillar of computer science. It has been said that logic is to computer science as calculus is to natural science and engineering. As scientists and engineers use everyday mathematics to represent and reason about properties of physical things, so computer scientists use various logical languages to specify and reason about properties of programs, algorithms, the states of programs as they execute, problems to be solved by algorithms, and even about the real world in which software is meant to operate.

Propositional logic , essentially the language of Boolean expressions, is ubiquitous in programming. First-order predicate logic is widely used to reason about many issues that arise in computer science, from the complexity of algorithms to the correctness of programs. Hoare logic is a specialized extension of first-order predicate logic that is especially well suited to specifying how programs must behave and for showing that they do behave according to given logical specifications. Dependent type theory is the logic of modern proof assistant tools, including Lean (as well as Coq and Agda), which we will use in this class.

Dependent type theory and the tools that support it now play important roles in both the development of programming languages and in the production of trustworthy software. In lieu of testing of a given computer program to see if it works correctly on some inputs, one proves that it works correctly on all possible inputs. A tool then checks such proofs for correctness. Mathematicians are increasingly interested in the possibilities for automated checking of complex proofs as well.

At the heart of logic are the concepts of propositions and proofs. A proposition is an expression that we interpret as making a claim that some particular state of affairs holds in some particular domain of discourse (some world of interest). A proof is a compelling argument, in a precise form, that shows beyond any doubt that such a proposition is true. The existence of a valid proof of a proposition shows that it is true. In mathematical logic, the arbiter of truth is the existence of a proof. Proof implies truth; truth demands proof.

This first section of the course, on logic, provides a rigorous survey of forms of propositions and the construction and use of corresponding proofs in the language of predicate logic. You will learn how to write propositions in predicate logic, and how to both construct and use proofs of them in a logical reasoning system called natural deduction. As you will also quickly see, natural deduction is essentially a form of computation in a typed, pure functional programming language in which all programs terminate (there can be no infinite loops). To learn logic in this style is thus also to learn a very important style of programming: functional programming. You will learn to compute with propositions and proofs.

Propositions need not be true statements; they only need to be declarative. A proposition’s truth value may be true or false. However, every proposition must have exactly one truth value: it cannot be both true and false, and it must be able to be interpreted as true or false.

From the previous definition and examples, propositions are therefore not questions, general statements, demands, or hypotheses. Propositions do not have any variables, quantifiers, or parameters (e.g. the words “some” or “any” typically do not appear). Consider now a few non-examples.

Examples of statements that are not propositions

  • Do you have a dog?
  • Let’s go!
  • Some coffee mug with a mermaid on it.
  • x + 2 = 3
  • y = x² − 1

Checkpoint

Are each of these propositions?

  1. I am a dolphin.
  2. Supercalifragilisticexpialidocious.
  3. Jupiter is the 5th planet from the sun.
  4. On Thursdays, van Gogh painted landscapes.
  5. 11 + 56 ∗ 3 − 819 = 9


Constructing Propositions

An entire proposition is often denoted by a single propositional variable. Propositional variables are typically among p,q,r,s,t,….

Using propositional variables

p:= “The sky is blue”

q:= “The sun rises from the west”

We also denote truth values in particular ways. “True” may be denoted by T. “False” may be denoted by F. When a proposition (or proposition variable) is known to always be true, we can replace it by T. When a proposition (or proposition variable) is known to always be false, we can replace it by F.

Connectives

We can combine propositions (and propositional variables) into compound propositions or propositional formulas. This is akin to compound sentences and other logical connectives in natural language.

In propositional logic, we have 5 main connectives. Each connective has a corresponding meaning in natural language as we will soon see.

  • Negation: ¬
  • Conjunction: ∧
  • Disjunction: ∨
  • Implication: →
  • Biconditional: ↔

Logical connectives are like arithmetic operators (+,−,×,÷).

Negation

The negation of a proposition results in a proposition with the opposite truth value. It is akin to adding “not” into a sentence, or starting a sentence with “it is not the case that…”.

Given a proposition p, its negation is ¬p, which has the following truth values.

Table 1.1 Negation truth table
p | ¬p
F | T
T | F

Negation

  • Let p:= “the sky is blue”.

¬p is “the sky is not blue” or “it is not the case that the sky is blue”.

  • Let q:= “2+2=5”.

¬q is “2+2≠5”.

Notice in these examples that negation does not necessarily make a proposition false. Rather, it makes the proposition have the opposite truth value.

Conjunction

The conjunction of two propositions is the logical “and” of the two propositions. The conjunction of two proposition is only true if both the propositions are individually true, otherwise the conjunction is false.

Given propositions p and q, their conjunction is denoted p∧q and has the following truth values.

Table 1.2 Conjunction truth table
p | q | p∧q
F | F |  F
F | T |  F
T | F |  F
T | T |  T

Conjunction

Let p be “birds lay eggs” and q be “my eyes are blue”. p∧q is then “birds lay eggs and my eyes are blue”.

Disjunction

The disjunction of two propositions is the “or” of the two propositions. The disjunction is true if at least one of the propositions is individually true, otherwise the disjunction is false.

Given propositions p and q, their disjunction is denoted p∨q and has the following truth values.

Table 1.3 Disjunction truth table
p | q | p∨q
F | F |  F
F | T |  T
T | F |  T
T | T |  T

Disjunction

Let p be “it is raining” and q be “I am wearing sunglasses”. p∨q is then “it is raining or I am wearing sunglasses”.

Notice that in this previous example, it may be true that it is both raining and that I am wearing sunglasses. While that may be silly, p∨q is still true! In logic, we only require that at least one of the propositions in a disjunction is true. That means both are allowed to be true simultaneously.

Caution

In natural language, “or” is often interpreted as an exclusive or.

Language “or”

“You can have a cookie or a piece of cake.” Most people assume that this means you can have a cookie or a piece of cake, but not both.

In logic, “or” is not exclusive. You can have a cookie, a piece of cake, or both!

If you want the logical exclusive or, use the symbol ⊕. However, we will not use it in this course.


Checkpoint

What is the truth value of these compound propositions?

  1. “The earth is round and the sky is blue.”
  2. “Dogs or cats make great pets.”
  3. “It is 20° Celsius outside and it is snowing.”
  4. “Lemons are purple or grass is green”



Implication

Implication is one of the most challenging connectives to understand. Yet it is arguably the most important for creating logical arguments.

An implication is a conditional statement. For two propositions p and q, p→q is an implication which is read “if p, then q”. You can also say “p implies q”.

Implication

Let p be “it is raining” and q be “the ground is wet”. p→q can be read “if it is raining, then the ground is wet”.

In an implication p→q, the first proposition p is known as the hypothesis, antecedent, or premise. The second proposition q is known as the conclusion or consequence.

Because an implication is a conditional, the truth value of the implication as a whole changes depending on the truth value of the premise. The following truth table summarizes the truth values of an implication.

Table 1.4 Implication truth table
p | q | p→q
F | F |  T
F | T |  T
T | F |  F
T | T |  T

An implication can be viewed as an obligation, a contract, or a commitment. The implication p→q is false (the contract is broken; the obligation is unmet) only when p is true and q is false.

There are several important observations from this truth table about logical implication.

  • If q is true, then p→q is always true.
  • If p is true and the implication correct (the obligation is upheld), then q can never be false.
  • “Falsity can imply anything.” If the hypothesis is false, then the implication is always true, regardless of whether the conclusion is true.
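These observations can be checked mechanically. A sketch in Python, encoding p → q as (not p) or q, which matches the truth table above row for row:

```python
from itertools import product

def implies(p, q):
    # The truth-functional reading of p -> q: false only when p is True and q is False.
    return (not p) or q

for p, q in product([False, True], repeat=2):
    # The implication fails exactly when the "contract is broken": p true, q false.
    assert implies(p, q) == (not (p and not q))
    if q:
        assert implies(p, q)   # a true conclusion makes the implication true
    if not p:
        assert implies(p, q)   # a false hypothesis makes the implication true
```

Every assertion passes over all four truth assignments, which is exactly what the bulleted observations claim.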

Some of these observations may seem counter-intuitive at first. Let us clarify with some examples.

The truth value of implications

Let p be “that animal is a panda bear” and q be “that animal is black and white”. p→q can be read as “if that animal is a panda bear, then that animal is black and white”.

If p is true, and that animal is indeed a panda bear (and the implication is correct), then it is also black and white. If q is true, and the animal is black and white, it might be a panda bear, but it might also be a cow.

From p→q, we can say that knowing the animal is a panda bear is sufficient to know that the animal is black and white.

Valid implications can be formed from completely unrelated propositions. Moreover, if you begin with a nonsensical hypothesis, you can construct valid (but equally nonsensical) implications. Falsity implies anything.

Absurd but valid implications

  • “If pigs can fly, then I am the pope.”
  • “If 2+2=5, then lemons are purple.”
  • “If the sun is made of ice, then my father is Morgan Freeman”.

There are many equivalent ways to think about the implication p→q.

  • If p, then q
  • p implies q
  • q when p
  • q, if p
  • q whenever p
  • q follows from p
  • p is sufficient for q
  • q is necessary for p

Necessity and Sufficiency

An implication connects propositions by a necessary or sufficient condition. From p→q we get two relations:

  • p is sufficient for q
  • q is necessary for p

That is, “if sufficient condition, then necessary condition”.

Necessary and Sufficient

“If an animal is a chicken, then that animal is a bird.”

Knowing that an animal is a chicken is sufficient to conclude that it is a bird. And being a bird is necessary for being a chicken: an animal that is not a bird cannot possibly be a chicken.

Fig. 1.1 Being in the inner circle is sufficient for being in the outer circle. Being in the outer circle is necessary for being in the inner circle.

Biconditional

For two propositions p and q, they can be connected by a biconditional as p↔q.

A biconditional is a double implication. A biconditional is true if both propositions have the same truth value. p↔q can be read as “p if and only if q”. A biconditional has the following truth table.

Table 1.5 Biconditional truth table
p | q | p↔q
F | F |  T
F | T |  F
T | F |  F
T | T |  T

The biconditional p↔q can be expressed in many ways:

  • “p if and only if q”
  • “if p then q, and if q then p”
  • “p is necessary and sufficient for q”
  • “p iff q”

Biconditional

Let p be “2 is an even number”. Let q be “4 is an even number”. p↔q is a biconditional and its truth value is true, since both p and q are true.

Tip (thinking in memes)

“The Venn diagram is a circle” means exactly that the two subjects form a biconditional.

Checkpoint

What is the truth value of these compound propositions?

  1. “The Earth is flat” → “Pigeons are robots”
  2. “Bats have wings” → “Bats are birds”
  3. “A square is a rectangle” ↔ “A square has four 90° interior angles”
  4. “Spinach is green” ↔ “Penguins can fly”



Propositional Formulas

In the previous section we saw 5 different logical connectives: ¬, ∧, ∨, →, and ↔. Much like arithmetic formulas using addition, multiplication, division, etc., propositional formulas may use several connectives simultaneously.

Remember BEDMAS or PEMDAS from arithmetic? For logical connectives we have a similar order of precedence, with the mnemonic “PaNCo DIB” (“Panko Dib”).

  1. Parentheses: always perform operations on expressions inside parentheses first.
  2. Negation: apply negation to a proposition before any binary connective.
  3. Conjunction: perform conjunction before disjunction.
  4. Disjunction: perform disjunction after conjunction, but before implication.
  5. Implication: → after ∧ and ∨.
  6. Biconditional: ↔ after ∧, ∨, and →.

Logical order of precedence

  • p∨q→¬r   is the same as   (p∨q)→(¬r)
  • p∨¬q∧r   is the same as   p∨( (¬q) ∧r)
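As it happens, Python’s `not`, `and`, and `or` follow the same precedence order (negation, then conjunction, then disjunction), so the second example above can be checked directly. Python has no built-in `→`, so the first example cannot be written this way. A small sketch:

```python
from itertools import product

# For every truth assignment, p or not q and r groups as p or ((not q) and r),
# matching the precedence rules in the list above.
for p, q, r in product([False, True], repeat=3):
    assert (p or not q and r) == (p or ((not q) and r))
```

This is a handy classroom demonstration: students can test their precedence intuitions in the interpreter before committing them to a truth table.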

Propositional variables need not be associated with a particular proposition or truth value. A propositional variable could be just that: a variable. Replacing the variables in a propositional formula with a truth value is called a truth assignment.

Definition (truth assignment)

A truth assignment is the assignment of a truth value (true or false) to a propositional variable. Equally, it is the replacement of a propositional variable with a truth value.

A propositional formula will generally take on different truth values under different truth assignments to its constituent propositional variables. When at least one truth assignment exists so that a formula is true, that formula is said to be satisfiable.

Definition (satisfiable)

A propositional formula is satisfiable if its truth value can be true under some truth assignment. If every possible truth assignment makes the formula have false as its truth value, that formula is said to be unsatisfiable.
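Since a formula over n variables has only 2ⁿ truth assignments, satisfiability can be checked by brute force. A minimal sketch in Python, where a formula is represented as a function of booleans:

```python
from itertools import product

def satisfiable(formula, num_vars):
    # A formula is satisfiable if some assignment of truth values makes it true.
    return any(formula(*values) for values in product([False, True], repeat=num_vars))

# p and (not p) is unsatisfiable; p and q is satisfied by p = q = True.
assert not satisfiable(lambda p: p and not p, 1)
assert satisfiable(lambda p, q: p and q, 2)
```

This brute-force approach is exactly what a truth table does by hand, one row per assignment; it also foreshadows why satisfiability becomes expensive as the number of variables grows.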

In order to determine the truth value of a propositional formula, and to determine if it is satisfiable, we can create a truth table.


Truth Tables

Truth tables are tools for determining the truth values of propositional formulas.

  • The table is separated into two sets of columns:
    • The first set of columns represent each proposition (or propositional variable) in a formula.
    • The second set of columns represents the sub-formulas and formulas whose truth values are to be determined.
  • There must be one row in the table for every possible combination of truth values of the propositional variables. For example, in a formula with two variables, the possible combinations are: (T,T),(T,F),(F,T),(F,F).

3-variable truth table

Let p,q,r be propositional variables. A truth table for the formula (p∧q)∨r is:

p | q | r | p∧q | (p∧q)∨r
F | F | F |  F  |    F
F | F | T |  F  |    T
F | T | F |  F  |    F
F | T | T |  F  |    T
T | F | F |  F  |    F
T | F | T |  F  |    T
T | T | F |  T  |    T
T | T | T |  T  |    T

Notice that every possible combination of truth values for p, q, and r is contained in this table. Since at least one choice of truth value for p, q, and r results in the formula being true, then this formula is satisfiable.

In a truth table, you begin by filling out the columns corresponding to each propositional variable. These columns represent every possible combination of truth values on those variables. Then, you add columns for each sub-formula, one at a time, building up to the final formula.

Consider the formula p∧q∧r ∨ ¬q∧r → p. By order of precedence, this is equal to ((p∧q∧r) ∨ ((¬q)∧r)) → p. This contains several sub-formulas, which we can parse:

  • ¬q
  • ¬q∧r
  • p∧q
  • (p∧q)∧r
  • (p∧q∧r)∨(¬q∧r)
  • ( (p∧q∧r)∨(¬q∧r) )→p

To be as explicit as possible, we could create a truth table with 3 + 6 = 9 columns (3 variables, 6 sub-formulas). But this is excessive. For example, we could directly compute (¬q∧r) and (p∧q∧r). This gives the following truth table.

A large truth table

A truth table for the propositional formula p∧q∧r ∨ ¬q∧r→p.

p | q | r | p∧q∧r | ¬q∧r | (p∧q∧r)∨(¬q∧r) | ((p∧q∧r)∨(¬q∧r))→p
F | F | F |   F   |  F   |        F        |         T
F | F | T |   F   |  T   |        T        |         F
F | T | F |   F   |  F   |        F        |         T
F | T | T |   F   |  F   |        F        |         T
T | F | F |   F   |  F   |        F        |         T
T | F | T |   F   |  T   |        T        |         T
T | T | F |   F   |  F   |        F        |         T
T | T | T |   T   |  F   |        T        |         T
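The table can be double-checked by machine. Encoding → as (not lhs) or rhs, the only assignment that makes the whole formula false should be p = F, q = F, r = T, matching the single F row:

```python
from itertools import product

def formula(p, q, r):
    # ((p and q and r) or ((not q) and r)) -> p
    lhs = (p and q and r) or ((not q) and r)
    return (not lhs) or p

false_rows = [(p, q, r) for p, q, r in product([False, True], repeat=3)
              if not formula(p, q, r)]
print(false_rows)  # [(False, False, True)]
```

Since seven of the eight rows come out true, the formula is satisfiable (but not a tautology).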

Checkpoint

Construct a truth table

Give a truth table for the propositional formula p∧r→q∨¬r



Implication, Inverse, Converse, and Contrapositive

Now that we have seen propositional formulas and truth tables, let’s revisit implications. This connective has many related conditionals.

Consider the propositional formula p→q. Then, we have:

  • Converse: q→p
  • Inverse: ¬p→¬q
  • Contrapositive: ¬q→¬p

A conditional and its inverse

The proposition “if it is raining, then I wear a jacket” is a conditional statement. Its inverse is “if it is not raining I do not wear a jacket”.

Notice from this previous example that an implication and its inverse are not exactly the same. If the conditional “if it is raining, then I wear a jacket” is true, that is not the same as its inverse being true. Indeed, you might still wear a jacket even if it’s not raining. Maybe you’re just cold.

Important

An implication is not equivalent to its converse or inverse. However, it is equivalent to its contrapositive.
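This can be verified by exhausting the four truth assignments. A sketch in Python, again encoding a → b as (not a) or b:

```python
from itertools import product

def imp(a, b):
    return (not a) or b   # a -> b

rows = list(product([False, True], repeat=2))

# An implication always agrees with its contrapositive...
assert all(imp(p, q) == imp(not q, not p) for p, q in rows)
# ...but disagrees with its converse and inverse on at least one row.
assert any(imp(p, q) != imp(q, p) for p, q in rows)
assert any(imp(p, q) != imp(not p, not q) for p, q in rows)
```

The disagreeing row is p = T, q = F: the implication is false there, while both the converse and the inverse are true.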


Logical laws are similar to algebraic laws. For example, there is a logical law corresponding to the associative law of addition, a+(b+c)=(a+b)+c. In fact, associativity of both conjunction and disjunction are among the laws of logic. Notice that with one exception, the laws are paired in such a way that exchanging the symbols ∧, ∨, 1, and 0 for ∨, ∧, 0, and 1, respectively, in any law gives you a second law. For example, p∨0⇔p results in p∧1⇔p. This is called a duality principle. For now, think of it as a way of remembering two laws for the price of one. We will leave it to the reader to verify a few of these laws with truth tables. However, the reader should be careful in applying duality to the conditional operator and implication since the dual involves taking the converse. For example, the dual of p∧q⇒p is p∨q⇐p, which is usually written p⇒p∨q.

Example: Verification of an Identity Law

The Identity Law can be verified with this truth table. The fact that (p∧1)↔p is a tautology serves as a valid proof.

Table 3.4.1: Truth table to demonstrate the identity law for conjunction.

p | 1 | p∧1 | (p∧1)↔p
0 | 1 |  0  |    1
1 | 1 |  1  |    1

Some of the logical laws in Table 3.4.3 might be less obvious to you. For any that you are not comfortable with, substitute actual propositions for the logical variables. For example, if p is “John owns a pet store” and q is “John likes pets,” the detachment law should make sense.

Table 3.4.2: Basic Logical Laws – Equivalences

Commutative Laws
p∨q ⇔ q∨p   p∧q ⇔ q∧p
Associative Laws
(p∨q)∨r ⇔ p∨(q∨r)   (p∧q)∧r ⇔ p∧(q∧r)
Distributive Laws
p∧(q∨r) ⇔ (p∧q)∨(p∧r)   p∨(q∧r) ⇔ (p∨q)∧(p∨r)
Identity Laws
p∨0 ⇔ p   p∧1 ⇔ p
Negation Laws
p∧¬p ⇔ 0   p∨¬p ⇔ 1
Idempotent Laws
p∨p ⇔ p   p∧p ⇔ p
Null Laws
p∧0 ⇔ 0   p∨1 ⇔ 1
Absorption Laws
p∧(p∨q) ⇔ p   p∨(p∧q) ⇔ p
DeMorgan’s Laws
¬(p∨q) ⇔ (¬p)∧(¬q)   ¬(p∧q) ⇔ (¬p)∨(¬q)
Involution Laws
¬(¬p) ⇔ p

Table 3.4.3: Basic Logical Laws – Common Implications and Equivalences

Detachment (AKA Modus Ponens): (p→q)∧p ⇒ q
Indirect Reasoning (AKA Modus Tollens): (p→q)∧¬q ⇒ ¬p
Disjunctive Addition: p ⇒ (p∨q)
Conjunctive Simplification: (p∧q) ⇒ p and (p∧q) ⇒ q
Disjunctive Simplification: (p∨q)∧¬p ⇒ q and (p∨q)∧¬q ⇒ p
Chain Rule: (p→q)∧(q→r) ⇒ (p→r)
Conditional Equivalence: p→q ⇔ ¬p∨q
Biconditional Equivalences: (p↔q) ⇔ (p→q)∧(q→p) ⇔ (p∧q)∨(¬p∧¬q)
Contrapositive: (p→q) ⇔ (¬q→¬p)
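A few of these laws can be spot-checked over all truth assignments. A sketch in Python, with 1 and 0 read as True and False:

```python
from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (p or q)) == ((not p) and (not q))   # DeMorgan's first law
    assert (not (p and q)) == ((not p) or (not q))   # DeMorgan's second law
    assert (p and (p or q)) == p                     # absorption
    # conditional equivalence: p -> q is the same as (not p) or q
    assert ((not p) or q) == (q if p else True)
```

Because every law in the table is an equivalence between Boolean functions of at most three variables, this exhaustive-check strategy works for all of them (at most 8 rows each).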

Lambda

Function terms, called lambda abstractions, are literal expressions that represent mathematical functions. Yes, you can and should now think of functions as being values, too. Function definitions are terms in predicate logic and in functional programming languages. As we will see later on, we can pass functions as arguments to other functions, and receive them as results. Functions that take functions as arguments or that return functions as results are called higher-order functions. We will get to this topic later on.

Consider the simple lambda expression λ n : ℕ, n + 1. It’s a term that represents a function that takes one argument, n, of type ℕ (natural number). When applied to an actual parameter, or argument, it returns the value of that argument plus one.

Here’s the example in Lean. It first shows that the literal function expression reduces to the value that it represents directly. It then shows that this function can be applied to an actual parameter, i.e., an argument, 5, reducing to the value 6. Third, it shows that a literal function term can be bound to an identifier, here f, and finally that this makes it easier to write code to apply the function to an argument.

#reduce (λ n : ℕ, n + 1)

#eval (λ n : ℕ, n + 1) 5

def f := (λ n : ℕ, n + 1)

#eval f 5

Lambda calculus terms can be viewed as a kind of binary tree. A lambda calculus term consists of:

  • Variables, which we can think of as leaf nodes holding strings.
  • Applications, which we can think of as internal nodes.
  • Lambda abstractions, which we can think of as a special kind of internal node whose left child must be a variable. (Or as an internal node labeled with a variable, with exactly one child.)
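The three node kinds above can be sketched as Python classes. The class names and the free-variable helper are assumptions for illustration, not a standard API:

```python
class Var:
    """Leaf node holding a variable name."""
    def __init__(self, name):
        self.name = name

class App:
    """Internal node: a function term applied to an argument term."""
    def __init__(self, fn, arg):
        self.fn, self.arg = fn, arg

class Lam:
    """Abstraction node: binds a variable name over a body term."""
    def __init__(self, var, body):
        self.var, self.body = var, body

def free_vars(term):
    # Free variables of a term, computed by a recursive walk over the tree.
    if isinstance(term, Var):
        return {term.name}
    if isinstance(term, App):
        return free_vars(term.fn) | free_vars(term.arg)
    return free_vars(term.body) - {term.var}

# Roughly "λ n, add n one": n is bound by the abstraction; add and one stay free.
term = Lam("n", App(App(Var("add"), Var("n")), Var("one")))
print(sorted(free_vars(term)))  # ['add', 'one']
```

Treating terms as trees like this is exactly what makes operations such as free-variable analysis and substitution natural recursive tree walks.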

It’s slightly annoying to have to define a helper procedure add-three just so we can use it as the argument to every. We’re never going to use that procedure again, but we still have to come up with a name for it. We’d like a general way to say “here’s the function I want you to use” without having to give the procedure a name. In other words, we want a general-purpose procedure-generating procedure!

Lambda is the name of a special form that generates procedures. It takes some information about the function you want to create as arguments and it returns the procedure. It’ll be easier to explain the details after you see an example.

(define (add-three-to-each sent)

  (every (lambda (number) (+ number 3)) sent))

> (add-three-to-each '(1 9 9 2))

(4 12 12 5)

The first argument to every is, in effect, the same procedure as the one we called add-three earlier, but now we can use it without giving it a name. (Don’t make the mistake of thinking that lambda is the argument to every. The argument is the procedure returned by lambda.)

Perhaps you’re wondering whether “lambda” spells something backward. Actually, it’s the name of the Greek letter L, which looks like this: λ. It would probably be more sensible if lambda were named something like make-procedure, but the name lambda is traditional.[1]

Creating a procedure by using lambda is very much like creating one with define, as we’ve done up to this point, except that we don’t specify a name. When we create a procedure with define, we have to indicate the procedure’s name, the names of its arguments (i.e., the formal parameters), and the expression that it computes (its body). With lambda we still provide the last two of these three components.

As we said, lambda is a special form. This means, as you remember, that its arguments are not evaluated when you invoke it. The first argument is a sentence containing the formal parameters; the second argument is the body. What lambda returns is an unnamed procedure. You can invoke that procedure:

> ((lambda (a b) (+ (* 2 a) b)) 5 6)
16

> ((lambda (wd) (word (last wd) (first wd))) 'impish)
hi
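The same idiom exists outside Scheme. As a point of comparison, here is a Python sketch of add-three-to-each, with Python's lambda playing the role of Scheme's and map standing in for every:

```python
# An anonymous function passed directly to a higher-order function,
# mirroring (every (lambda (number) (+ number 3)) sent).
def add_three_to_each(sent):
    return list(map(lambda number: number + 3, sent))

print(add_three_to_each([1, 9, 9, 2]))  # → [4, 12, 12, 5]
```

As in Scheme, the argument to map is not the word lambda itself but the unnamed procedure the lambda expression produces.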


Searching algorithms are used to find a specified element within a data structure. For example, a searching algorithm could be used to find the name “Alan Turing” in an array of names. Numerous searching algorithms exist, each suited to a particular data structure or format of data, and which one to use depends on the individual scenario.

Binary Search

The binary search algorithm can only be applied to sorted data. It works by finding the middle element in a list of data, deciding which half of the data the desired element must lie in, discarding the other half, and repeating the process until the desired element is found or until it is known that the desired element doesn’t exist in the data.

Pseudocode for binary search is shown below, where A is the array of data and x is the desired element.

low = 0
high = A.length - 1
while low <= high:
    mid = (low + high) / 2   (rounded to a whole number)
    if A[mid] == x:
        return mid
    else if A[mid] > x:
        high = mid - 1
    else:
        low = mid + 1
    endif
endwhile
return “Not found in data”

With each iteration of binary search, half of the remaining data is discarded, making the algorithm very efficient. In fact, the time complexity of binary search is O(log n).
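The pseudocode translates almost line for line into a runnable sketch. In this Python version, integer division rounds the midpoint down, whereas the worked example below rounds to the nearest whole number; either convention finds the element:

```python
# Runnable sketch of the binary-search pseudocode above.
def binary_search(A, x):
    low, high = 0, len(A) - 1
    while low <= high:
        mid = (low + high) // 2          # middle position (rounded down)
        if A[mid] == x:
            return mid                   # found: return its index
        elif A[mid] > x:
            high = mid - 1               # discard the upper half
        else:
            low = mid + 1                # discard the lower half
    return "Not found in data"

names = ["Alice", "Bob", "Charlie", "Dylan", "Ellie",
         "Franz", "Gabbie", "Hugo", "Ingrid"]
print(binary_search(names, "Dylan"))   # → 3
print(binary_search(names, "Zoe"))     # → Not found in data
```

Note that the data must already be sorted, as it is here, for the halving argument to hold.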

www.pmt.education

Example 1

Find the location of “Dylan” in the data below.

0 Alice
1 Bob
2 Charlie
3 Dylan
4 Ellie
5 Franz
6 Gabbie
7 Hugo
8 Ingrid

To start with, our values for high and low are 8 and 0 respectively. The first step is to find the middle position. In this case, it’s (0 + 8) / 2 = 4. Next we inspect the data in position 4: Ellie. This is higher than the desired data, so we discard elements 4–8, setting high to 3.

Again, we calculate the value of the middle position. This time it’s (0 + 3) / 2 = 2 (we round to the nearest whole number). Inspecting the data at this position we find Charlie, which is lower than the desired data, so we discard elements 0–2, setting low to 3. Now low and high are both 3, so the middle position is 3. Inspecting position 3 we find Dylan, and the algorithm returns 3.

Example 2 (an element not in the data)

When the desired element is not in the data, the halving continues until low exceeds high. This breaks the condition of low being less than or equal to high, and so breaks the while loop in the pseudocode. The algorithm then returns “Not found in data” before terminating.

Example 3

Look at how the algorithm finds the letter R in the first 20 characters of the alphabet. It’s clear from this example that the algorithm halves the remaining data to be searched with each iteration, gradually reducing the size of the problem to be solved.

[Figure: the letters A–T shown over four iterations, with the half containing R retained each time.]
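The halving behavior can be quantified: each iteration cuts the remaining data in half, so n items need at most about log2(n) probes. A small Python sketch of this bound (the function name is my own):

```python
import math

# Upper bound on binary-search comparisons: each probe halves the data,
# so n sorted items need at most floor(log2(n)) + 1 probes.
def probes_needed(n):
    return math.floor(math.log2(n)) + 1

print(probes_needed(20))    # → 5   (the 20-letter alphabet example)
print(probes_needed(1000))  # → 10
```

This is why doubling the data adds only one more comparison, the essence of O(log n).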


Linear Search

Linear search is the most basic searching algorithm. You can think of it as going along a bookshelf one by one until you come across the book you’re looking for. Sometimes the algorithm gets lucky and finds the desired element almost immediately, while in other situations it is incredibly inefficient. Its time complexity is O(n). There’s a lot of pot luck involved with linear search, but it’s incredibly easy to implement. Unlike binary search, linear search doesn’t require the data to be sorted. Pseudocode is shown below, where A is the array of data and x is the desired element.

i = 0
while i < A.length:
    if A[i] == x:
        return i
    else:
        i = i + 1
    endif
endwhile
return “Not found in data”

Example 1

Find the position of Apple in the data.

0 Banana
1 Orange
2 Mango
3 Apple
4 Kiwi

First we inspect position 0 and find Banana. Not the element we’re after. Next we inspect position 1, finding Orange. Again, not what we’re looking for. Position 2 holds Mango, which is also not a match. Finally, inspecting position 3, we find Apple, and the algorithm returns 3.
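The pseudocode and walkthrough above can be run directly. A minimal Python sketch, using the same data as the worked example:

```python
# Runnable sketch of the linear-search pseudocode: inspect each element
# in turn until the desired one is found or the data runs out.
def linear_search(A, x):
    i = 0
    while i < len(A):
        if A[i] == x:
            return i          # found: return the index
        i += 1                # otherwise move to the next element
    return "Not found in data"

fruits = ["Banana", "Orange", "Mango", "Apple", "Kiwi"]
print(linear_search(fruits, "Apple"))  # → 3
```

Unlike binary_search, this works on unsorted data, but in the worst case it inspects every element.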

Example 2

Look at how this algorithm finds the letter R in the first 20 characters of the alphabet and compare it to binary search above. It’s clear from this example that the algorithm is much less efficient than binary search, particularly when the search value is towards the end of the data to be searched.

[Figure: the letters A–T repeated once per iteration, with one additional letter inspected each time, for 18 iterations until R is reached.]
