Unit II
There are two types of Propositions:
o Atomic Propositions: Atomic propositions are simple propositions, each
consisting of a single proposition symbol. These are sentences that must be
either true or false.
Examples:
a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also an atomic proposition, as it is a false fact.
o Compound propositions: Compound propositions are constructed by
combining simpler or atomic propositions, using parentheses and logical
connectives.
Examples:
a) "It is raining today, and street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical connectives are used to connect two simpler propositions or to represent a
sentence logically. We can create compound propositions with the help of logical
connectives. There are mainly five connectives, which are given as follows:
o Negation: A sentence such as ¬P is called the negation of P. A literal can be
either a positive literal or a negative literal.
o Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called a
conjunction.
Example: Rohan is intelligent and hardworking. It can be written as:
P = Rohan is intelligent,
Q = Rohan is hardworking → P ∧ Q.
o Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a
disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer."
Here P = Ritika is a doctor,
Q = Ritika is an engineer, so we can write it as P ∨ Q.
o Implication: A sentence such as P → Q is called an implication. Implications
are also known as if-then rules.
Example: If it is raining, then the street is wet. Let P = It is raining and
Q = The street is wet, so it is represented as P → Q.
o Biconditional: A sentence such as P ⇔ Q is a biconditional sentence.
Example: If I am breathing, then I am alive.
P = I am breathing,
Q = I am alive; it can be represented as P ⇔ Q.
Truth Table:
P   Q   ¬Q   P∧Q   P∨Q   P→Q   P⇔Q
T   T   F    T     T     T     T
T   F   T    F     T     F     F
F   T   F    F     T     T     F
F   F   T    F     F     T     T
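The table can also be generated mechanically. Below is a minimal Python sketch (not part of the original notes; the print format is just illustrative) that enumerates all truth assignments with itertools.product and evaluates each connective:

from itertools import product

# Print the truth table for the five connectives over propositions P and Q.
# T/F are printed for Python's True/False.
print("P  Q  ~Q  P^Q  PvQ  P->Q  P<->Q")
for P, Q in product([True, False], repeat=2):
    row = [P, Q,
           not Q,           # negation
           P and Q,         # conjunction
           P or Q,          # disjunction
           (not P) or Q,    # implication
           P == Q]          # biconditional
    print("  ".join("T" if v else "F" for v in row))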
A set of labeled formulae including the one above the line may be developed by
replacing that formula with the one or more formulae below the line. A vertical bar
separating two sets below the line indicates that there are two successor nodes, so the
branch of the tableau splits into two at that point.
A contradiction in a tableau branch causes that branch to die: it gets closed off and no
more growth happens from its tip. If every branch dies, the tree dies; in that case we
know there was no way the tested formulas could all have had the truth values they
were initially supposed to have.
If, on the other hand, we get to a state in which every subformula has been processed
in every branch in which it occurs and some branch is left alive or "open", we have
found an interpretation on which all of the original formulas are true. Conventionally
we mark a dead or "closed" branch by writing a cross under it.
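To make the expansion and closing rules concrete, here is a minimal propositional tableau sketch in Python (an illustration, not from the original text; the tuple encoding of formulas and the helper names are assumptions). A branch closes when it contains a literal and its negation; an 'or' formula splits the branch in two:

# Formulas: a string is an atom; ('not', F), ('and', F, G), ('or', F, G).

def negate(f):
    return f[1] if isinstance(f, tuple) and f[0] == 'not' else ('not', f)

def is_literal(f):
    return isinstance(f, str) or (f[0] == 'not' and isinstance(f[1], str))

def satisfiable(branch):
    """Expand one tableau branch; return True if it stays open."""
    branch = list(branch)
    literals = []
    while branch:
        f = branch.pop()
        if is_literal(f):
            if negate(f) in literals:      # contradiction: the branch closes
                return False
            literals.append(f)
        elif f[0] == 'and':                # alpha rule: add both conjuncts
            branch += [f[1], f[2]]
        elif f[0] == 'or':                 # beta rule: the branch splits in two
            return (satisfiable(branch + [f[1]] + literals) or
                    satisfiable(branch + [f[2]] + literals))
        elif f[0] == 'not':
            g = f[1]
            if g[0] == 'not':              # double negation
                branch.append(g[1])
            elif g[0] == 'and':            # not(F and G) becomes (not F) or (not G)
                branch.append(('or', negate(g[1]), negate(g[2])))
            elif g[0] == 'or':             # not(F or G) becomes (not F) and (not G)
                branch.append(('and', negate(g[1]), negate(g[2])))
    return True                            # every formula processed, branch open

# Example: P and not P closes every branch, so the set is unsatisfiable.
print(satisfiable([('and', 'P', ('not', 'P'))]))   # False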
A formula C is a logical consequence of a set of formulas S if and only if the set S’ = S ⋃ {¬C} is
unsatisfiable, that is, a contradiction (or an empty clause) can be deduced from the set S’,
assuming that the set S itself is satisfiable.
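As a small illustration of this refutation test (the encoding of S and C as Python functions is an assumption, not from the notes), the check can be done by brute force over all truth assignments:

from itertools import product

# C is a logical consequence of S iff S together with not-C has no model.
S = [lambda m: m["P"],                      # P
     lambda m: (not m["P"]) or m["Q"]]      # P implies Q
C = lambda m: m["Q"]                        # candidate consequence Q

atoms = ["P", "Q"]
unsatisfiable = True
for values in product([True, False], repeat=len(atoms)):
    m = dict(zip(atoms, values))
    if all(f(m) for f in S) and not C(m):   # found a model of S plus not-C
        unsatisfiable = False
print("C is a logical consequence of S:", unsatisfiable)   # True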
Representing Simple Facts in Logic
Propositional logic is appealing because it is simple to deal with and a decision
procedure for it exists.
We can easily represent real-world facts as logical propositions written as well
formed formulas (wff’s) in Propositional logic.
Some Simple Facts in Propositional logic:
o It is raining
RAINING
o It is sunny
SUNNY
o It is windy
WINDY
o If it is raining, then it is not sunny.
RAINING → ¬SUNNY
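As a small aside (a hypothetical encoding, not from the text), such facts can be stored as named propositions and the implication applied as a simple if-then rule:

# Facts known to be true, and one rule RAINING implies not-SUNNY.
facts = {"RAINING"}
rules = [("RAINING", "¬SUNNY")]             # (antecedent, consequent)

for antecedent, consequent in rules:
    if antecedent in facts:                 # modus ponens
        facts.add(consequent)
print(facts)                                # contains RAINING and ¬SUNNY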
Suppose we want to represent the fact stated by the sentence: Socrates is a man
We could write SOCRATESMAN, but a better representation is:
MAN(SOCRATES)
Now the structure of the representation reflects the structure of the knowledge itself.
But to do that, we need to be able to use “predicates” applied to arguments.
To represent the following sentence: All men are mortal
We could represent this as: MORTALMAN
But that fails to capture the relationship between any individual being a man and that
individual being mortal. To do that, we need variables and quantification.
So we use first-order predicate logic as a way of representing knowledge because it
permits representations of things that cannot be represented in propositional logic.
In predicate logic, we can represent real-world facts as statements written as wff's.
The major motivation for choosing to use logic is that if we use logical statements as a
way of representing knowledge, then we have a good way of reasoning with that
knowledge.
Determining the validity of a proposition in propositional logic is straightforward,
although it may be computationally hard.
Before we use predicate logic as a medium for representing knowledge, we need to
check whether it also provides a good way of reasoning with the knowledge.
It provides a way of deducing new statements from old ones.
Unfortunately, unlike propositional logic, it does not possess a decision procedure.
There do exist procedures that will find a proof of a proposed theorem if it really is a
theorem, but they are not guaranteed to halt if it is not: first-order predicate logic is
not decidable, only semi-decidable.
A simple such procedure is to use the rules of inference to generate theorems from the
axioms in some orderly fashion.
This method is not particularly efficient, however, and we will want to try to find a
better one.
Use of predicate logic
Let’s now explore the use of predicate logic as a way of representing knowledge by looking
at the following example. Consider the following set of sentences:
1. Marcus was a man.
2. Marcus was a Pompeian.
3. All Pompeians were Romans.
4. Caesar was a ruler.
5. All Romans were either loyal to Caesar or hated him.
6. Everyone is loyal to someone.
7. People only try to assassinate rulers they are not loyal to.
8. Marcus tried to assassinate Caesar.
The facts described by these sentences can be represented as a set of wff’s in
predicate logic as follows:
1. Marcus was a man.
man (Marcus)
This representation captures the critical fact of Marcus being a man. It fails, however, to capture
some of the information in the English sentence, namely the notion of past tense.
2. Marcus was a Pompeian.
Pompeian (Marcus)
3. All Pompeians were Romans.
∀(x): Pompeian(x) → Roman(x)
4. Caesar was a ruler.
ruler (Caesar)
5. All Romans were either loyal to Caesar or hated him.
∀(x): Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
6. Everyone is loyal to someone.
∀(x) : ∃(y): loyalto(x, y)
7. People only try to assassinate rulers they are not loyal to
∀(x): ∀(y): person(x) ∧ ruler(y) ∧ tryassassinate(x, y) → ¬loyalto(x, y)
8. Marcus tried to assassinate Caesar
tryassassinate(Marcus, Caesar)
Now suppose that we want to use these statements to answer the question:
Was Marcus loyal to Caesar?
Let's try to produce a formal proof, reasoning backward from the desired goal:
¬loyalto (Marcus, Caesar)
To prove this, we also need the fact:
All men are people
Although we know that Marcus was a man, we do not otherwise have any way to conclude that
Marcus was a person. So we need to add the representation of this fact to our system:
∀(x): man(x) → person(x)
Second representation: The predicate instance is a binary one, whose first argument
is an object and whose second argument is a class to which the object belongs. But
these representations do not use an explicit isa predicate.
In subclass relationships, such as that between Pompeians and Romans, the
implication rule there states that if an object is an instance of the subclass Pompeian
then it is an instance of the superclass Roman.
This rule is equivalent to the standard set-theoretic definition of the subclass-
superclass relationship.
Third representation: It contains representations that use both the instance and isa
predicates explicitly.
The use of the isa predicate simplifies the representation of sentence 3, but it
requires one additional axiom. This additional axiom describes how an instance
relation and an isa relation can be combined to derive a new instance relation.
7. It is now 1991.
8. Alive means not dead.
9. If someone dies, then he is dead at all later times.
Now let's attempt to answer the question:
"Is Marcus alive?"
By proving:
¬ alive(Marcus, now)
The given facts are represented as follows:
Resolution:
Resolution is such a procedure; it gains its efficiency from the fact that it operates
on statements that have been converted to a very convenient standard form (clause form).
Resolution produces proofs by refutation. To prove a statement (to show that it is
valid), resolution attempts to show that the negation of the statement produces a
contradiction with the known statements (i.e., that it is unsatisfiable).
Resolution is a technique for proving theorems in the propositional or predicate
calculus that has been a part of AI problem-solving.
Resolution describes a way of finding contradictions in a database of clauses.
o [¬Roman(x) ∨ ¬know(x, Marcus)] ∨ [hate(x, Caesar) ∨ (¬hate(y, z) ∨ thinkcrazy(x, y))]
7. Convert the matrix into a conjunction of disjuncts. Since there are no and's here, it is only
necessary to exploit the associative property of ∨ (i.e., a ∨ (b ∨ c) = (a ∨ b) ∨ c) and simply
remove the parentheses:
o ¬Roman(x) ∨ ¬know(x, Marcus) ∨ hate(x, Caesar) ∨ ¬hate(y, z) ∨ thinkcrazy(x, y)
8. Create a separate clause corresponding to each conjunct. In order for a wff to be true,
all the clauses that are generated from it must be true. If we are going to be working
with several wff's, all the clauses generated by each of them can now be combined to
represent the same set of facts as were represented by the original wff's.
9. Standardize apart the variables in the set of clauses, i.e., rename the variables so that no
two clauses refer to the same variable (a small sketch follows below). This relies on the fact that
o (∀x: P(x) ∧ Q(x)) = (∀x: P(x)) ∧ (∀x: Q(x))
Since each clause is a separate conjunct and since all the variables are universally
quantified, there is no relationship between the variables of two clauses, even if they
were generated from the same wff.
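A minimal sketch of step 9 under assumed conventions (clauses as lists of literals, lowercase strings as variables, capitalized strings as constants; the helper name is hypothetical): every clause receives its own fresh variable names, so no variable is shared between clauses.

# A literal is (sign, predicate, args).
def standardize_apart(clauses):
    renamed, counter = [], 0
    for clause in clauses:
        mapping, new_clause = {}, []
        for sign, pred, args in clause:
            new_args = []
            for a in args:
                if a[0].islower():                        # a variable
                    if a not in mapping:
                        counter += 1
                        mapping[a] = "x" + str(counter)   # fresh name for this clause
                    new_args.append(mapping[a])
                else:
                    new_args.append(a)                    # constants are left alone
            new_clause.append((sign, pred, tuple(new_args)))
        renamed.append(new_clause)
    return renamed

# Two clauses that both mention x end up with distinct variables x1 and x2.
print(standardize_apart([[("+", "P", ("x",))], [("+", "Q", ("x",))]]))
# [[('+', 'P', ('x1',))], [('+', 'Q', ('x2',))]]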
The Basis of Resolution
The resolution procedure is a simple iterative process: at each step two clauses,
called the parent clauses, are compared (resolved), yielding a new clause that has
been inferred from them. The new clause represents ways that the two parent clauses
interact with each other. Suppose there are two clauses in the system:
winter ∨ summer
¬winter ∨ cold
Now we observe that precisely one of winter and ¬winter will be true at any point.
If winter is true, then cold must be true to guarantee the truth of the second clause.
If ¬winter is true, then summer must be true to guarantee the truth of the first clause.
Thus we see that from these two clauses we can deduce:
summer ∨ cold
Resolution operates by taking two clauses that each contains the same literal, in this
example, winter. The literal must occur in positive form in one clause and in negative
form in the other. The resolvent is obtained by combining all of the literals of the two
parent clauses except the ones that cancel.
If the clause that is produced is the empty clause, then a contradiction has been found.
For example, the two clauses:
winter
¬winter
will produce the empty clause.
In predicate logic, the situation is more complicated since we must consider all
possible ways of substituting values for the variables.
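The winter/cold step can be reproduced directly in code. In the sketch below (the clause representation is an assumption, not from the text), a clause is a frozenset of literals and a leading "~" marks a negated literal:

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return the resolvents of two clauses (possibly the empty clause)."""
    resolvents = []
    for lit in c1:
        if complement(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {complement(lit)}))
    return resolvents

c1 = frozenset({"winter", "summer"})       # winter or summer
c2 = frozenset({"~winter", "cold"})        # not-winter or cold
print(resolve(c1, c2))                     # one resolvent: summer or cold
print(resolve(frozenset({"winter"}), frozenset({"~winter"})))   # [frozenset()]: the empty clause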
The Unification Algorithm
In predicate logic, this matching process is more complicated, since the arguments of
the literals must also be considered. For example, suppose we want to unify P(x, x)
with P(y, z). The two instances of P match, and comparing the first arguments x and y
gives the substitution y/x.
But now, if we simply continue and match x and z, we produce the substitution z/x.
But we cannot substitute both y and z for x. The problem can be solved as follows:
what we need to do after finding the first substitution y/x is to make that substitution
throughout the literals, giving P(y, y) and P(y, z).
Now we can attempt to unify arguments y and z, which succeeds with the substitution
z/y. The entire unification process has now succeeded with a substitution that is the
composition of the two substitutions: (z/y)(y/x).
In general, a composition of substitutions (a1/a2, a3/a4, ...)(b1/b2, b3/b4, ...) means: apply all the
substitutions of the rightmost list first, then take the result and apply all the ones of the
next list, and so forth, until all substitutions have been applied.
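A short sketch of how such a composition is applied (representing a substitution as a Python dict is an assumption): in (z/y)(y/x), the rightmost substitution y/x is applied first and z/y second.

# A substitution maps each variable to its replacement; z/y becomes {"y": "z"}.
def apply_subst(subst, term):
    if isinstance(term, str):
        return subst.get(term, term)
    return tuple(apply_subst(subst, t) for t in term)

s1 = {"x": "y"}                      # y/x: substitute y for x
s2 = {"y": "z"}                      # z/y: substitute z for y

literal = ("P", "x", "y")
step1 = apply_subst(s1, literal)     # rightmost substitution first: ('P', 'y', 'y')
step2 = apply_subst(s2, step1)       # then the next one: ('P', 'z', 'z')
print(step1, step2)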
The objective of the unification procedure is to discover at least one substitution that
causes two literals to match.
o For example, the literals:
hate (x, y)
hate (Marcus, z)
could be unified with any of the following substitutions:
(Marcus/x, z/y) (Marcus/x, y/z)
(Marcus/x, Caesar/y, Caesar/z) (Marcus/x, Polonius/y, Polonius/z)
We describe a procedure Unify (L1, L2), which returns as its value a list representing
the composition of the substitutions that were performed during the match. The empty
list, NIL, indicates that a match was found without any substitutions. The list
consisting of the single value FAIL indicates that the unification procedure failed.
Algorithm: Unify (L1, L2)
1. If L1 and L2 are both variables or constants (i.e., both are atomic), then:
a. If L1 and L2 are identical, then return NIL.
b. Else if L1 is a variable, then if L1 occurs in L2 then return {FAIL}, else return
(L2/L1).
c. Else if L2 is a variable, then if L2 occurs in L1 then return {FAIL}, else return
(L1/L2).
d. Else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the
substitutions used to unify L1 and L2).
5. For i = 1 to the number of arguments in L1:
a. Call Unify with the ith argument of L1 and the ith argument of L2, putting the
result in S.
b. If S contains FAIL, then return {FAIL}.
c. If S not equal to NIL then:
i. Apply S to the remainder of both L1 and L2.
ii. SUBST= APPEND(S, SUBST).
6. Return SUBST.
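The procedure can be turned into running code. The following sketch follows the steps above under assumed conventions: a literal is a tuple whose first element is the predicate symbol, variables are lowercase strings, constants are capitalized, NIL is the empty list, and failure is the list ['FAIL'].

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def occurs(var, term):
    return var == term if isinstance(term, str) else any(occurs(var, t) for t in term)

def apply_subst(subst, term):
    """Apply a list of bindings (value, var) to a term."""
    if isinstance(term, str):
        for value, var in subst:
            if term == var:
                return value
        return term
    return tuple(apply_subst(subst, t) for t in term)

def unify(l1, l2):
    # Step 1: both arguments are atoms (variables or constants).
    if isinstance(l1, str) and isinstance(l2, str):
        if l1 == l2:
            return []                               # NIL: no substitution needed
        if is_variable(l1):
            return ['FAIL'] if occurs(l1, l2) else [(l2, l1)]   # L2/L1
        if is_variable(l2):
            return ['FAIL'] if occurs(l2, l1) else [(l1, l2)]   # L1/L2
        return ['FAIL']                             # two different constants
    # Steps 2-3: same predicate symbol and same number of arguments.
    if l1[0] != l2[0] or len(l1) != len(l2):
        return ['FAIL']
    # Steps 4-6: unify the arguments left to right, composing the substitutions.
    subst = []
    for a1, a2 in zip(l1[1:], l2[1:]):
        s = unify(apply_subst(subst, a1), apply_subst(subst, a2))
        if s == ['FAIL']:
            return ['FAIL']
        subst = subst + s
    return subst

print(unify(('hate', 'x', 'y'), ('hate', 'Marcus', 'z')))
# [('Marcus', 'x'), ('z', 'y')]

On the hate example above this returns the bindings (Marcus/x, z/y), the first of the substitutions listed earlier.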
Resolution in Predicate Logic
With Unification, we now have an easy way of determining that two literals are
contradictory, if one of them can be unified with the negation of the other.
So, for example, man(x) and ¬man(Spot) are contradictory, since man(x) and
man(Spot) can be unified with the substitution Spot/x.
In order to use resolution for expressions in the predicate logic, we use the unification
algorithm to locate pairs of literals that cancel out.
For example, suppose we want to resolve two clauses:
man(Marcus)
¬man(x1) ∨ mortal(x1)
The literal man(Marcus) can be unified with the literal ¬man(x1) with the substitution
Marcus/x1. We can now conclude only that mortal(Marcus) must be true.
So the resolvent generated by clauses 1 and 2 must be mortal(Marcus), which we
get by applying the result of the unification process to the resolvent. The
resolution process can then proceed from there to discover whether mortal(Marcus)
leads to a contradiction with other available clauses.
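That single resolution step can be sketched as follows (the clause encoding and the tiny argument unifier are illustrative assumptions; a full system would use the complete Unify procedure above):

# Literals: (sign, predicate, args); variables are lowercase (an assumed convention).
def unify_args(args1, args2):
    """Very small unifier: match constants, bind variables (sketch only)."""
    subst = {}
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if a[0].islower():
            subst[a] = b
        elif b[0].islower():
            subst[b] = a
        else:
            return None                           # two different constants
    return subst

def substitute(subst, literal):
    sign, pred, args = literal
    return (sign, pred, tuple(subst.get(a, a) for a in args))

c1 = [("+", "man", ("Marcus",))]
c2 = [("-", "man", ("x1",)), ("+", "mortal", ("x1",))]

# man(Marcus) and not-man(x1) are complementary once x1 is bound to Marcus.
theta = unify_args(("Marcus",), ("x1",))
resolvent = [substitute(theta, lit) for lit in c1[1:] + c2[1:]]
print(theta, resolvent)     # {'x1': 'Marcus'} [('+', 'mortal', ('Marcus',))]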
Algorithm: Resolution in predicate logic
1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in
step 1.
3. Repeat until a contradiction is found, no progress can be made, or a predetermined
amount of effort has been expended.
a. Select two clauses. Call these the parent clauses.
b. Resolve them together. The resolvent will be the disjunction of all the literals
of both parent clauses with appropriate substitutions performed and with the
following exception: If there is one pair of literals T1 and ¬T2 such that one
of the parent clauses contains T1 and the other contains ¬T2 and if T1 and T2
are unifiable, then neither T1 nor T2 should appear in the resolvent. We call
T1 and T2 Complementary Literals. Use the substitution produced by the
unification to create the resolvent. If there is more than one pair of
complementary literals, only one pair should be omitted from the resolvent.
c. If the resolvent is the empty clause, then a contradiction has been found. If it is
not, then add it to the set of clauses available to the procedure.
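For propositional clauses, the loop of step 3 can be sketched as a simple saturation search for the empty clause (an illustration under assumed names; the predicate-logic version would additionally unify complementary literals as described above):

# Clauses are frozensets of literals; a leading "~" marks a negated literal.
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    return [(c1 - {l}) | (c2 - {complement(l)})
            for l in c1 if complement(l) in c2]

def refute(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        pairs = [(a, b) for a in clauses for b in clauses if a != b]
        for a, b in pairs:
            for r in resolve(a, b):
                if not r:                 # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:                # no progress can be made
            return False
        clauses |= new

# Prove mortal from man and (not-man or mortal) by refutation:
# the negated goal ~mortal is added to the clause set.
kb = [frozenset({"man"}), frozenset({"~man", "mortal"}), frozenset({"~mortal"})]
print(refute(kb))                         # True: contradiction found

Here the negated goal ~mortal is added to the clause set and the empty clause is eventually derived, so mortal follows from the other two clauses.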
There are systematic strategies for making the choice of clauses to resolve together at
each step so that we will find a contradiction if one exists. This can speed up the
process considerably.
o Only resolve pairs of clauses that contain complementary literals, since only
such resolutions produce new clauses that are harder to satisfy than their
parents. To facilitate this, index clauses by the predicates they contain,
combined with an indication of whether the predicate is negated. Then, given a
particular clause, possible resolvents that contain a complementary occurrence
of one of its predicates can be located directly.
o Eliminate certain clauses as soon as they are generated so that they cannot
participate in later resolutions. Two kinds of clauses should be eliminated:
Tautologies (which can never be unsatisfied), and
Clauses that are subsumed by other clauses. For example, P ∨ Q is
subsumed by P.
o Whenever possible, resolve either with one of the clauses that is part of the
statement we are trying to refute or with a clause generated by a resolution
with such a clause. This is called the “set-of-support strategy”.
o Whenever possible, resolve with clauses that have a single literal. Such
resolutions generate new clauses with fewer literals than the larger of their
parent clauses and thus are probably closer to the goal of a resolvent with zero
terms. This method is called the “unit-preference-strategy”.
Example 1:
Resolution Proof: