
Artificial Intelligence

UNIT V
Reasoning

Prepared By
Mrs. Jayshri Dhere
Assistant Professor
Artificial Intelligence and Data Science
DYPIEMR, Pune
Syllabus
⚫ Inference in First-Order Logic,
◦ Propositional vs. First-Order Inference,
◦ Unification and First-Order Inference,
◦ Forward Chaining,
◦ Backward Chaining,
◦ Resolution,
⚫ Knowledge Representation,
◦ Ontological Engineering,
◦ Categories and Objects,
◦ Events,
◦ Mental Objects and Modal Logic,
◦ Reasoning Systems for Categories,
◦ Reasoning with Default Information
Propositional vs. First-Order
Inference
⚫ Propositional logic (PL) deals with declarative
statements (propositions), each of which is either
true or false. It is a technique for representing
knowledge in logical and mathematical form.
Propositions are of two types: atomic and
compound.
Facts about Propositional Logic
⚫ Since propositional logic works on 0 and 1 thus it is also
known as ‘Boolean Logic’.
⚫ Proposition logic can be either true or false it can never
be both.
⚫ In this type of logic, symbolic variables (e.g. P, Q, R)
are used to represent propositions, and any
proposition can be assigned to a variable.
⚫ It is built from atomic propositions and logical
connectives; unlike FOL, it has no objects, relations, or
functions.
⚫ Proposition formula which is always false is called
‘Contradiction’ whereas a proposition formula which is
always true is called ‘Tautology’.
First-Order Logic (FOL)
⚫ First-Order Logic is another knowledge
representation language in AI, an
extension of PL. FOL can express natural
language statements concisely. Another
name for First-Order Logic is ‘Predicate
Logic’.
Facts about First Order Logic
⚫ FOL is a powerful language used to
describe information about objects in a
natural, concise way.
⚫ Unlike PL, FOL can state facts about
objects, relations, and functions.
⚫ FOL has two main parts:
‘Syntax’ & ‘Semantics’.
Syntax
⚫ The lexicon of a first order language contains the following:
⚫ Connectives and Parentheses: ¬, →, ↔, ∧, ∨, ( and );
⚫ Quantifiers: ∀ (universal) and ∃ (existential);
⚫ Variables: x,y,z,…. ranging over particulars (individual objects);
⚫ Constants: a,b,c,….. representing a specific element;
⚫ Functions: f,g,h,…..., with arguments listed as f(x1,...xn)
⚫ Relations: R,S,…... with an associated arity.
⚫ There is a (countably infinite) supply of symbols (the signature):
variables, functions, constants, and relations.
⚫ In other words: we can use these symbols to build ‘sentences’, much
as in natural language, but in a controlled way and with a few extra
devices.
⚫ Let us look first at how we can formalize a natural language
sentence into first order logic.
Semantics

⚫ As for all logics, the first step in defining the semantics
is to define the models of first-order logic.
⚫ Recall that one of the benefits of using first-order logic is that
it allows us to explicitly talk about objects and relations
among them.
⚫ Thus, our models will contain objects along with information
about the properties of objects and the relationships among
objects.
⚫ For example, suppose that we have one predicate TallerThan,
one function FatherOf, and one constant Bob. A model M1
for these symbols might be the following:
⚫ D = { BOB, JON, NULL }
⚫ I(Bob) = BOB
⚫ I(TallerThan) = { (BOB, JON)}

⚫ Recall that I(FatherOf) is a function, so to give the
interpretation of FatherOf we will just show the value for
each input:
⚫ I(FatherOf)(BOB) = JON
⚫ I(FatherOf)(JON) = NULL
⚫ I(FatherOf)(NULL) = NULL
Key differences between PL and FOL
⚫ Propositional Logic represents a complete sentence as a single
symbol, whereas First-Order Logic breaks a sentence into
relations, functions, and constants.
⚫ The limitation of PL is that it cannot represent individual
entities, whereas FOL can: any sentence about particular
objects can be expressed directly in FOL.
⚫ PL cannot express generalization, specialization, or patterns;
for example, ‘QUANTIFIERS’ cannot be used in PL, whereas FOL
provides quantifiers precisely to express generalization,
specialization, and patterns.
Inference in First-Order Logic
Inference in First-Order Logic
⚫ Inference in First-Order Logic is used to deduce new facts or
sentences from existing sentences. Before understanding the
FOL inference rule, let's understand some basic
terminologies used in FOL.
⚫ Substitution:
Substitution is a fundamental operation performed on
terms and formulas. It occurs in all inference systems in
first-order logic.
Substitution is more complex in the presence of quantifiers
in FOL. The notation F[a/x] means: substitute the constant
"a" for the variable "x" in F.
Equality:
⚫ First-Order logic does not only use predicate and terms for
making atomic sentences but also uses another way, which is
equality in FOL. For this, we can use equality
symbols which specify that the two terms refer to the same
object.
⚫ Example: Brother (John) = Smith.
⚫ In the above example, the object referred to by
Brother (John) is the same as the object referred to by Smith.
The equality symbol can also be used with negation to
state that two terms are not the same object.
⚫ Example: ¬(x=y), which is written x ≠ y.
Model checking
⚫ Given KB, does sentence S hold?
⚫ Basically generate and test:
⚫ Generate all the possible models
⚫ Consider the models M in which KB is TRUE
⚫ If ∀M S, then S is provably true
⚫ If ∀M ¬S, then S is provably false
⚫ Otherwise (∃M1 S ∧ ∃M2 ¬S): S is satisfiable but neither
provably true nor provably false
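The generate-and-test procedure above can be sketched in Python for the propositional case, where enumerating "all possible models" means enumerating all truth assignments. The encoding (KB and query as Python predicates over a model dict) is illustrative, not part of the slides:

```python
from itertools import product

def entails(kb, query, symbols):
    """Check KB |= query by enumerating all truth assignments.
    kb and query map a model (dict symbol -> bool) to True/False."""
    true_in_all = True   # is query true in every model of KB?
    false_in_all = True  # is query false in every model of KB?
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model):                    # only models where KB holds
            if query(model):
                false_in_all = False
            else:
                true_in_all = False
    if true_in_all:       # (a KB with no models vacuously entails everything)
        return "provably true"
    if false_in_all:
        return "provably false"
    return "satisfiable but not provable either way"

# KB: P and P -> Q.  Query Q is entailed; query R is independent of KB.
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
print(entails(kb, lambda m: m["Q"], ["P", "Q"]))       # provably true
print(entails(kb, lambda m: m["R"], ["P", "Q", "R"]))  # neither
```

For full first-order logic, model checking is far harder, since models range over arbitrary domains rather than a finite set of truth assignments.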


FOL inference rules for quantifier:
⚫ As in propositional logic, we also have inference rules in
first-order logic. Following are some basic inference rules
in FOL:

⚫ Universal Generalization /Introduction


⚫ Universal Instantiation / Elimination
⚫ Existential Instantiation / Elimination
⚫ Existential Generalization / Introduction
Quantified inference rules
● Universal instantiation
● ∀x P(x) ∴ P(A)
● Universal generalization
● P(A) ∧ P(B) … ∴ ∀x P(x)
● Existential instantiation
● ∃x P(x) ∴P(F) ← skolem constant F
● Existential generalization
● P(A) ∴ ∃x P(x)

(Note: A constant that replaces an existentially quantified
variable is called a Skolem constant.)
Generalized modus ponens (GMP)
⚫ Generalized Modus Ponens can be summarized as, " P
implies Q and P is asserted to be true, therefore Q must
be True."
⚫ Substitutions
⚫ subst(θ, α) denotes the result of applying a set of
substitutions defined by θ to the sentence α
⚫ A substitution list θ = {v1/t1, v2/t2, ..., vn/tn} means to
replace all occurrences of variable symbol vi by term ti
⚫ Substitutions are made in left-to-right order in the list
subst({x/IceCream, y/Ziggy}, eats(y,x)) = eats(Ziggy, IceCream)
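A minimal sketch of subst in Python, representing atomic sentences as nested tuples and the substitution list as (variable, term) pairs. The encoding is illustrative, not from the slides:

```python
def subst(theta, sentence):
    """Apply a substitution list theta = [(var, term), ...] to a sentence,
    replacing each variable occurrence left to right.
    Sentences are nested tuples such as ('eats', 'y', 'x')."""
    if isinstance(sentence, tuple):
        # apply the substitution to every argument recursively
        return tuple(subst(theta, part) for part in sentence)
    for var, term in theta:
        if sentence == var:
            return term
    return sentence

# subst({x/IceCream, y/Ziggy}, eats(y, x)) = eats(Ziggy, IceCream)
print(subst([("x", "IceCream"), ("y", "Ziggy")], ("eats", "y", "x")))
# ('eats', 'Ziggy', 'IceCream')
```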
Unification
⚫ Unification is all about making two expressions look
identical; to make the given expressions identical, we
perform substitution.
⚫ Unification is the process of making two different logical
atomic expressions identical by finding a substitution.
Unification depends on the substitution process.
⚫ Ex. P(x, f(y)), P(a, f(g(z)))
x = a, y = g(z)
Unification: [a/x, g(z)/y]
In the above example, substitute x with a and y with g(z),
written as a/x and g(z)/y.
With both substitutions the first expression becomes identical to
the second expression, and the substitution set is [a/x,
g(z)/y].
Conditions for Unification
Following are some basic conditions for unification:
1. The predicate symbol must be the same; atoms (e.g. P(x, y))
with different predicate symbols can never be unified.
2. The number of arguments in both expressions must be
identical.
3. Unification will fail if the same variable occurs with
conflicting bindings in the expressions (e.g. P(x, x)
cannot be unified with P(a, b)).
Unification algorithm
⚫ The UNIFY algorithm is used for unification, which takes
two atomic sentences and returns a unifier for those
sentences (If any exist).
⚫ Unification is a key component of all first-order inference
algorithms.
⚫ It returns fail if the expressions do not match with each
other.
⚫ The substitution variables are called Most General Unifier
or MGU.
Unification Algorithm: Algorithm: Unify(Ψ1, Ψ2)

⚫ Step. 1:
If Ψ1 or Ψ2 is a variable or constant, then:
a) If Ψ1 and Ψ2 are identical, then return NIL.
b) Else if Ψ1 is a variable,
a. then if Ψ1 occurs in Ψ2, then return FAILURE
b. Else return {(Ψ2/Ψ1)}.
c) Else if Ψ2 is a variable,
a. If Ψ2 occurs in Ψ1 then return FAILURE,
b. Else return {(Ψ1/Ψ2)}.
d) Else return FAILURE.
Step.2: If the initial Predicate symbol in Ψ1 and Ψ2 are not
same, then return FAILURE.
Step. 3: IF Ψ1 and Ψ2 have a different number of
arguments, then return FAILURE.
Step. 4: Set the substitution set (SUBST) to NIL.
Step. 5: For i = 1 to the number of elements in Ψ1:
a) Call the Unify function with the ith element of Ψ1 and the ith
element of Ψ2, and put the result into S.
b) If S = FAILURE then return FAILURE.
c) If S ≠ NIL then do,
a. Apply S to the remainder of both Ψ1 and Ψ2.
b. SUBST = APPEND(S, SUBST).
Step.6: Return SUBST.
Implementation of the Unification Algorithm
⚫ Step.1: Initialize the substitution set to be empty.

⚫ Step.2: Recursively unify atomic sentences:

⚫ Check for Identical expression match.

⚫ If one expression is a variable vi, and the other is a term ti which does
not contain variable vi, then:
⚫ Substitute ti / vi in the existing substitutions

⚫ Add ti /vi to the substitution set list.

⚫ If both the expressions are functions, then function name must be similar,
and the number of arguments must be the same in both the expression.
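The algorithm above can be sketched in Python. The encoding is illustrative: compound expressions are tuples whose first element is the predicate or function symbol, lowercase strings are variables, capitalized strings are constants. Failure is signalled by returning None:

```python
def is_var(t):
    # illustrative convention: lowercase strings are variables
    return isinstance(t, str) and t[:1].islower()

def occurs(var, t, theta):
    """Occurs check: does var appear inside term t (under theta)?"""
    if t == var:
        return True
    if is_var(t) and t in theta:
        return occurs(var, theta[t], theta)
    if isinstance(t, tuple):
        return any(occurs(var, arg, theta) for arg in t[1:])
    return False

def unify_var(var, t, theta):
    if var in theta:
        return unify(theta[var], t, theta)
    if is_var(t) and t in theta:
        return unify(var, theta[t], theta)
    if occurs(var, t, theta):        # Step 1b: occurs check -> FAILURE
        return None
    new = dict(theta)
    new[var] = t
    return new

def unify(x, y, theta=None):
    """Return an MGU (dict var -> term) for x and y, or None on failure."""
    if theta is None:
        theta = {}
    if x == y:                       # Step 1a: identical expressions
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple):
        if x[0] != y[0] or len(x) != len(y):   # Steps 2 and 3
            return None
        for a, b in zip(x[1:], y[1:]):         # Step 5: unify arguments
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

# Unify P(x, f(y)) with P(A, f(g(z))): MGU {x: A, y: g(z)}
print(unify(('P', 'x', ('f', 'y')), ('P', 'A', ('f', ('g', 'z')))))
# p(f(A), g(y)) vs p(x, x): x binds to f(A), then g(y) vs f(A) fails
print(unify(('p', ('f', 'A'), ('g', 'y')), ('p', 'x', 'x')))  # None
```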

⚫ For each pair of the following atomic sentences find the most
general unifier (If exist).
1. Find the MGU of {p(f(a), g(Y)) and p(X, X)}
Sol: S0 => Here, Ψ1 = p(f(a), g(Y)), and Ψ2 = p(X, X)
SUBST θ = {f(a)/X}
S1 => Ψ1 = p(f(a), g(Y)), and Ψ2 = p(f(a), f(a))
Now g(Y) must unify with f(a), but the function symbols
g and f differ, so unification fails.
Unification is not possible for these expressions.


Resolution in FOL
⚫ Resolution is a valid inference rule.
⚫ Resolution is a theorem-proving technique that proceeds by building refutation
proofs, i.e., proofs by contradiction (proving that the negation of the statement
leads to falsehood). It was invented by the mathematician John Alan Robinson in
1965.
⚫ Resolution is used when various statements are given and we need to prove a
conclusion from those statements.
⚫ Unification is a key concept in proofs by resolution. Resolution is a single inference
rule which can efficiently operate on sentences in conjunctive normal form or clausal
form.

◦ Clause: A disjunction of literals (atomic sentences) is called a clause. A
clause containing a single literal is known as a unit clause.
◦ Conjunctive Normal Form: A sentence represented as a conjunction of clauses is
said to be in conjunctive normal form or CNF.

Resolution
Resolution is a sound and complete inference procedure for FOL.
(Soundness provides the initial reason for counting a logical system
as desirable. Completeness means that every validity (truth) is
provable. Together they imply that all and only validities are provable.)
⚫ Reminder: Resolution rule for propositional logic:
◦ P1 ∨ P2 ∨ ... ∨ Pn
◦ ¬P1 ∨ Q2 ∨ ... ∨ Qm
◦ Resolvent: P2 ∨ ... ∨ Pn ∨ Q2 ∨ ... ∨ Qm
⚫ Examples
◦ P and ¬ P ∨ Q : derive Q (Modus Ponens)
◦ (¬ P ∨ Q) and (¬ Q ∨ R) : derive ¬ P ∨ R
◦ P and ¬ P : derive False [contradiction!]
◦ (P ∨ Q) and (¬ P ∨ ¬ Q) : derive Q ∨ ¬ Q, a tautology (True)
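The propositional resolution rule can be sketched in Python, representing a clause as a frozenset of literal strings with '~' marking negation (an illustrative encoding):

```python
def negate(lit):
    """Complement a literal: P <-> ~P."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses. A resolvent drops one pair
    of complementary literals and unions what remains."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# (¬P ∨ Q) and (¬Q ∨ R): resolvent is {~P, R}, i.e. ¬P ∨ R
print(resolve(frozenset({'~P', 'Q'}), frozenset({'~Q', 'R'})))

# P and ¬P resolve to the empty clause: a contradiction
print(resolve(frozenset({'P'}), frozenset({'~P'})))  # [frozenset()]
```

Deriving the empty clause (frozenset()) is exactly the contradiction a refutation proof looks for.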
Example:
⚫ We can resolve the two clauses given below:
⚫ [Animal(g(x)) V Loves(f(x), x)] and [¬ Loves(a, b)
V ¬ Kills(a, b)]
⚫ The two complementary literals are: Loves(f(x), x) and ¬
Loves(a, b)
⚫ These literals can be unified with the unifier
θ = [a/f(x), b/x],
and resolution generates the resolvent clause:
⚫ [Animal(g(x)) V ¬ Kills(f(x), x)].
Steps for Resolution:
⚫ Conversion of facts into first-order logic.
⚫ Convert FOL statements into Conjunctive
Normal Form (CNF).
⚫ Negate the statement to be proved
(proof by contradiction).
⚫ Draw the resolution graph (using unification).
Example:

1. John likes all kind of food.


2. Apple and vegetable are food
3. Anything anyone eats and not killed
is food.
4. Anil eats peanuts and still alive
5. Harry eats everything that Anil eats.
Prove by resolution that:
6. John likes peanuts.
Step-1: Conversion of Facts into FOL
⚫ In the first step we will convert all the
given statements into its first order logic.
1. John likes all kind of
food.
2. Apple and vegetable
are food
3. Anything anyone eats
and not killed is food.
4. Anil eats peanuts and
still alive
5. Harry eats
everything that Anil
eats.
Prove by resolution
that:
6. John likes peanuts.
Step-2: Conversion of FOL into
CNF
In First order logic resolution, it is required to convert the
FOL into CNF as CNF form makes easier for resolution
proofs.

⚫ Eliminate all implication (→) and rewrite


◦ ∀x ¬ food(x) V likes(John, x)
◦ food(Apple) Λ food(vegetables)
◦ ∀x ∀y ¬ [eats(x, y) Λ ¬ killed(x)] V food(y)
◦ eats (Anil, Peanuts) Λ alive(Anil)
◦ ∀x ¬ eats(Anil, x) V eats(Harry, x)
◦ ∀x¬ [¬ killed(x) ] V alive(x)
◦ ∀x ¬ alive(x) V ¬ killed(x)
◦ likes(John, Peanuts).
⚫ Move negation (¬)inwards and rewrite
◦ ∀x ¬ food(x) V likes(John, x)
◦ food(Apple) Λ food(vegetables)
◦ ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
◦ eats (Anil, Peanuts) Λ alive(Anil)
◦ ∀x ¬ eats(Anil, x) V eats(Harry, x)
◦ ∀x killed(x) V alive(x)
◦ ∀x ¬ alive(x) V ¬ killed(x)
◦ likes(John, Peanuts).
⚫ Rename variables or standardize
variables
◦ ∀x ¬ food(x) V likes(John, x)
◦ food(Apple) Λ food(vegetables)
◦ ∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
◦ eats (Anil, Peanuts) Λ alive(Anil)
◦ ∀w¬ eats(Anil, w) V eats(Harry, w)
◦ ∀g killed(g) V alive(g)
◦ ∀k ¬ alive(k) V ¬ killed(k)
◦ likes(John, Peanuts).
⚫ Eliminate existential quantifiers
(Skolemization).
In this step, we eliminate the existential
quantifier ∃; this process is known
as Skolemization. In this example
there is no existential
quantifier, so all the statements
remain the same in this step.
Drop universal quantifiers.
In this step we drop all universal quantifiers,
since all remaining variables are implicitly universally
quantified, so the explicit quantifiers are not needed.
◦ ¬ food(x) V likes(John, x)
◦ food(Apple)
◦ food(vegetables)
◦ ¬ eats(y, z) V killed(y) V food(z)
◦ eats (Anil, Peanuts)
◦ alive(Anil)
◦ ¬ eats(Anil, w) V eats(Harry, w)
◦ killed(g) V alive(g)
◦ ¬ alive(k) V ¬ killed(k)
◦ likes(John, Peanuts).
Distribute conjunction ∧ over disjunction ∨.
This step will not make any change in this problem.
⚫ Step-3: Negate the statement to
be proved
⚫ In this statement, we will apply
negation to the conclusion statements,
which will be written as ¬likes(John,
Peanuts)
Step-4: Draw Resolution graph:
⚫ Now in this step, we will solve the problem by building a resolution
tree using substitution. For the above problem, the proof proceeds as
follows:
⚫ In the first step of the resolution graph, ¬likes(John,
Peanuts) and likes(John, x) get
resolved (canceled) by the substitution {Peanuts/x},
and we are left with ¬ food(Peanuts)
⚫ In the second step of the resolution graph, ¬
food(Peanuts) , and food(z) get resolved
(canceled) by substitution of { Peanuts/z}, and we
are left with ¬ eats(y, Peanuts) V killed(y) .
⚫ In the third step of the resolution graph, ¬ eats(y,
Peanuts) and eats (Anil, Peanuts) get resolved
by substitution {Anil/y}, and we are left
with Killed(Anil) .
⚫ In the fourth step of the resolution
graph, Killed(Anil) and ¬ killed(k) get resolve by
substitution {Anil/k}, and we are left with ¬
alive(Anil) .
⚫ In the last step of the resolution graph ¬
alive(Anil) and alive(Anil) get resolved.

⚫ Hence the negation of the conclusion leads to a
contradiction (the empty clause), so the
conclusion likes(John, Peanuts) is proved.
Forward Chaining and backward chaining in
AI
⚫ Inference engine:
The inference engine is the component of an intelligent
system in artificial intelligence that applies logical rules to
the knowledge base to infer new information from known
facts. The first inference engines were parts of expert
systems. An inference engine commonly proceeds in two modes,
which are:
⚫ Forward chaining
⚫ Backward chaining
Horn Clause and Definite clause
⚫ Horn clause and definite clause are the forms of sentences, which enables
knowledge base to use a more restricted and efficient inference
algorithm.
⚫ Logical inference algorithms use forward and backward chaining approaches,
which require KB in the form of the first-order definite clause.
⚫ Definite clause: A clause which is a disjunction of literals with exactly one
positive literal is known as a definite clause or strict horn clause.
⚫ Horn clause: A clause which is a disjunction of literals with at most one
positive literal is known as horn clause. Hence all the definite clauses are horn
clauses.
Example: (¬ p V ¬ q V k). It has only one positive literal k.
It is equivalent to p ∧ q → k.
Forward chaining
⚫ Proofs start with the given axioms/premises in KB, deriving
new sentences using GMP until the goal/query sentence is
derived
⚫ This defines a forward-chaining inference procedure
because it moves “forward” from the KB to the goal
[eventually]
⚫ Inference using GMP is complete for KBs containing only
Horn clauses
Forward chaining example
⚫ Example:
A
A -> B
B
—————————–
He is running.
If he is running, he sweats.
He is sweating.
Example
⚫ "As per the law, it is a crime for an American to sell
weapons to hostile nations. Country A, an enemy of
America, has some missiles, and all of its missiles were
sold to it by Robert, who is an American citizen."
⚫ Prove that "Robert is a criminal."
⚫ To solve the above problem, first, we will convert all the
above facts into first-order definite clauses, and then we will
use a forward-chaining algorithm to reach the goal.
Facts Conversion into FOL:
⚫ It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and

r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) …(1)

⚫ Country A has some missiles.
∃p Owns(A, p) ∧ Missile(p). This can be written as two definite
clauses by Existential Instantiation, introducing a new constant T1:
Owns(A, T1) ......(2)
Missile(T1) .......(3)

⚫ All of the missiles were sold to country A by Robert.

Missile(p) ∧ Owns (A, p) → Sells (Robert, p, A) ......(4)


⚫ Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
⚫ Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
⚫ Country A is an enemy of America.
Enemy (A, America) .........(7)
⚫ Robert is American
American(Robert). ..........(8)
Forward chaining proof:
⚫ Step-1:
⚫ In the first step we will start with the
known facts and will choose the
sentences which do not have implications,
such as: American(Robert),
Enemy(A, America), Owns(A, T1),
and Missile(T1). All these facts will be
represented as below.
⚫ Step-2:
⚫ At the second step, we will see those facts which
infer from available facts and with satisfied
premises.
⚫ Rule-(1) does not yet have its premises satisfied, so it will
not fire in the first iteration.
⚫ Rule-(2) and (3) are already added.
⚫ Rule-(4) satisfy with the substitution {p/T1}, so
Sells (Robert, T1, A) is added, which infers
from the conjunction of Rule (2) and (3).
⚫ Rule-(6) is satisfied with the substitution(p/A), so
Hostile(A) is added and which infers from
Rule-(7).
⚫ Step-3:
⚫ At step-3, as we can check Rule-(1) is
satisfied with the substitution {p/Robert,
q/T1, r/A}, so we can add
Criminal(Robert) which infers all the
available facts. And hence we reached our
goal statement.
⚫ Hence it is proved that Robert is
Criminal using forward chaining
approach.
Forward chaining example
⚫ KB:
◦ allergies(X) → sneeze(X)
◦ cat(Y) ∧ allergic-to-cats(X) → allergies(X)
◦ cat(Felix)
◦ allergic-to-cats(Lise)
⚫ Goal:
◦ sneeze(Lise):
● allergic-to-cats(Lise)
● cat(Felix)
● cat(Felix) ∧ allergic-to-cats(Lise) → allergies(Lise)
● allergies(Lise) → sneeze(Lise)
● Therefore sneeze(Lise)
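The forward-chaining procedure on this KB can be sketched in Python with a naive fixed-point loop. The rules here are ground instances of the KB rules (variables pre-instantiated to Felix and Lise), an illustrative simplification of full first-order forward chaining:

```python
def forward_chain(facts, rules):
    """Naive forward chaining over ground facts. Each rule is a
    (premises, conclusion) pair; keep firing rules until no new
    fact appears (a fixed point is reached)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in facts for p in premises) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Ground version of the KB above
rules = [
    ({"cat(Felix)", "allergic-to-cats(Lise)"}, "allergies(Lise)"),
    ({"allergies(Lise)"}, "sneeze(Lise)"),
]
derived = forward_chain({"cat(Felix)", "allergic-to-cats(Lise)"}, rules)
print("sneeze(Lise)" in derived)  # True
```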
Forward chaining algorithm
Solve: Our knowledge base contains a
combination of rules and facts about the
user profile.
1. John’s credit score is 780.
2. A person with a credit score greater than 700 has never defaulted
on their loan.
3. John has an annual income of $100,000.
4. A person with a credit score greater than 750 is a low-risk
borrower.
5. A person with a credit score between 600 to 750 is a medium-risk
borrower.
6. A person with a credit score less than 600 is a high-risk borrower.
7. A low-risk borrower can be given a loan amount up to 4X of his
annual income at a 10 percent interest rate.
8. A medium-risk borrower can be given a loan amount of up to 3X of
his annual income at a 12 percent interest rate.
9. A high-risk borrower can be given a loan amount of up to 1X of his
annual income at a 16 percent interest rate.
⚫ QUESTION
⚫ What max loan amount can be
sanctioned for John?
⚫ What will the interest rate be?
⚫ John’s CS = 780 AND a CS > 750 means Low
Risk Borrower → John is a Low Risk
Borrower
⚫ The loan amount for a Low Risk Borrower is
4X annual income AND John’s annual
income is $100k
⚫ → Max loan amount that can be
sanctioned is $400k at a 10% interest
rate.
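The credit-score rules above can be encoded directly; a minimal Python sketch (function name and return shape are illustrative, the thresholds and figures are as stated in the slides):

```python
def assess_loan(credit_score, annual_income):
    """Apply the credit-score rules to derive the risk class,
    maximum loan amount, and interest rate."""
    if credit_score > 750:            # rule 4: low-risk borrower
        risk, multiple, rate = "low", 4, 0.10
    elif credit_score >= 600:         # rule 5: medium-risk borrower
        risk, multiple, rate = "medium", 3, 0.12
    else:                             # rule 6: high-risk borrower
        risk, multiple, rate = "high", 1, 0.16
    return risk, multiple * annual_income, rate

# John: credit score 780, annual income $100,000
risk, max_loan, rate = assess_loan(780, 100_000)
print(risk, max_loan, rate)  # low 400000 0.1
```

This is the same derivation as the forward-chaining trace above: the known facts (score, income) fire the matching rule, which yields the new facts (risk class, loan terms).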
Backward chaining
⚫ Backward-chaining deduction using GMP is also
complete for KBs containing only Horn clauses
⚫ Proofs start with the goal query, find rules with that
conclusion, and then prove each of the antecedents in
the implication
⚫ Keep going until you reach premises
⚫ Avoid loops: check if new subgoal is already on the
goal stack
⚫ Avoid repeated work: check if new subgoal
◦ Has already been proved true
◦ Has already failed
Backward chaining example
⚫ Example:
B (goal)
A -> B
A (subgoal)
—————————–
Goal: He is sweating.
Rule: If he is running, he sweats.
Subgoal: show that he is running.
Example:
⚫ American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧
hostile(r) → Criminal(p)
...(1)
Owns(A, T1) ........(2)
⚫ Missile(T1) …….(3)
⚫ Missiles(p) ∧ Owns (A, p) → Sells (Robert, p, A)
......(4)
⚫ Missile(p) → Weapons (p)
.......(5)
⚫ Enemy(p, America) →Hostile(p)
........(6)
⚫ Enemy (A, America) .........(7)
⚫ American(Robert). ..........(8)
Backward-Chaining proof:
⚫ Step-1:
⚫ At the first step, we will take the goal fact. And
from the goal fact, we will infer other facts, and
at last, we will prove those facts true. So our
goal fact is "Robert is Criminal," so following is
the predicate of it.
⚫ Step-2:
⚫ At the second step, we will infer other facts from the goal
fact which satisfy the rules. As we can see in
Rule-1, the goal predicate Criminal(Robert) is present
with the substitution {Robert/p}. So we will add all the
conjunctive facts below the first level and will replace p
with Robert.
⚫ Here we can see American (Robert) is a fact, so
it is proved here.
⚫ Step-3: At step-3, we extract the further subgoal
Missile(q), to which Weapon(q) reduces, as it
satisfies Rule-(5). Weapon(q) is then true with
the substitution of the constant T1 for q.
⚫ Step-4:
⚫ At step-4, we can infer the facts Missile(T1) and Owns(A,
T1) from Sells(Robert, T1, r), which satisfies Rule-
4, with the substitution of A in place of r. So these
two statements are proved here.
⚫ Step-5:
⚫ At step-5, we can infer the fact Enemy(A,
America) from Hostile(A) which satisfies Rule-
6. And hence all the statements are proved true
using backward chaining.
Backward chaining example
⚫ KB:
◦ allergies(X) → sneeze(X)
◦ cat(Y) ∧ allergic-to-cats(X) → allergies(X)
◦ cat(Felix)
◦ allergic-to-cats(Lise)
⚫ Goal:
◦ sneeze(Lise)
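Backward chaining on this KB can be sketched in Python as a recursive goal-reduction search, using the loop check the earlier slide advises. As with the forward-chaining sketch, the rules are ground instances (an illustrative simplification):

```python
def backward_chain(goal, facts, rules, stack=None):
    """Try to prove goal: it holds if it is a known fact, or if some
    rule concludes it and all of that rule's premises can be proved.
    The stack of open goals guards against infinite regress."""
    if stack is None:
        stack = set()
    if goal in facts:
        return True
    if goal in stack:                # subgoal already on the goal stack
        return False
    for premises, conclusion in rules:
        if conclusion == goal:       # rule whose conclusion matches the goal
            if all(backward_chain(p, facts, rules, stack | {goal})
                   for p in premises):
                return True
    return False

rules = [
    ({"cat(Felix)", "allergic-to-cats(Lise)"}, "allergies(Lise)"),
    ({"allergies(Lise)"}, "sneeze(Lise)"),
]
facts = {"cat(Felix)", "allergic-to-cats(Lise)"}
print(backward_chain("sneeze(Lise)", facts, rules))  # True
```

Note the direction: the goal sneeze(Lise) is reduced to the subgoal allergies(Lise), which is reduced to the known facts, the opposite of the forward-chaining order.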
Backward chaining algorithm
Solve
⚫ KNOWLEDGE BASE
⚫ We have a few facts and rules that constitute
our knowledge base:
1. John is taller than Kim
2. John is a boy
3. Kim is a girl
4. John and Kim study in the same class
5. Everyone else other than John in the class is
shorter than Kim
⚫ QUESTION
⚫ Is John the tallest boy in class?
Forward Chaining vs. Backward Chaining
1. Forward chaining starts from known facts and applies inference
rules to extract more data until it reaches the goal. Backward
chaining starts from the goal and works backward through inference
rules to find the known facts that support the goal.
2. Forward chaining is a bottom-up approach; backward chaining is a
top-down approach.
3. Forward chaining is known as a data-driven inference technique, as
we reach the goal using the available data. Backward chaining is
known as a goal-driven technique, as we start from the goal and
divide it into sub-goals to extract the facts.
4. Forward chaining reasoning applies a breadth-first search strategy;
backward chaining applies a depth-first search strategy.
5. Forward chaining tests all the available rules; backward chaining
tests only the few required rules.
6. Forward chaining is suitable for planning, monitoring, control, and
interpretation applications. Backward chaining is suitable for
diagnostic, prescription, and debugging applications.
7. Forward chaining can generate an infinite number of possible
conclusions; backward chaining generates a finite number of
possible conclusions.
8. Forward chaining operates in the forward direction; backward
chaining operates in the backward direction.
9. Forward chaining can be aimed at any conclusion; backward
chaining is aimed only at the required data.
Knowledge Representation
Ontological Engineering

⚫ What Is An Ontology
◦ An ontology is an explicit description of a
domain:
● Concepts
● properties and attributes of concepts
● constraints on properties and attributes
◦ An ontology defines
● a common vocabulary
● a shared understanding
Ontological Engineering
⚫ What Is “Ontology Engineering”?
◦ Ontology Engineering: Defining terms in the domain
and relations among them
● Defining concepts in the domain (classes)
● Arranging the concepts in a hierarchy (subclass-superclass
hierarchy)
● Defining which attributes and properties (slots) classes
can have and constraints on their values
● Defining individuals and filling in slot values
◦ In computer science and information system, an
ontology formally represents knowledge as a set of
concepts within a domain, and the relationships
among those concepts.
Ontological Engineering
⚫ Ontology of the world
Ontological Engineering

⚫ Two major characteristics of general-purpose
ontologies:
◦ A general-purpose ontology should be applicable in more
or less any special-purpose domain (with the addition of
domain-specific axioms). This means that no
representational issue can be finessed or brushed under
the carpet.
◦ In any sufficiently demanding domain, different areas of
knowledge must be unified, because reasoning and
problem solving could involve several areas simultaneously.
A robot circuit-repair system, for instance, needs to
reason about circuits in terms of electrical connectivity
and physical layout, and about time, both for circuit timing
analysis and estimating labor costs
Ontological Engineering
⚫ Layers of Functional Ontology and Knowledge
Categories and Objects

⚫ First-order logic makes it easy to state facts
about categories, either by relating objects to
categories or by quantifying over their
members.
⚫ Some types of facts :
◦ An object is member of category
◦ A category is a subclass of another category
◦ All members of category have some properties
◦ Member of a category can be recognized by some
properties
◦ A category as a whole has some properties
Categories and Objects

⚫ The organization of objects into
categories is a vital part of Knowledge
Representation.
⚫ Important relationships are
◦ subclass relation (AKO - a kind of)
<category> AKO <category>.
◦ instance relation ( ISA - is a)
<object> ISA <category>.
Categories and Objects
⚫ Categories
◦ Category is a kind of set and denotes a set of objects.
◦ A category has a set of properties that is common to all its
members.
◦ Categories are formally represented in logic as predicates,
but we will also regard categories as a special kind of objects.
◦ We then actually introduce a restricted form of second
order logic, since the terms that occur may be predicates.
◦ Example: Elephants and Mammals are categories.
◦ The set denoted by Elephants is a subset of the set denoted
by Mammals.
◦ The set of properties common to Elephants is a superset of
the set of properties common to Mammals.
Categories and Objects
⚫ Taxonomy
◦ Subcategory relations organize categories into a
taxonomy or taxonomic hierarchy.
◦ Other names are type hierarchy or class hierarchy.
◦ We state that a category is a subcategory of
another category by using the notation for subsets
Basketball ⊆ Ball
◦ We will also use the notation
ako (basketball, ball).
Categories and Objects
⚫ Category Representations
◦ There are two choices of representing categories in first order
logic: predicates and objects.
◦ That is, we can use the predicate Basketball(b) or we can reify the
category as an ”object” basketball. We could then write
member(x,basketball) or
x ∈ basketball
◦ We will also use the notation
isa(x,basketball).
◦ Basketball is a subset or subcategory of Ball, which is abbreviated
Basketball ⊆ Ball
◦ We will also use the notation
ako(basketball,ball).
Categories and Objects

⚫ Physical Composition
◦ One object can be a part of another object.
◦ Example, declaring direct parts:
● part(bucharest, romania).
● part(romania, eastern_europe).
● part(eastern_europe, europe).
● part(europe, earth).
◦ We can make partof a transitive extension of part:
part(X,Y) => partof(X,Y).
partof(X,Y) and partof(Y,Z) => partof(X,Z).
◦ and reflexive (*)
partof(X,X).
◦ Therefore we can conclude that partof
(bucharest, earth)
(*) depending on definition
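The transitive, reflexive partof relation above can be sketched in Python as a recursive walk over the direct-part pairs (the chain through eastern_europe and europe is assumed, completing the links the slide lists):

```python
def partof(x, y, direct_parts):
    """Reflexive-transitive closure of the direct part relation,
    mirroring: part(X,Y) => partof(X,Y), and
    partof(X,Y) and partof(Y,Z) => partof(X,Z)."""
    if x == y:                  # reflexive case: partof(X, X)
        return True
    for a, b in direct_parts:   # follow one direct-part step, then recurse
        if a == x and partof(b, y, direct_parts):
            return True
    return False

parts = [("bucharest", "romania"),
         ("romania", "eastern_europe"),
         ("eastern_europe", "europe"),
         ("europe", "earth")]
print(partof("bucharest", "earth", parts))  # True
```

This naive recursion assumes the part hierarchy is acyclic; a cyclic relation would need a visited set to terminate.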
Events
⚫ Events are described as instances of event categories.
“The event E1 of Shankar flying from San Francisco
to Washington DC” is described as :
◦ E1 ∈ Flyings ∧ Flyer(E1,Shankar) ∧ Origin(E1, SF) ∧
Destinations(E1, DC).
⚫ Process: a category of events every sub-interval of which
is also a member of the same category (a “liquid” event).
⚫ Intervals: events that include as sub-events all events
occurring in a given time period (thus they are
temporal sections of the entire spatial universe).
Events
⚫ Places: spatial sections of the
spatio-temporal universe that extend
through time.
⚫ Location function: maps an object to the
smallest place that contains it: ∀x,l
Location(x) = l ⇔ At(x, l) ∧ ∀l2 (At(x, l2)
→ In(l, l2))
Events
⚫ Human Activity Detection
◦ Nevatia/Medioni/Cohen
Events
Mental Events and Mental Objects
Vocabulary

⚫ Logical Omniscience – the idealized
assumption of an agent with infallible logic,
one that knows all logical consequences of
what it knows.
⚫ Knowledge Preconditions – Something
which requires other knowledge to perform.
Such as knowing someone’s phone number
or having it saved before calling them.
⚫ Knowledge Effects – knowledge that enables an
action: knowing someone's number has the effect of
enabling you to call that person.
Knowledge about knowledge
⚫ In order for us to use inference in the best way possible we
must construct a knowledge base about our knowledge base.
⚫ This may seem redundant, but consider: if you know that you need a
map in order to know where things are, and you do not have one, then
you know the first step is to acquire a map. The alternative is to
attempt to calculate distances with no knowledge of the geography
whatsoever, or simply to answer that the route is unknown. Knowing
about our own knowledge gives us a plan for which actions are needed
and why certain data is important.
( Having a map is a Knowledge Precondition whose acquisition produces
the desired Knowledge Effect )
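The map example can be sketched as a planner that checks knowledge preconditions before acting. All the names here (`knowledge_base`, the action and precondition strings) are hypothetical illustrations, not a real planning API:

```python
# If a knowledge precondition is missing, plan an information-gathering
# step to acquire it before the action itself.
knowledge_base = {"map_of_city"}        # what the agent currently knows

def plan(action, preconditions, kb):
    """Return a plan: acquire any missing knowledge, then act."""
    steps = [f"acquire({p})" for p in preconditions if p not in kb]
    return steps + [action]

print(plan("navigate_to(station)", ["map_of_city"], knowledge_base))
# ['navigate_to(station)'] -- precondition already satisfied
print(plan("call(bob)", ["phone_number(bob)"], knowledge_base))
# ['acquire(phone_number(bob))', 'call(bob)']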
Belief vs Knowledge
⚫ Knowledge:- in terms of artificial intelligence is something which is always
true.
⚫ Belief on the other hand deals more with probability.
⚫ The same somewhat applies to life. A simple example: we know that
eating food is necessary for living, so we eat; we do what we know is
beneficial. We also do things we merely believe are beneficial, such as
attempting to gain money. One example of belief in society is gambling:
we do not know that we will win, but we know there is some probability
of winning, and we place a bet if we believe we will win.
⚫ Belief is also something which can change over time, for example, Lois
believes Superman flies today. T(Believes(Lois,”Flies(Superman)”),Today).
⚫ The same will not be true in 100 years, when Superman has died of old age.
Referential Transparency and
Referentially Opaque
⚫ Referential Transparency – terms that denote the same
object can be interchanged.
⚫ i.e. Clark = Superman
⚫ ∴ Flies(Clark) = Flies(Superman)
⚫ e.g. if an agent knows that 2+2=4 and 4<5, then the
agent should know that 2+2<5

⚫ Referentially Opaque – the opposite of
Referential Transparency: two terms that denote the
same object cannot be swapped inside the context. The
context is not referentially transparent.
Why this matters…
⚫ Just because Lois believes Superman can fly, doesn’t mean
she believes that Clark can fly. By making the
Flies(Superman) opaque, we can then say Believes(Lois,
Flies(Superman)), without saying
Believes(Lois, Flies(Clark)).
⚫ Syntactic Theory
◦ Mental objects are represented by strings.
◦ Even if Clark = Superman, "Clark" != "Superman".
◦ .·. The objects are treated separately, while still maintaining
equality (Separate but equal).
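The syntactic theory can be sketched directly: beliefs are stored as strings, so equality of the denoted individuals never licenses substitution inside a belief. The variable names are illustrative:

```python
# One individual, two names: Clark = Superman as objects, but the belief
# context compares strings, so "Clark" and "Superman" stay distinct.
clark = superman = object()             # same individual under both names

beliefs_of_lois = {"Flies(Superman)"}   # mental objects represented as strings

def believes(agent_beliefs, sentence):
    """Opaque belief check: compares strings, never the denoted objects."""
    return sentence in agent_beliefs

print(clark is superman)                              # True: same individual
print(believes(beliefs_of_lois, "Flies(Superman)"))   # True
print(believes(beliefs_of_lois, "Flies(Clark)"))      # False: opacity
```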
Modal Logic
⚫ Modal Logic simply defines Believes and Knows as opaque,
and therefore no terms in the sentence are exchanged.
⚫ designed to allow referential opacity into knowledge
base.
⚫ regular logic is concerned with single modality (the
modality of truth), allowing us to express “P is true”
⚫ modal logic includes modal operators that take
sentences (rather than terms) as arguments (e.g. "A knows
P" is represented with notation KAP where K is the modal
operator for knowledge, A is the agent, and P is a
sentence).
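The standard semantics for the K operator is possible worlds: agent A knows P iff P holds in every world A considers possible. Here is a toy sketch of that reading; the worlds, propositions, and accessibility relation are invented for illustration:

```python
# Possible-worlds reading of K_A P: P is known by A iff P is true in every
# world accessible to A (every world A cannot rule out).
worlds = {
    "w1": {"P"},            # propositions true in each world
    "w2": {"P", "Q"},
    "w3": {"Q"},
}
accessible = {"A": {"w1", "w2"}}   # A cannot distinguish w1 from w2

def knows(agent, prop):
    """K_agent prop: prop holds in all worlds accessible to the agent."""
    return all(prop in worlds[w] for w in accessible[agent])

print(knows("A", "P"))   # True:  P holds in both w1 and w2
print(knows("A", "Q"))   # False: Q fails in w1
```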
Reasoning Systems for Categories
⚫ Categories are the primary building blocks of any
large-scale knowledge representation scheme.
⚫ This topic describes systems specially designed
for organizing and reasoning with categories.
⚫ There are two families of such reasoning systems:
◦ semantic networks and description logics.
Semantic networks.
⚫ Semantic networks provide graphical aids for visualizing a
knowledge base as patterns of interconnected nodes and arcs.
⚫ They have efficient algorithms for inferring properties of an
object on the basis of its category membership.
⚫ In 1909 Charles Peirce proposed a graphical notation of nodes
and arcs called existential graphs that is a visual notation for
logical expressions.
⚫ A typical graphical notation displays objects or categories
names in oval or boxes and connects them with labeled arcs.
⚫ For example, a MemberOf link between Mary and FemalePersons
corresponds to the logical assertion Mary ∈ FemalePersons.
Semantic networks.
⚫ We can connect categories using SubsetOf links.
⚫ A single box can be used to assert properties of every member of a
category.
⚫ Semantic network notation makes it very convenient to perform
inheritance reasoning.
⚫ For example, by virtue of being a person, Mary inherits the property
of having two legs.
⚫ The simplicity and efficiency of this inference mechanism, compared
with logical theorem proving, has been one of the main attractions of
semantic networks.
⚫ Inheritance becomes complicated when an object belongs to more
than one category,
⚫ and also when a category is a subset of more than one other
category; this is called multiple inheritance.
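Inheritance reasoning over MemberOf and SubsetOf links can be sketched as a walk up the category chain. The fact base follows the Mary example; the extra Animals category is an invented illustration:

```python
# Semantic-network inheritance: a property asserted of a category applies
# to every member, found by climbing SubsetOf links from the object's category.
member_of = {"mary": "female_persons"}
subset_of = {"female_persons": "persons", "persons": "animals"}
properties = {"persons": {"legs": 2}, "animals": {"alive": True}}

def inherit(obj, prop):
    """Walk up from obj's category through SubsetOf links to find prop."""
    cat = member_of.get(obj)
    while cat is not None:
        if prop in properties.get(cat, {}):
            return properties[cat][prop]
        cat = subset_of.get(cat)
    return None

print(inherit("mary", "legs"))    # 2: inherited from Persons
print(inherit("mary", "alive"))   # True: inherited from Animals
```

Note that this simple walk follows a single chain; with multiple inheritance the walk would branch and could find conflicting values, which is exactly the complication the slide describes.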
Semantic networks
⚫ Multiple inheritance can cause the algorithm to find two or
more conflicting values when answering a query.
⚫ For this reason multiple inheritance is banned in some
object-oriented programming languages, such as Java.
⚫ Another form of inference is the use of inverse links.
⚫ For example, HasSister is the inverse of SisterOf. It is fine to
have inverse links as long as they are made into objects in
their own right.
⚫ Semantic networks provide direct indexing only for objects,
categories, and the links emanating from them.
⚫ The drawback is that each link between bubbles can represent only
a single binary relation.
Description logics
⚫ Description logics are logical notations designed to make it
easier to describe definitions and properties of
categories.
⚫ They evolved from semantic networks.
⚫ The principal inference tasks for description logics are
subsumption (checking if one category is a subset of another by
comparing their definitions) and classification (checking whether
an object satisfies a category's membership criteria).
Description logics
⚫ The CLASSIC language is a typical description logic.
⚫ Any CLASSIC description can be written in first-order logic.
⚫ Description logics emphasize tractability of
inference.
⚫ The price is that hard problems either cannot be stated or
require very large descriptions.
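Subsumption in the simplest conjunctive case can be sketched as a set-inclusion test over concept definitions. This mirrors only the most basic fragment of a CLASSIC-style subsumption check, and the concept names are invented examples:

```python
# Each concept is defined as a conjunction of atomic requirements.
# Concept C subsumes concept D when every requirement of C is among D's:
# D's instances then necessarily satisfy C.
definitions = {
    "Person": frozenset({"human"}),
    "Parent": frozenset({"human", "has_child"}),
    "Mother": frozenset({"human", "has_child", "female"}),
}

def subsumes(general, specific):
    """True if every conjunct of `general` appears in `specific`'s definition."""
    return definitions[general] <= definitions[specific]

print(subsumes("Person", "Mother"))   # True:  Mother is a subset of Person
print(subsumes("Mother", "Parent"))   # False: a Parent need not be female
```

Real description logics add role restrictions, number constraints, and more, which is where the tractability trade-off the slide mentions comes in.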
Reference
• Text Books:
• Stuart Russell and Peter Norvig, "Artificial
Intelligence: A Modern Approach", Third
Edition, Pearson, 2010, ISBN-10: 0136042597
Thank You
