Artificial Intelligence

General Issues and Overview of AI: the AI problems-

Artificial Intelligence (AI) is rapidly transforming our world, bringing both


benefits and challenges. Here's an overview of AI and some of the key
issues surrounding it:

Benefits of AI

 Efficiency and Productivity: AI can automate tasks, improve


decision-making, and optimize processes, leading to increased
efficiency and productivity across various industries.
 Innovation: AI is driving innovation in various fields, from drug
discovery and materials science to financial modelling and creative
content generation.
 Improved Quality of Life: AI has the potential to improve our
quality of life in many ways, such as by personalizing healthcare,
enhancing education, and developing safer transportation systems.

Challenges of AI

 Job displacement: One of the biggest concerns about AI is its


potential to automate jobs, leading to unemployment and
economic disruption.
 Bias and discrimination: AI systems can perpetuate and amplify
existing biases in data and decision-making, leading to
discriminatory outcomes.
 Safety and security: As AI systems become more complex, there
are growing concerns about their safety and security. For instance,
malicious actors could hack into AI systems and use them to cause
harm.
 Lack of transparency and accountability: It can be difficult to
understand how complex AI systems make decisions, making it
challenging to hold them accountable for errors or biases.
 Privacy concerns: AI systems often require large amounts of data
to function, raising concerns about data privacy and the potential
for misuse of personal information.

what is an AI technique-



AI techniques are the tools and methods used to create intelligent
systems that can perform tasks typically requiring human-like abilities.
These techniques encompass various approaches that allow AI systems
to learn from data, process information, identify patterns, and make
predictions.

Here are some of the common AI techniques:

 Machine Learning (ML): This is a core area of AI where algorithms


learn from data without being explicitly programmed. The data can
be labeled or unlabeled, and the algorithms can learn to perform
various tasks, such as classification, recommendation, and
prediction.
 Natural Language Processing (NLP): This technique allows AI
systems to understand and manipulate human language. NLP is
used in tasks like machine translation, sentiment analysis, and
chatbots.
 Computer Vision: This technique enables AI systems to interpret
and analyse visual information from digital images and videos.
 Deep Learning: This is a subfield of machine learning that uses
artificial neural networks inspired by the structure and function of
the human brain. Deep learning models are particularly effective at
handling complex data such as images, text, and speech.

Characteristics of AI applications-

1. AI in Astronomy

o Artificial Intelligence can be very useful for solving complex problems about the universe. AI technology can help us understand the universe, such as how it works and how it originated.

2. AI in Healthcare

o Over the last five to ten years, AI has become increasingly valuable to the healthcare industry and is set to have a significant impact on it.
o Healthcare industries are applying AI to make better and faster diagnoses than humans.



3. AI in Gaming

o AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think about a large number of possible positions.

4. AI in Finance

o AI and the finance industry are a natural match. The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning in financial processes.

5. AI in Social Media

o Social Media sites such as Facebook, Twitter, and Snapchat contain


billions of user profiles, which need to be stored and managed in a
very efficient way. AI can organize and manage massive amounts
of data.

6. AI in Robotics:

o Artificial Intelligence has a remarkable role in robotics. Conventional robots are programmed to perform repetitive tasks, but with the help of AI they can sense their environment and adapt their behaviour.

7. AI in Entertainment

o We already use AI-based applications in daily life through entertainment services such as Netflix and Amazon. With the help of ML/AI algorithms, these services recommend programs and shows.

8. AI in education:

o AI can automate grading so that tutors have more time to teach. An AI chatbot can communicate with students as a teaching assistant.

Introduction to LISP programming-



Lisp is a programming language whose overall style is organized around expressions and functions. LISP is known for its unique syntax and powerful features, particularly its support for symbolic expressions and dynamic typing.
Lisp is the second-oldest high-level programming language in the world; it was invented by John McCarthy in 1958 at the Massachusetts Institute of Technology.
Features of LISP Programming Language:
1. It is a machine-independent language.
2. It uses an iterative design methodology and is easily extensible.
3. It allows us to create and update programs and applications dynamically.
4. It provides high-level debugging.
5. It supports object-oriented programming.
6. It is an expression-based language.
7. It supports input and output functions.
8. It lets us define our own functions.
Syntax and numeric functions-
In LISP, the syntax revolves around symbolic expressions, often
referred to as S-expressions. These expressions are enclosed in
parentheses and can contain atoms (symbols or literals) and other S-
expressions. Here's a brief overview of LISP syntax and some numeric
functions:
1. Syntax: In LISP, every expression is either an atom or a list.
Atoms can be symbols (variable names) or literals (constants).
Lists are enclosed in parentheses and can contain atoms or
other lists. For example:
 Atoms: x, 123, -3.14, +
 Lists: (x y z), (1 2 3), (+ 2 3), ((x) (y))



2. Numeric Functions: LISP provides a variety of built-in functions
for numerical computations. Here are some common numeric
functions:
 Arithmetic Operations:
 +: Addition
 -: Subtraction
 *: Multiplication
 /: Division
 Comparison Operators:
 =: Equality
 >: Greater than
 <: Less than
 >=: Greater than or equal to
 <=: Less than or equal to
 Mathematical Functions:
 sqrt: Square root
 exp: Exponential function
 log: Natural logarithm
 sin, cos, tan: Trigonometric functions (in radians)
 Rounding Functions:
 round: Rounds to the nearest integer
 floor: Rounds down to the nearest integer
 ceiling: Rounds up to the nearest integer
 Random Number Generation:
 (random n): Returns a random number at least 0 and less than n. For example, (random 10) yields an integer between 0 and 9, and (random 1.0) yields a float between 0.0 and 1.0.
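A few of these functions in action; the return values follow standard Common Lisp behaviour:
```lisp
(+ 2 3)          ; => 5
(* 4 (- 10 7))   ; => 12
(sqrt 16)        ; => 4.0
(>= 5 3)         ; => T
(floor 3.7)      ; => 3 (with the remainder 0.7 as a second value)
(random 10)      ; => an integer from 0 to 9
```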
Basic list manipulation functions-
In LISP, list manipulation is fundamental, as lists are a primary data
structure. LISP provides several built-in functions for creating,
accessing, modifying, and manipulating lists. Here are some basic list
manipulation functions:

1. cons: Constructs a new list by adding an element to the front of an existing list.
```lisp
(cons 1 '(2 3 4)) ; => (1 2 3 4)
```
2. car: Returns the first element of a list.
```lisp
(car '(1 2 3 4)) ; => 1
```
3. cdr: Returns the rest of the list after the first element.
```lisp
(cdr '(1 2 3 4)) ; => (2 3 4)
```
4. nth: Returns the nth element of a list (0-indexed).
```lisp
(nth 2 '(1 2 3 4)) ; => 3
```
5. length: Returns the length of a list.
```lisp
(length '(1 2 3 4)) ; => 4
```
6. append: Appends two or more lists together.
```lisp
(append '(1 2) '(3 4)) ; => (1 2 3 4)
```
7. reverse: Reverses the elements of a list.
```lisp
(reverse '(1 2 3 4)) ; => (4 3 2 1)
```
8. list: Constructs a list from its arguments.
```lisp
(list 1 2 3 4) ; => (1 2 3 4)
```
9. nthcdr: Returns the nth cdr of a list.
```lisp
(nthcdr 2 '(1 2 3 4)) ; => (3 4)
```
10. member: Checks whether an element is a member of a list and returns the tail of the list starting with that element.
```lisp
(member 3 '(1 2 3 4)) ; => (3 4)
```
predicates and conditionals-
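A minimal sketch of common LISP predicates (functions that return T or NIL) and the if and cond conditionals; all operators shown are standard Common Lisp:
```lisp
;; Predicates test a property and return T or NIL.
(numberp 5)        ; => T   (is it a number?)
(listp '(1 2))     ; => T   (is it a list?)
(null '())         ; => T   (is it the empty list?)
(evenp 7)          ; => NIL (is it even?)

;; IF chooses between two branches.
(defun sign-word (n)
  (if (< n 0) 'negative 'non-negative))

;; COND chains several condition/result pairs.
(defun grade (score)
  (cond ((>= score 90) 'a)
        ((>= score 75) 'b)
        ((>= score 60) 'c)
        (t 'fail)))    ; T acts as the default branch

;; (sign-word -3) ; => NEGATIVE
;; (grade 80)     ; => B
```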



input output and local variables-
LISP provides mechanisms for your programs to interact with the user and manage data
within specific code blocks. Let's explore input/output (I/O) functions and local variables.

Input/Output (I/O): Bringing Programs to Life

 LISP offers functions to retrieve data from the user (input) and display information
(output).

Common I/O Functions:

 read: Reads an expression from the user's input stream (usually the keyboard) and returns it as data; it parses the expression but does not evaluate it.
 print: Prints an expression to the standard output stream (usually the console).
 princ: Similar to print but doesn't add a newline after the output.
 (terpri): Inserts a newline character in the output stream.

Local Variables: Keeping Things Private

 Local variables are declared and accessible only within a specific code block,
typically a function definition.
 This helps prevent naming conflicts and promotes better code organization.

Creating Local Variables:

 let: This is a common way to define local variables within a function.


o Format: (let ((var1 value1) (var2 value2) ...) expression)
 var1, var2: Names of your local variables.
 value1, value2: Values assigned to the corresponding variables.
 expression: The code that uses the local variables.
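A minimal sketch combining these I/O functions with local variables (let* is the sequential variant of let, used here because the second variable depends on the first):
```lisp
(defun square-prompt ()
  "Ask for a number, bind local variables, and print the square."
  (princ "Enter a number: ")       ; prompt without a trailing newline
  (let* ((n (read))                ; read one expression from the user
         (sq (* n n)))             ; local variable visible only in this LET*
    (print sq)                     ; print the result
    (terpri)))                     ; end with a newline
```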

iteration and recursion, property lists and arrays-

These topics are set as an assignment (to be read from the file); a brief illustration of iteration and recursion follows.
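A minimal sketch contrasting the two styles in LISP, using factorial:
```lisp
;; Recursive factorial: the function calls itself on a smaller input.
(defun fact-rec (n)
  (if (<= n 1)
      1
      (* n (fact-rec (- n 1)))))

;; Iterative factorial using the LOOP macro.
(defun fact-iter (n)
  (let ((result 1))
    (loop for i from 2 to n
          do (setf result (* result i)))
    result))

;; (fact-rec 5)  ; => 120
;; (fact-iter 5) ; => 120
```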

Unit-2
Problem Solving-



In Artificial Intelligence (AI), problem-solving refers to the techniques and algorithms that
enable machines to analyze situations, reason about them, and develop solutions to achieve
specific goals. Here's a breakdown of key concepts in AI problem-solving:

1. Problem Representation:

The first step is defining the problem in a way that an AI system can understand. This
involves:

 State Space: Defining all possible states the system can be in during the problem-
solving process.
 Operators: Actions that can be taken to transition from one state to another.
 Goal State: The desired state the system aims to reach.

2. Search Algorithms:

AI systems often employ search algorithms to navigate the state space and find a path to the
goal state. Common search algorithms include:

 Depth-First Search (DFS): Explores all possibilities down a single path before
backtracking.
 Breadth-First Search (BFS): Explores all states at a given depth level before moving
to the next level. Guaranteed to find the shortest path if it exists but can be memory-
intensive for large problems.
 Heuristic Search (e.g., A*): Combines BFS/DFS with a heuristic function that
estimates the cost of reaching the goal from a particular state.

3. Knowledge Representation and Reasoning:

AI systems can leverage knowledge representation techniques like logic, rules, and
ontologies to encode domain-specific knowledge.

4. Machine Learning:

Machine learning algorithms can be used to train AI systems on problem-solving tasks.


Techniques like reinforcement learning can enable an AI system to learn by trial and error.

5. Examples of Problem Solving in AI:

 Game playing
 Robotics:
 Natural Language Processing (NLP): AI systems can analyze text, translate
languages, and answer questions by understanding the context and relationships
between words.
 Medical diagnosis: AI systems can analyze medical data and images to assist doctors
in diagnosing diseases with improved accuracy.

Search and Control Strategies: General problem solving-

Same as the general problem-solving discussion in the first section above.



production systems, control strategies: forward and backward chaining-
Production systems are a type of rule-based system used in artificial
intelligence for knowledge representation and problem-solving. They
consist of a set of rules (productions) that describe conditions and
actions. These rules are typically in the form of "if-then" statements,
where the "if" part specifies conditions and the "then" part specifies
actions to be taken if those conditions are met.
There are two primary control strategies used in production systems:
forward chaining and backward chaining.

A. Forward Chaining
Forward chaining is also known as forward deduction or the forward reasoning method
when using an inference engine. Forward chaining is a form of reasoning that starts
with atomic sentences in the knowledge base and applies inference rules (Modus
Ponens) in the forward direction to extract more data until a goal is reached.

Properties of Forward-Chaining:

o It is a bottom-up approach, as it moves from the bottom (data) to the top (goal).


o The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
o The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in business and production rule systems.

B. Backward Chaining:
Backward chaining is also known as backward deduction or the backward reasoning
method when using an inference engine. A backward chaining algorithm is a form of
reasoning that starts with the goal and works backward, chaining through rules to
find known facts that support the goal.

Properties of backward chaining:

o It is known as a top-down approach.


o Backward chaining is based on the modus ponens inference rule.
o In backward chaining, the goal is broken into sub-goals to prove the facts true.



o It is called a goal-driven approach, as a list of goals decides which rules are
selected and used.
o The backward-chaining method mostly uses a depth-first search strategy for proof.

exhaustive searches: depth-first and breadth-first search-


Exhaustive search algorithms, such as depth-first search (DFS) and
breadth-first search (BFS), are used to systematically explore all
possible solutions to a problem. These algorithms are commonly
employed in graph theory and artificial intelligence for traversing
graphs or trees to find specific nodes or paths. Let's explore both DFS
and BFS:
1. Depth-First Search (DFS):
 In DFS, the search explores as far as possible along each
branch of the graph/tree before backtracking.
 It uses a stack (either explicitly or implicitly through
recursion) to keep track of the nodes to be visited.
 DFS is typically implemented using recursion or an explicit
stack data structure.
2. Breadth-First Search (BFS):
 In BFS, the search explores all nodes at the current depth level before moving to the next level.
 BFS guarantees finding the shortest path (in terms of edges) from the start node to any other reachable node.
 BFS is typically implemented using a queue data structure.
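A minimal sketch of both strategies in LISP. The only difference is whether newly generated paths go on the front of the frontier (a stack, giving DFS) or the back (a queue, giving BFS); the example graph is hypothetical:
```lisp
;; A toy graph as an association list: node -> list of neighbours.
(defparameter *graph* '((a b c) (b d) (c d) (d e) (e)))

(defun neighbours (node)
  (rest (assoc node *graph*)))

(defun search-path (start goal &key bfs)
  "Return a path from START to GOAL; DFS by default, BFS when :BFS is T."
  (let ((frontier (list (list start))))      ; paths stored newest node first
    (loop while frontier
          do (let* ((path (pop frontier))
                    (node (first path)))
               (when (eql node goal)
                 (return (reverse path)))
               (let ((extensions
                       (loop for n in (neighbours node)
                             unless (member n path)   ; avoid cycles
                               collect (cons n path))))
                 (setf frontier
                       (if bfs
                           (append frontier extensions)       ; queue: add to back
                           (append extensions frontier)))))))) ; stack: add to front

;; (search-path 'a 'e)        ; => (A B D E) via depth-first search
;; (search-path 'a 'e :bfs t) ; => (A B D E) via breadth-first search
```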

Heuristic Search Techniques Hill climbing-


Heuristic search techniques, such as hill climbing, are used to find
solutions to optimization problems, especially in cases where
exhaustive search methods like depth-first search or breadth-first
search are not feasible due to the large search space. Heuristic search
algorithms aim to efficiently explore the search space by making



informed decisions based on heuristic information or rules. Let's
delve into hill climbing:

Hill Climbing Algorithm in Artificial Intelligence
o The hill climbing algorithm is a local search algorithm that continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem.
o It is also called greedy local search, as it only looks at its immediate neighbour states and not beyond them.
o A node of the hill climbing algorithm has two components: state and value.
o Hill climbing is mostly used when a good heuristic is available.

State-space Diagram for Hill Climbing:


o The state-space landscape is a graphical representation of the hill-climbing algorithm, showing the relationship between the various states of the algorithm and the objective function/cost.

Steps of Hill Climbing:


1. Initialization: Start from an initial solution.



2. Evaluation: Evaluate the current solution using the objective
function.
3. Neighbourhood Generation: Generate neighbouring solutions
by making small modifications to the current solution.
4. Selection: Select the neighbouring solution that improves the
objective function value the most (if any).
5. Termination: Terminate when a local maximum is reached, i.e.,
when no neighbouring solution provides a better value.
Pros and Cons:
- Pros: Hill climbing is simple, easy to implement, and
computationally efficient for some problems.
- Cons: It may get stuck at local maxima and does not guarantee
finding the global optimum.
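A minimal hill-climbing sketch in LISP over the integers, where the neighbours of x are x-1 and x+1; the objective function is a hypothetical single-peak example:
```lisp
(defun objective (x)
  (- 10 (* (- x 3) (- x 3))))   ; hypothetical objective, peak at x = 3

(defun hill-climb (start)
  (let ((current start))
    (loop
      (let* ((left (1- current))
             (right (1+ current))
             ;; pick the better of the two neighbours
             (best (if (> (objective left) (objective right)) left right)))
        ;; stop at a local maximum: no neighbour is strictly better
        (when (<= (objective best) (objective current))
          (return current))
        (setf current best)))))

;; (hill-climb 0) ; => 3
```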

best first search & A* algorithm-


Best-First Search:

 Concept: A general class of search algorithms that prioritize exploring states with a
higher evaluation based on a heuristic function. It iteratively expands the most
promising node according to the evaluation function, aiming to find the goal state
efficiently.
 Implementation:
1. Start with an initial state.
2. Evaluate all neighbouring states using the heuristic function.
3. Add the neighbouring states to a priority queue ordered by their evaluation.
4. Take the most promising state from the queue and repeat steps 2-4 until the goal state is reached or the queue is empty.

Strengths of Best-First Search:

 More Efficient than Exhaustive Search: the heuristic steers exploration toward promising states instead of enumerating all of them.
 Flexible: any reasonable evaluation function can be plugged in.

Weaknesses of Best-First Search:

 Can Be Incomplete: a misleading heuristic can cause the search to miss the goal.
 Can Get Stuck in Loops: without tracking visited states, it may revisit the same states repeatedly.



A* Search Algorithm:

 Concept: A specific type of best-first search that combines two key elements:
o Heuristic Function (h(n)): Estimates the cost of reaching the goal state from
a particular state (n).
o Cost Function (g(n)): Represents the cost of reaching state (n) from the
starting state.
 Evaluation Function (f(n)): A* uses a combined evaluation function f(n) = g(n)
+ h(n). This function considers both the cost traveled so far (g(n)) and the estimated
cost to reach the goal (h(n)).
 A* prioritizes exploring states with the lowest f(n) value, aiming to find the most
efficient path to the goal that minimizes the total cost.
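A small LISP illustration of this selection rule; the node names and costs are hypothetical:
```lisp
;; Each open node is (name g h): cost so far and heuristic estimate.
(defparameter *open-nodes* '((p 3 4) (q 5 1) (r 2 6)))

(defun f-value (node)
  (+ (second node) (third node)))     ; f(n) = g(n) + h(n)

;; A* expands the open node with the lowest f value.
(defun best-node (nodes)
  (reduce (lambda (a b) (if (< (f-value a) (f-value b)) a b)) nodes))

;; (best-node *open-nodes*) ; => (Q 5 1), since f(Q) = 6 is the minimum
```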

Strengths of A* Search:

 Optimality: with an admissible heuristic (one that never overestimates), A* finds an optimal solution.
 Efficiency: it typically expands far fewer nodes than uninformed search.

Weaknesses of A* Search:

 Reliance on an Admissible Heuristic: a poor or inadmissible heuristic can destroy optimality or efficiency.
 Computational Overhead: storing and ordering the open list can be memory- and time-intensive for large problems.

Choosing Between Best-First Search and A*:

 Use Best-First Search when:


o You have a general heuristic function that can guide the search but might not
be perfect.
o Optimality is not the primary concern, and finding a "good enough" solution
quickly is important.
 Use A* Search when:
o You can define an admissible heuristic function that accurately estimates the
cost to the goal.
o Finding the optimal solution (shortest path, minimum cost) is essential.

AND / OR graphs,

The AO* algorithm performs best-first search over AND-OR graphs. The AO*
method divides a given difficult problem into a smaller set of sub-problems
that are then solved using the AND-OR graph concept.
AND-OR graphs are specialized graphs used for problems that can be divided
into smaller problems. The AND side of the graph represents a set of tasks
that must all be completed to achieve the main goal, while the OR side of the
graph represents alternative methods for accomplishing the same main goal.

The start state and the target state are already known in the
knowledge-based search strategy known as the AO* algorithm,
and the best path is identified by heuristics. The informed
search technique considerably reduces the algorithm’s time
complexity. The AO* algorithm is far more effective in
searching AND-OR trees than the A* algorithm.
Working of AO* algorithm:
The evaluation function in AO* is:

f(n) = g(n) + h(n) (actual cost so far + estimated cost to go)

where:
f(n) = the estimated total cost of a solution through node n,
g(n) = the actual cost from the initial node to the current node, and
h(n) = the estimated cost from the current node to the goal state.

Difference between the A* Algorithm and AO* algorithm



 The A* algorithm and the AO* algorithm both work on best-first search.
 Both are informed search methods that work on given heuristic values.
 A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
 When compared to the A* algorithm, the AO* algorithm uses less memory.

UNIT-4
Natural Language Processing: Parsing techniques-
In Natural Language Processing (NLP), parsing is the fundamental
process of analysing a sentence's grammatical structure. It breaks
down a sequence of words into its constituent parts and explores the
relationships between them. Here's a breakdown of key parsing
techniques:
1. Rule-Based Parsing:
 Rule-based parsing techniques use a set of grammatical
rules to parse sentences.
 These rules are typically based on linguistic theories, such
as context-free grammars (CFGs) or phrase structure
grammars.
 Examples of rule-based parsing algorithms include top-
down parsing, bottom-up parsing, and Earley parsing.
2. Dependency Parsing:
 Dependency parsing focuses on identifying the syntactic
dependencies between words in a sentence.



 Popular dependency parsing algorithms include transition-
based parsing, graph-based parsing, and neural network-
based parsing.
3. Statistical Parsing:
 Statistical parsing techniques use statistical models
trained on large annotated corpora to parse sentences.
 Statistical parsing can handle ambiguity and variation in
natural language more effectively than rule-based
approaches.
 Common statistical parsing models include probabilistic
context-free grammars (PCFGs), maximum entropy
models, and neural network-based models like recurrent
neural networks (RNNs) or transformers.
4. Chart Parsing:
 Chart parsing is a dynamic programming technique used
to efficiently parse sentences by avoiding redundant
computations.
 Chart parsing is often used in conjunction with other
parsing techniques, such as rule-based or statistical
parsing.
5. Hybrid Parsing:
 Hybrid parsing techniques combine multiple parsing
approaches to leverage their respective strengths.

recursive transition nets (RTN)-


Recursive Transition Networks (RTNs): A Powerful Tool for Parsing Complex
Structures

Recursive Transition Networks (RTNs) are a type of automaton used in Natural


Language Processing (NLP) to represent the structure and meaning of sentences.
They offer a powerful way to handle the recursive nature of language, where phrases
can be embedded within other phrases.
Structure:



 Nodes: Represent states in the parsing process. Each node can be labeled
with a category (e.g., noun phrase, verb phrase) or a specific word.
 Arcs: Directed edges connecting nodes. They are labeled with:
o Output symbols.
o Conditions.
 Start Node: A designated node where the parsing process begins.
 Pop Arc: A special arc that indicates the successful completion of a sub-
phrase and allows backtracking to higher levels of the parse tree.
Recursion in RTNs:
 An RTN's key strength lies in its ability to handle recursion. An arc can be
labelled with the category of a sub-phrase, leading back to the start node.
This allows for nested structures, like noun phrases within verb phrases (e.g.,
"the red car that John saw").
Parsing with RTNs:
1. Start at the designated start node.
2. Follow outgoing arcs based on the expected word category or symbol.
3. If a word matches the output symbol on the arc, traverse that arc.
4. If an arc has a condition, it must be satisfied before traversing.
Benefits of RTNs:
 Expressive Power: Can handle complex and nested structures in natural
language.
 Flexibility: Can be adapted to represent different grammatical formalisms.
 Visual Representation: The network structure provides a clear view of the
parsing process.
Applications of RTNs:
 Parsing
 Machine Translation
 Natural Language Understanding
Limitations of RTNs:
 Complexity: Designing and implementing RTNs for complex languages can
be challenging.
 Efficiency: Parsing with RTNs can be computationally expensive for very
long or ambiguous sentences.

augmented transition nets (ATN)-


Augmented Transition Networks (ATNs): Enhanced Power for Parsing Natural
Language

Augmented Transition Networks (ATNs) are a powerful evolution of Recursive


Transition Networks (RTNs), specifically designed to address some limitations and
enhance parsing capabilities in Natural Language Processing (NLP). Here's a
breakdown of ATNs and how they build upon RTNs:
Building on RTNs:
ATNs inherit the core structure of RTNs, including nodes, arcs, labels, and the
concept of recursion for handling nested structures in sentences. However, they
introduce three key enhancements:
1. Registers: These are temporary storage units within the network.



2. Arc Tests: In addition to output symbols and conditions, arcs in ATNs can
have associated tests.
3. Arc Actions: Actions can be attached to arcs. When an arc is traversed, the
associated action is executed.
Benefits of ATNs over RTNs:
 Increased Expressive Power: registers and arc tests let ATNs capture context-sensitive constraints, such as subject-verb agreement.
 Flexibility: arc actions allow structures (e.g., a parse tree) to be built while parsing.
 State Management: registers carry information across the network during the parse.
Applications of ATNs:
 Natural Language Parsing: Used to analyze the grammatical structure of
sentences, identifying phrases and their relationships.
 Machine Translation: Can help in understanding the source language
sentence structure and generating a more accurate translation in the target
language.
 Natural Language Understanding Systems: Employed in systems that aim
to extract meaning and intent from spoken or written language.
Limitations of ATNs:
 Complexity: Designing and implementing ATNs can be more complex
compared to RTNs due to the additional features.
 Efficiency: Parsing with ATNs can be computationally expensive, especially
with intricate arc tests and actions.

Minimax search procedure-


The Minimax search procedure is a decision-making algorithm
commonly used in two-player games with alternating turns, such as
chess, checkers, or tic-tac-toe. It is designed to determine the best
move for a player by considering the possible outcomes of each
move and choosing the one that minimizes the maximum possible
loss. Here's how the Minimax search procedure works:
1. Tree Representation:
 The game state is represented as a tree, where each node
represents a possible game state, and the edges represent
possible moves.
2. Players:
 There are two players: a maximizing player and a
minimizing player.
 The maximizing player tries to maximize its score (or
minimize the opponent's score), while the minimizing



player tries to minimize its score (or maximize the
opponent's score).
3. Evaluation Function:
 Each leaf node of the tree is evaluated using an evaluation
function, which assigns a score to the state of the game.
4. Recursive Search:
 The Minimax algorithm recursively explores the game tree,
starting from the current state of the game.
 At each level of the tree, the algorithm alternates between
maximizing and minimizing:
5. Backtracking:
 As the algorithm explores deeper into the tree, leaf values are backed up toward the root: each maximizing node takes the maximum of its children's values and each minimizing node takes the minimum, until the root can choose the move with the best backed-up value.
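A minimal minimax sketch in LISP over a hypothetical game tree, where a number is a leaf score (from the evaluation function) and a list is an internal node whose elements are child subtrees:
```lisp
(defun minimax (node maximizing-p)
  (if (numberp node)
      node                                   ; leaf: return its evaluation
      (let ((child-values
              (mapcar (lambda (child) (minimax child (not maximizing-p)))
                      node)))
        (if maximizing-p
            (apply #'max child-values)       ; MAX picks the best for itself
            (apply #'min child-values)))))   ; MIN picks the worst for MAX

;; A depth-2 example: the root is MAX, its children are MIN nodes.
;; (minimax '((3 5) (2 9)) t) ; => 3: MIN yields 3 or 2, MAX picks 3.
```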

alpha-beta cutoffs-
Alpha-beta pruning is a powerful optimization technique used in
conjunction with the minimax search algorithm for two-player zero-
sum games (like chess or checkers). While minimax explores all
possible game states (represented as a tree) to find the optimal
move, alpha-beta pruning significantly reduces the number of nodes
explored, making the search more efficient.
Here's how alpha-beta pruning works:

Minimax Recap:

 Minimax Search: Explores all possible game states (game tree) to find
the move that leads to the best outcome for the maximizing player
(maximizing their score) and the worst outcome for the minimizing
player (minimizing their opponent's score).

Alpha-Beta Pruning:

Introduces two additional values:



 Alpha: Represents the highest score the maximizing player can
currently guarantee at a given point in the search.
 Beta: Represents the lowest score the minimizing player can be forced to
accept at a given point.

Pruning Logic:

1. Max Node: While exploring child nodes:


o If a child node's value is greater than the current alpha, update
alpha to that value (maximizing player can guarantee a better
score).
o If a child node's value is greater than or equal to beta, prune the remaining
child nodes.
2. Min Node: While exploring child nodes:

 If a child node's value is less than the current beta, update beta to that
value (minimizing player can achieve a better score).
 If a child node's value is less than or equal to alpha, prune the
remaining child nodes.

Example:

 Imagine a simplified tic-tac-toe game where X is maximizing and O is


minimizing. Let's say the current board state is:

- | X | -
-------
- | - | -
-------
- | - | O

 We're at a Max Node (X's turn). X can play in two squares (top left or
bottom right).

Without Alpha-Beta Pruning: We would explore all possible continuations of


the game for both squares.

With Alpha-Beta Pruning:

 We explore the top left move first. Let's say it leads to a terminal state
where X wins (value = +1). We set alpha to +1 (guaranteed win for X).
 Since +1 is the best score achievable in the game, no other move can do better, so the bottom right branch can be pruned without being explored.
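A minimal sketch extending the minimax idea with alpha-beta cutoffs, using the same hypothetical tree format (a number is a leaf score, a list is a node of child subtrees):
```lisp
(defun alpha-beta (node maximizing-p &optional (alpha most-negative-fixnum)
                                               (beta most-positive-fixnum))
  (if (numberp node)
      node
      (if maximizing-p
          (loop for child in node
                do (setf alpha (max alpha (alpha-beta child nil alpha beta)))
                ;; cutoff: the MIN node above will never allow a value >= beta
                when (>= alpha beta) return beta
                finally (return alpha))
          (loop for child in node
                do (setf beta (min beta (alpha-beta child t alpha beta)))
                ;; cutoff: the MAX node above will never allow a value <= alpha
                when (<= beta alpha) return alpha
                finally (return beta)))))

;; (alpha-beta '((3 5) (2 9)) t) ; => 3, and the 9 leaf is never examined
```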



UNIT-5

Probabilistic Reasoning and Uncertainty Probability theory


Probability theory is the branch of mathematics concerned with the
analysis of random events.
Probabilistic reasoning is the process of using probability theory to
reason about uncertain events or propositions. It is a way of quantifying
the likelihood of different outcomes and making decisions based on
those likelihoods.
In real-world applications, probabilistic reasoning is used in various
ways, such as:
1. Predictive Modeling: Predicting future events or outcomes
based on historical data and probabilistic models. This is
commonly used in fields like finance, weather forecasting, and
healthcare.
2. Decision Making: Making decisions under uncertainty by
evaluating the expected utility of different actions.
3. Risk Assessment: Assessing and managing risk by quantifying
the likelihood and potential impact of different risks.
Probabilistic risk assessment is used in fields like engineering,
project management, and insurance.
4. Machine Learning: Many machine learning algorithms, such as
Bayesian networks, hidden Markov models, and Gaussian
processes, rely on probabilistic reasoning to model uncertainty
and make predictions from data.
5. Natural Language Processing: Probabilistic models are used in
natural language processing tasks such as speech recognition,
machine translation, and text classification to deal with
uncertainty inherent in language.



Uncertainty:
Uncertainty in probability theory refers to the lack of complete
knowledge or determinism about the outcomes of a particular event or
process. Probability theory provides a formal framework for quantifying
and reasoning about this uncertainty.

So, to represent uncertain knowledge, where we are not sure about the
predicates, we need uncertain reasoning or probabilistic reasoning.

Causes of uncertainty:

The following are some leading causes of uncertainty in the real world.

1. Information obtained from unreliable sources
2. Experimental errors
3. Equipment faults
4. Temperature variation
5. Climate change

bayes theorem and bayesian networks,


Bayes' theorem is a fundamental concept in probability theory, named
after the Reverend Thomas Bayes. It provides a way to update our beliefs
about the probability of a hypothesis given new evidence.
Mathematically, Bayes' theorem is stated as:
P(H∣E) = (P(E∣H) × P(H)) / P(E)

Where:
 P(H∣E) is the probability of hypothesis H being true given the
evidence E.
 P(E∣H) is the probability of observing evidence E given that
hypothesis H is true.



 P(H) is the prior probability of hypothesis H being true (before
considering the evidence).
 P(E) is the probability of observing evidence E (also known as the
marginal likelihood or evidence).
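A small worked example with hypothetical numbers: suppose a disease affects 1% of a population, so P(H) = 0.01; a test detects it with probability P(E∣H) = 0.9; and the overall rate of positive tests is P(E) = 0.05. Then P(H∣E) = (0.9 × 0.01) / 0.05 = 0.18, i.e., even after a positive test, the probability of actually having the disease is only 18%.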
Bayesian networks, also known as belief networks or probabilistic
graphical models, are a powerful tool for representing and reasoning
under uncertainty. They provide a graphical way to encode probabilistic
relationships between variables in a system. A Bayesian network consists
of two main components:
1. Nodes: Nodes represent random variables in the system, such as
events, states, or observations.
2. Edges: Edges represent probabilistic dependencies between
variables. An edge from node A to node B indicates that the
probability distribution of B depends on the value of A.
Applications of Bayesian networks include diagnostic systems in
healthcare, risk assessment in engineering, natural language processing,
and many other fields where uncertainty needs to be modelled and
reasoned about systematically.

certainty factor.
Certainty Factor (CF) is another approach for dealing with uncertainty in
knowledge-based systems, particularly in expert systems. It offers a
simpler alternative to probability theory in some situations. Here's a
breakdown of certainty factors:

What is a Certainty Factor?

 A numerical value between -1 and 1 that represents the degree of


belief in a hypothesis given some evidence.
 A CF of 1 indicates strong belief that the hypothesis is true based
on the evidence.
 A CF of -1 indicates strong belief that the hypothesis is false based
on the evidence.



 Values between -1 and 1 represent varying degrees of uncertainty,
with positive values favouring the hypothesis and negative values
favouring its negation.

Combining Certainty Factors:

There are different methods for combining certainty factors, but a


common approach is:

 CF(H | E) = CF(H) * CF(E | H)


o CF(H | E): Certainty factor of hypothesis (H) given evidence (E)
(what we want to find)
o CF(H): Prior certainty factor of hypothesis (H) (initial belief)
o CF(E | H): Certainty factor of evidence (E) given hypothesis (H)
(how much evidence supports H)

This approach considers both the initial belief in the hypothesis (prior CF)
and the strength of the evidence supporting it (CF of evidence given the
hypothesis).
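For example, using the combination rule above with hypothetical values: if the prior belief in a hypothesis is CF(H) = 0.8 and the evidence supports it with CF(E | H) = 0.6, then CF(H | E) = 0.8 × 0.6 = 0.48, a moderate positive belief in the hypothesis.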

Introduction to expert system and application of expert systems,

An expert system is a computer program that is designed to solve


complex problems and to provide decision-making ability like a human
expert. It performs this by extracting knowledge from its knowledge base
using the reasoning and inference rules according to the user queries.



Characteristics of Expert System

o High Performance: The expert system provides high performance


for solving any type of complex problem of a specific domain with
high efficiency and accuracy.
o Understandable: It responds in a way that can be easily
understandable by the user.
o Reliable: It is highly reliable for generating efficient and accurate output.
o Highly responsive: ES provides the result for any complex query
within a very short period of time.

Applications of Expert System

o In designing and manufacturing domain


o In the knowledge domain
o In the finance domain
o In the diagnosis and troubleshooting of devices
o Planning and Scheduling

knowledge acquisition
Knowledge acquisition is the process of gathering, structuring, and
organizing information and expertise from various sources to create a
knowledge base. This knowledge base can then be used by different



systems, like expert systems, to solve problems, answer questions, or
make decisions.

Here's a breakdown of the key aspects of knowledge acquisition:

Importance:

 Knowledge acquisition is crucial for building intelligent systems


that can learn and reason effectively.
 It ensures the system has access to accurate and relevant domain
knowledge for performing its tasks.

Sources of Knowledge:

 Human Experts: Domain experts are a primary source of


knowledge.
 Textual Data: Existing documents, manuals, research papers, and
other textual resources can be mined for relevant information.
 Databases: Existing databases containing relevant data can be a
valuable source of knowledge.



MYCIN: It was one of the earliest backward chaining expert systems that was
designed to find the bacteria causing infections like bacteraemia and meningitis. It
was also used for the recommendation of antibiotics and the diagnosis of blood
clotting diseases.

UNIT-3
Knowledge Representations: First-order predicate calculus,
First-order predicate calculus (FOL), also known as first-order logic, is a
symbolic language used for knowledge representation in artificial intelligence.
It extends propositional logic by introducing variables and quantifiers, allowing
it to represent general statements about the world.

Here are the key elements of FOL:

 Predicates: Represent relations between objects. A predicate can take


multiple arguments (e.g., Likes(x, y) represents that x likes y).
 Terms: Represent objects or entities in the world. These can be variables
(x, y, z), constants (a specific name like Alice), or functions (f(x)).
 Atomic Formulas: Are the simplest statements, formed by a predicate
followed by its arguments (e.g., Likes(Alice, Bob)). They can be true or
false.
 Connectives: Allow us to combine atomic formulas into more complex
ones. These include the familiar AND, OR, and NOT, as well as
implication (→) and equivalence (↔).
 Quantifiers: Allow us to express statements about all or some elements
in a domain.



FOL allows for the expression of complex knowledge about the world.
For example, we can represent the statement "All cats are animals" as: ∀x
(Cat(x) → Animal(x))

FOL is a powerful tool for knowledge representation because it is:

 Expressive: It can represent a wide range of knowledge, including facts,


rules, and relationships.
 Declarative: It focuses on what is true, rather than how to achieve it.
 Logical: It allows for reasoning and inference based on the knowledge
represented.

skolemization,
In artificial intelligence and logic programming, Skolemization is a
process used to eliminate existential quantifiers (∃) from logical formulas.
It is a technique often applied in first-order logic and predicate logic.

Existential quantifiers (∃) express the existence of an object that satisfies a


certain property.

Skolemization aims to remove these existential quantifiers by


introducing Skolem functions or Skolem constants.

Here's a breakdown of how it works:

 Skolem functions: These are new function symbols introduced


during the skolemization process. They take variables from
universally quantified terms that appear before the existential
quantifier being eliminated as arguments.
 Skolem constants: In simpler cases, a constant symbol can be
introduced instead of a function, especially when there are no
universally quantified variables in scope.

By introducing Skolem functions/constants, skolemization essentially


replaces the statement "there exists something" with a specific term
constructed using the function/constant and existing variables.

There are a few reasons why skolemization is useful:



 Reasoning automation: Certain automated reasoning systems can
only handle formulas without existential quantifiers. Skolemization
helps prepare formulas for these systems.
 Model theory: In model theory, a branch of mathematical logic,
skolemization can be used to establish relationships between
logical formulas and their interpretations (models).

Example:

Consider the statement "There exists a person who is a doctor."

Using Skolemization, we can convert this statement into an equivalent


form without existential quantifiers. Let's say "D(x)" represents "x is a
doctor" and "P(x)" represents "x is a person".

Original statement: ∃x (P(x) ∧ D(x))

After Skolemization: P(c) ∧ D(c)

Because no universal quantifier appears before the existential quantifier, a
Skolem constant "c" is introduced; it names the specific individual who is
both a person and a doctor. If the existential quantifier fell inside a
universal one, as in ∀y ∃x Likes(y, x), a Skolem function would be used
instead: ∀y Likes(y, f(y)), where f(y) supplies a witness for each y.

resolution principle & unification,


The resolution principle and unification are fundamental concepts in
automated theorem proving, particularly in resolution-based methods such as
the resolution refutation method.

Resolution Principle:

Resolution is used when various statements are given and we need to prove a
conclusion from those statements. Unification is a key concept in proofs by
resolution. Resolution is a single inference rule which can efficiently
operate on formulas in conjunctive normal form (clausal form).

Here's a simplified outline of the resolution principle:

1. Select Clauses: Choose two clauses that contain complementary literals. That is, one clause contains a literal, and the other contains the negation of that literal.

2. Resolve: Apply the resolution inference rule by removing the complementary literals from the two selected clauses and merging the remaining literals into a new clause.

3. Result: The new clause obtained after resolution is called the resolvent.

4. Repeat: Repeat the process of selecting and resolving clauses.
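A small worked example: given the clauses (P ∨ Q) and (¬Q ∨ R), the literals Q and ¬Q are complementary, so resolving the two clauses yields the resolvent (P ∨ R).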

Unification:

Unification is a process used to find a substitution that makes two terms


identical. In the context of automated theorem proving and logic
programming, unification is particularly important for resolving clauses
with variables. The most general unifier (MGU) is the most general
substitution that makes two terms identical.
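For example, the literals Likes(x, Bob) and Likes(Alice, y) unify under the substitution {x/Alice, y/Bob}, which is also their most general unifier.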

How they work together:



1. Clause Selection: The resolution principle starts by selecting two
clauses from the knowledge base.
2. Unification: These clauses are then checked for complementary
literals. Unification is used to find a substitution that makes the
complementary literals identical.

Unification is crucial for resolution-based theorem proving because it


allows for the instantiation of variables to resolve clauses. It is also
essential in logic programming languages like Prolog, where it is used to
match predicates with their arguments.

Horn clauses,
Horn clauses are a specific type of logical formula used in various fields like
mathematical logic, logic programming, and knowledge representation. They
have a particular structure that makes them useful for automated reasoning and
efficient manipulation within computer systems.

Here's a breakdown of key aspects of Horn clauses:

Structure:

 A Horn clause is a disjunction (OR) of literals.


 A literal is either a positive predicate (unnegated) or a negative predicate
(negated with NOT).
 Crucially, a Horn clause can have at most one positive literal. This is
the defining characteristic.

There are three main types of Horn clauses based on the number and type of
literals:

1. Definite Clause (Strict Horn Clause): Exactly one positive literal and
zero or more negative literals.
2. Goal Clause: Only negative literals (no positive literal).
3. Horn Clause (General): At most one positive literal and zero or more
negative literals. (This is the general category encompassing definite and
goal clauses)
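For example, the rule "every dog is an animal" is the definite clause ¬Dog(x) ∨ Animal(x), whose single positive literal is Animal(x); the query "is there a dog?" corresponds to the goal clause ¬Dog(x). In Prolog notation the definite clause is written animal(X) :- dog(X).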

Properties and Uses:



 Horn clauses have several properties that make them attractive for logic
programming and automated reasoning:
o Efficiency: Reasoning with Horn clauses can be computationally
more efficient than with general first-order logic formulas.
o Declarative: They focus on what is true, rather than how to
achieve it, making them suitable for knowledge representation.
o Executable: In Prolog, a logic programming language, Horn
clauses act as program rules that can be directly executed to answer
queries and solve problems.
 Applications: Horn clauses are used in various domains:
o Logic programming
o Deductive databases
o Automated theorem proving

Overall, Horn clauses are a fundamental concept in logic programming and


knowledge representation.

semantic networks,
Semantic networks are a type of knowledge representation method used
in artificial intelligence (AI) to model the relationships between concepts.
They visualize knowledge as a web of interconnected nodes and edges,
where:

 Nodes: Represent concepts, entities, or objects in the world. These


can be anything from animals (dog, cat) to actions (run, jump) or
even abstract ideas (love, happiness).
 Edges: Represent the relationships between the nodes. These
edges can be labelled to specify the nature of the connection.
Common types of relationships include:
o Is-a (isa): Represents a hierarchical relationship, where one
concept inherits properties from another (e.g., Dog isa
Mammal).
o Has-a: Indicates a possession relationship between concepts
(e.g., Car has-a Wheel).
o Part-of: Represents a composition relationship, where one
concept is a component of another (e.g., Engine Part-of Car).



Applications of semantic networks:

 Natural Language Processing (NLP): Semantic networks can help


understand the meaning of words and sentences by capturing the
relationships between them. This is crucial for tasks like machine
translation, sentiment analysis, and question answering.
 Knowledge Representation: They provide a structured way to
organize and store knowledge in various fields, including medicine,
biology, and even commonsense reasoning.

In a semantic network diagram, the different types of knowledge are
represented in the form of nodes and arcs, with each object connected to
other objects by some relation.

conceptual dependency.
Conceptual Dependency (CD) theory, developed by Roger Schank, is a
model for understanding natural language meaning in artificial
intelligence. It focuses on representing the core meaning of a sentence,
rather than its surface grammar. Here are the key aspects of CD theory:

Basic Building Blocks:



 Conceptual primitives: These are fundamental actions, events,
and mental states that form the building blocks of meaning.
 Conceptual roles: These specify the participants involved in a
conceptual primitive (for example, the actor and the object of an action).

Advantages of CD:

 Language independence: CD allows for understanding and


reasoning across different languages.
 Focus on meaning: It goes beyond syntax to capture the
underlying meaning of a sentence. This is crucial for tasks like
machine translation and question answering.

Limitations of CD:

 Complexity: Mapping sentences onto a small set of primitives can be intricate.
 Limited scope: A fixed set of primitives cannot capture every nuance of meaning.
 Knowledge acquisition: Populating a knowledge base with
conceptual dependencies can be challenging.
