Unit1- Chp1.
Q1. What is Artificial Intelligence (AI)?
Artificial Intelligence (AI) is the field of computer science that focuses on creating machines that can perform tasks
that typically require human intelligence.
There are four main approaches to defining AI:
1. Acting Humanly –
AI tries to imitate human behavior, especially in communication.
The Turing Test checks whether a machine's responses are indistinguishable from a human's.
It requires language understanding, reasoning, learning, and perception.
Eg- A chatbot like ChatGPT or a virtual assistant (e.g., Siri) that can hold a natural conversation and answer
questions like a human.
2. Thinking Humanly –
AI aims to think like humans by studying how the human brain works using introspection, psychology,
and brain imaging.
This approach is linked to cognitive science.
Eg- The General Problem Solver (GPS) developed by Newell & Simon, which mimics human problem-solving steps.
3. Thinking Rationally –
Based on logic and reasoning, this approach tries to make machines think in a perfectly logical
way (like solving problems with correct steps, e.g., syllogisms).
It focuses on making logically correct decisions.
Eg- IBM Watson uses logic and rules to diagnose diseases and recommend treatments based on patient
data.
4. Acting Rationally –
This is the most common approach today. AI agents aim to do the right thing to get the best result, even
when dealing with uncertainty.
It combines learning, reasoning, and adapting to new situations.
Eg- A self-driving car that makes real-time decisions to navigate traffic safely, considering road conditions,
obstacles, and destination.
Q2. What is the Foundation of Artificial Intelligence (AI)?
The foundation of Artificial Intelligence is built on several interdisciplinary fields.
These fields provide the theories, models, techniques, and principles needed to create machines that can mimic or
simulate intelligent behavior.
There are 7 core foundations of AI:
1. Mathematics
Mathematics forms the backbone of AI, offering tools and frameworks to model and solve problems.
Helps machines deal with uncertainty and make predictions (e.g., weather forecasting, spam filtering).
Crucial for machine learning algorithms and neural networks.
Example: Predicting housing prices using regression (a statistical method)
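The regression example can be sketched in a few lines of plain least squares; the housing data below is made up purely for illustration:

```python
# Fit a line price = w * area + b by ordinary least squares (illustrative data).
areas = [50.0, 80.0, 100.0, 120.0]      # square metres
prices = [150.0, 240.0, 300.0, 360.0]   # price in thousands (here exactly 3 * area)

n = len(areas)
mean_x = sum(areas) / n
mean_y = sum(prices) / n
# Slope = covariance(x, y) / variance(x); intercept from the means.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, prices)) / \
    sum((x - mean_x) ** 2 for x in areas)
b = mean_y - w * mean_x

def predict(area):
    return w * area + b

print(predict(90.0))  # 270.0 for this perfectly linear data
```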
2. Computer Science
CS provides the programming, data structures, and algorithms needed to implement AI systems.
Enables step-by-step procedures for solving tasks (e.g., A* search, sorting).
Example: Developing a chess-playing bot using the minimax algorithm.
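The minimax idea can be sketched on a toy two-ply game tree (the payoff values are hypothetical; a real chess bot would generate this tree from board positions):

```python
# Minimax on a tiny game tree: a node is either a number (leaf payoff)
# or a list of child nodes. MAX picks the largest value, MIN the smallest.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX chooses a branch; MIN then forces the worst leaf in that branch.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # 3 (MIN forces 3, 2, 2 per branch; MAX takes the 3)
```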
3. Psychology
AI draws inspiration from human thinking and behavior, as studied in psychology.
It helps us understand how people learn, reason, and solve problems.
Example: Creating virtual tutors that adapt to students’ learning styles.
4. Neuroscience
Neuroscience studies the structure and functioning of the brain, influencing neural network development.
Deep Learning models are inspired by how neurons and synapses work.
Helps model learning and memory processes in machines.
Example: DeepMind’s AlphaGo, which uses deep neural networks trained like the human brain.
5. Linguistics
Linguistics is the study of language, crucial for Natural Language Processing (NLP) in AI.
Helps machines understand, interpret, and generate human language.
Involves syntax (grammar), semantics (meaning), and pragmatics (context).
Example: Google Translate or ChatGPT generating human-like responses.
6. Philosophy
It addresses fundamental questions about intelligence, consciousness, and ethics.
Contributes to logic and reasoning used in AI systems and guides the development of ethical AI.
Example: Isaac Asimov’s Three Laws of Robotics, a philosophical approach to AI ethics.
7. Control Theory and Cybernetics
Control Theory focuses on how systems respond to feedback to reach desired goals.
Cybernetics studies how systems (biological or mechanical) self-regulate using feedback loops.
Example: A self-driving car adjusting speed based on traffic using sensors and control systems.
Q3. What is the history of Artificial Intelligence (AI)?
1943: Warren McCulloch and Walter Pitts proposed the first mathematical model of artificial neurons.
1950: Alan Turing published "Computing Machinery and Intelligence", introducing the Turing Test, and
discussing machine learning and genetic algorithms.
1956: John McCarthy organized the Dartmouth Conference, coining the term “Artificial Intelligence.” It
included pioneers like Minsky, Newell, and Simon.
1958–1969: Early AI programs like the Logic Theorist and General Problem Solver were developed. Arthur
Samuel’s checkers program introduced reinforcement learning. John McCarthy created the LISP language and
proposed the Advice Taker.
1970s–1980s: AI shifted to expert systems like MYCIN and DENDRAL, which used domain-specific knowledge.
The AI Winter began due to high expectations and limitations in hardware and algorithms.
Mid-1980s: Neural networks returned with backpropagation; connectionist models gained popularity.
Late 1980s–2000s: AI embraced probabilistic reasoning and machine learning. Judea Pearl developed
Bayesian networks, and reinforcement learning linked with Markov Decision Processes.
2000s: Rise of big data improved AI performance significantly, enabling advancements in speech recognition,
translation, and computer vision.
2011–Present: The era of deep learning began, using neural networks with many layers. Breakthroughs like
ImageNet (2012) and AlphaGo (2016) showcased AI surpassing human performance in several tasks.
Q4. What are the risks and benefits of AI?
Benefits of AI
1. Efficiency & Automation- AI performs repetitive tasks faster and without fatigue (e.g., manufacturing, data
entry).
2. 24/7 Availability - AI systems can operate continuously without breaks, unlike humans.
3. Accurate Decision Making - AI helps in data-driven decisions (e.g., medical diagnosis, fraud detection).
4. Personalization - AI powers personalized services (e.g., Netflix recommendations, targeted ads).
5. Innovation & Discovery - AI assists in scientific research, drug discovery, and space exploration.
6. Cost Reduction - Long-term use of AI systems reduces labor and error-related costs.
Risks of AI
1. Job Displacement - Automation may lead to unemployment in routine and low-skill sectors.
2. Bias & Discrimination - AI can inherit biases from training data, leading to unfair decisions.
3. Loss of Privacy - AI-enabled surveillance and data tracking threaten individual privacy.
4. Security Threats - AI can be misused for cyberattacks, deepfakes, or autonomous weapons.
5. Overdependence - Relying too much on AI can reduce human critical thinking and decision-making skills.
6. Unaligned Objectives - If AI systems misinterpret goals, they can act in unintended or harmful ways (value
alignment problem).
Unit1- Chp2.
Q1. AI Agents and Environments-
An agent is anything that perceives its environment through sensors and acts upon it using actuators.
Human agent: eyes/ears (sensors), hands/legs (actuators)
Robot agent: cameras/sensors (sensors), motors (actuators)
Software agent: gets input from files, network, user; acts by sending data or displaying info
The environment is the part of the world the agent interacts with. In principle it could be everything, even the entire
universe. In practice it is just the part of the universe whose state we care about when designing the agent: the part that
affects what the agent perceives and that is affected by the agent's actions.
We use the following terms:
A percept is what the agent senses at a time.
A percept sequence is the complete history of percepts.
The agent function maps percept sequences to actions.
The agent program is the actual code implementing the agent function.
Eg- Vacuum Cleaner Agent:
Two squares: A and B (either clean or dirty).
Percepts: current location and dirt status.
Actions: move left, move right, suck dirt, do nothing.
A simple agent rule:
If square is dirty → suck; else → move to other square.
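The rule above can be written directly as a small Python function (an illustrative sketch of the two-square vacuum world, not a standard implementation):

```python
# Simple reflex vacuum agent for the two-square world.
# A percept is a (location, status) pair, e.g. ('A', 'Dirty').
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'                         # rule: dirty square -> suck
    return 'Right' if location == 'A' else 'Left'  # else move to the other square

print(reflex_vacuum_agent(('A', 'Dirty')))   # Suck
print(reflex_vacuum_agent(('A', 'Clean')))   # Right
print(reflex_vacuum_agent(('B', 'Clean')))   # Left
```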
Q2. Good Behavior: The Concept of Rationality
In Artificial Intelligence, a rational agent is one that does the right thing to maximize its performance based on
the information it receives.
AI uses a concept called consequentialism, which means an agent’s actions are judged by their consequences.
A performance measure evaluates how successful the agent is in achieving its goals.
For example, in a vacuum-cleaning robot, measuring performance by how long the floor stays clean is better than
just how often it cleans.
A rational agent makes decisions based on:
1. The performance measure.
2. The agent’s prior knowledge about the environment.
3. The possible actions it can take.
4. The percept sequence—i.e., everything the agent has sensed so far.
A rational agent does not need to be perfect or always successful, but it should make the best decision with the
information it has. Rationality ≠ Omniscience.
An intelligent agent should learn from experience to improve its behavior. Over time, it can become more
independent from the knowledge programmed by its designer. This is called autonomy.
For example, a robot that learns where dirt usually appears will clean more efficiently than one that follows fixed
rules. In contrast, insects like the sphex wasp repeat the same steps even when the environment changes,
showing lack of learning and rationality.
Q3. Nature of Environments in AI
To build a rational agent, we must understand the task environments, which are essentially the "problems"
to which rational agents are the "solutions."
The nature of the task environment directly affects the appropriate design for the agent program.
This is described using the PEAS framework- The PEAS framework is used to specify the task environment
for an intelligent agent. It helps in designing and analyzing agents by clearly defining four components:
1. P – Performance Measure
o It defines the success criteria for the agent’s behavior.
o Example: For a self-driving car – safety, speed, passenger comfort, obeying traffic rules.
2. E – Environment
o The surroundings in which the agent operates and interacts.
o Example: Roads, traffic, pedestrians, and weather for a taxi agent.
3. A – Actuators
o The mechanisms through which the agent takes actions or affects the environment.
o Example: Steering wheel, accelerator, brakes in a taxi.
4. S – Sensors
o The devices that collect information from the environment.
o Example: Cameras, GPS, speedometer for a taxi agent.
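The PEAS description above can be recorded as a simple data structure; the taxi values come from the examples in the list, and the dict layout is just one possible representation:

```python
# PEAS specification of the automated-taxi task environment as a plain dict.
taxi_peas = {
    "performance": ["safety", "speed", "passenger comfort", "obeying traffic rules"],
    "environment": ["roads", "traffic", "pedestrians", "weather"],
    "actuators":   ["steering wheel", "accelerator", "brakes"],
    "sensors":     ["cameras", "GPS", "speedometer"],
}

# Print each PEAS component on one line.
for component, items in taxi_peas.items():
    print(f"{component}: {', '.join(items)}")
```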
Properties of Task Environments
Task environments define the surroundings and conditions under which an AI agent operates. Understanding their
properties helps in designing suitable agents. The main properties are:
1. Observable
o Fully Observable: The agent has complete access to all relevant information about the environment.
Example: Chess.
o Partially Observable: The agent only has partial information due to noisy or limited sensors.
Example: Taxi driving.
2. Agents
o Single-Agent: Only one agent is acting in the environment.
Example: Crossword puzzle.
o Multi-Agent: Multiple agents interact, either competitively or cooperatively.
Example: Poker.
3. Determinism
o Deterministic: The next state of the environment is completely predictable.
Example: Sudoku.
o Nondeterministic/Stochastic: Outcomes are uncertain or random.
Example: Real-world driving.
4. Episodic vs. Sequential
o Episodic: Agent’s actions are divided into independent episodes; past actions don’t affect the future.
Example: Image recognition.
o Sequential: Current actions influence future outcomes.
Example: Chess or taxi driving.
5. Static vs. Dynamic
o Static: The environment remains unchanged while the agent decides.
Example: Crossword.
o Dynamic: The environment can change while the agent is thinking.
Example: Real-time traffic or driving.
6. Discrete vs. Continuous
o Discrete: A limited number of distinct states and actions.
Example: Chess (finite board positions).
o Continuous: Infinite states and actions exist.
Example: Controlling a car’s steering.
7. Known vs. Unknown
o Known: The agent has full knowledge of how the environment behaves.
o Unknown: The agent must learn the rules through interaction.
Example: A new video game.
Q4. The Structure of Agents
1. The structure defines how an agent processes percepts and selects actions.
2. It includes the agent program, sensors, actuators, and internal components.
3. An agent consists of two main components:
Agent = Architecture + Program.
Types of Agent Structures-
Simple Reflex Agents
o These agents act solely on the current percept, ignoring the rest of the percept history.
o They use condition-action rules (e.g., "If square is dirty, then Suck").
o Suitable for simple environments but fail in complex or partially observable settings.
o Limitation: They can't deal with situations where the correct action depends on more than just the
current percept.
o Example: Automatic vacuum cleaner that cleans when it detects dirt.
Model-Based Reflex Agents
o Extend simple reflex agents by maintaining an internal state that stores information about the past.
o They require a model of the world—i.e., knowledge about how the environment evolves independently
and in response to actions.
o This allows the agent to make informed decisions even when parts of the environment are not directly
observable.
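A minimal sketch of a model-based reflex agent, assuming the two-square vacuum world from Q1 (the internal model here simply tracks the last known status of each square):

```python
# Model-based reflex vacuum agent: keeps an internal map of square statuses,
# so it can stop (NoOp) once every known square is clean.
class ModelBasedVacuum:
    def __init__(self):
        self.model = {'A': None, 'B': None}    # internal state: last known status

    def act(self, percept):
        location, status = percept
        self.model[location] = status           # update the world model
        if status == 'Dirty':
            return 'Suck'
        if all(s == 'Clean' for s in self.model.values()):
            return 'NoOp'                       # model says nothing is left to do
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedVacuum()
print(agent.act(('A', 'Dirty')))   # Suck
print(agent.act(('A', 'Clean')))   # Right (B's status is still unknown)
print(agent.act(('B', 'Clean')))   # NoOp (model says both squares are clean)
```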
Goal-Based Agents
o These agents go beyond reflexes and internal state; they are driven by goals.
o A goal represents a desirable situation, and the agent takes actions to achieve it.
o They evaluate different possible actions and choose the one that leads toward the goal.
o Goal-based agents allow for decision making and planning.
o Example: Self-driving car that remembers the location of other vehicles and traffic signals.
Utility-Based Agents
o Sometimes, multiple goals may conflict or there may be multiple ways to achieve a goal. In such cases, a
utility function is used.
o A utility function assigns a numeric value (utility) to each possible state, measuring the agent’s level of
satisfaction.
o The agent selects actions that maximize expected utility, allowing for more flexible, rational decision-
making.
o This is especially important in uncertain or complex environments where trade-offs are necessary.
o Example: GPS navigation system that finds the best route to a destination.
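Expected-utility maximization can be sketched with made-up route probabilities and utilities (the numbers below are purely illustrative):

```python
# Utility-based choice: pick the action with the highest expected utility.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

routes = {
    "highway":  [(0.7, 10), (0.3, -5)],   # usually fast, but risk of a jam
    "backroad": [(1.0, 6)],               # slower but fully predictable
}

best = max(routes, key=lambda r: expected_utility(routes[r]))
print(best)  # backroad (EU 6.0 beats the highway's 0.7*10 + 0.3*(-5) = 5.5)
```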
Learning Agents
o A learning agent can improve its performance by learning from experience.
o It includes the following components:
Performance Element: Responsible for selecting actions.
Learning Element: Modifies and improves the performance element based on feedback.
Critic: Provides feedback by comparing agent performance with a standard.
Problem Generator: Suggests new experiences (exploratory actions) to learn from.
o This type of agent is highly adaptable and can operate effectively in unknown or changing environments.
o Example: Email spam filter that improves over time by learning from user feedback.
Simple Reflex vs. Model-Based Reflex Agents

Feature                 | Simple Reflex Agent                         | Model-Based Agent
Decision Basis          | Acts only on the current percept            | Uses current percept + internal state (history)
Memory                  | Has no memory of past percepts              | Maintains memory of past actions/percepts
Environment Suitability | Works well in fully observable environments | Works well in partially observable environments
Complexity              | Simple and easy to implement                | More complex due to internal state handling
Accuracy                | May fail in dynamic or unseen situations    | More reliable in dynamic or changing environments
Example                 | Vacuum agent that sucks if the square is dirty | Vacuum agent that remembers which squares it has already cleaned