
Module 5

1st: AI complements NI
---

## **AI Complements NI – Simple Explanation**

**AI (Artificial Intelligence)** is a tool that helps businesses by processing huge amounts of data using
computer power. It finds patterns in the data but **does not have feelings, emotions, or human
understanding**.

AI can only work with data that can be represented in digital form. Even machine learning (ML) and deep learning (DL)
systems are designed by humans and follow coded instructions. These systems try to imitate human thinking by
finding patterns, but this is not real thinking in the way humans think.

So, **AI can only support or improve Natural Intelligence (NI)** — which is human thinking — but it cannot
replace it. **AI has limits**, but these limits can be compensated for by NI.

---

### **What is NI (Natural Intelligence)?**

**Natural Intelligence (NI)** means human intelligence — our brain’s ability to learn from experience,
understand things deeply, and make smart decisions in complex situations.

AI systems do not have **common sense** like humans. They see patterns but don’t understand what those
patterns mean. For example:

* AI may see patterns in temperature data — whether it's about the weather or a factory machine — but **AI
doesn’t understand the difference** between these two things.

* AI gives results but cannot understand the real-world meaning or context unless a human explains it.

That’s why AI systems should always be **checked and controlled by humans**. One way is to design AI
models that ask many "what-if" questions — but even these need human understanding.

---

### **Why AI and NI are both important together:**

* AI is powerful when it helps humans, **not when it tries to replace human skills**.

* Human abilities like **creativity, leadership, and decision-making based on feelings and changing
situations** are still very important in business.

* AI needs humans to program its tasks based on real-world context — but **this context changes all the
time** (like customer moods or market needs), which only humans can truly understand.

2nd: Explain with a diagram the known–unknown matrix for AI and NI

**Automation: Hard, mono-dimensional data**

Automation works best with simple and clear processes using basic data. Sensors collect data set by humans. AI follows rules to find patterns in this data. Machines don't understand their surroundings or the full system — they just do specific tasks.

AI helps with repeated jobs, like chatbots and robots doing simple work faster and more accurately than people. Machines "learn" from past actions and get faster with time. But they don't truly think like humans. Only well-defined, routine tasks can be automated. Humans are still needed to check and guide the results.

**Experience: Soft, inter-disciplinary**

AI helps predict outcomes, but humans add value through experience. Experience helps connect different things, understand context, and solve problems in creative ways.

**Prediction: Fuzzy, multidimensional data**

Machines are fast at analyzing lots of data and finding patterns. They work with complex (multidimensional) data to make predictions. AI is useful when the situation is stable. But when things change, human experience (Natural Intelligence, NI) is better at making the right decisions.

**Intuition**

Humans have intuition — gut feelings from experience. Intuition helps solve problems in ways AI can't. Artists, doctors, and musicians often rely on it. In business, decisions should combine AI's speed and NI's deep insight. Humans bring creativity and common sense that AI lacks.
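As a compact stand-in for the diagram the question asks for, the four quadrants described above can be placed side by side. This table only restates the text; the column labels are my own summary.

| Quadrant | Kind of data / knowledge | Mainly handled by |
| --- | --- | --- |
| Automation | Hard, mono-dimensional data | AI (well-defined, routine tasks) |
| Prediction | Fuzzy, multidimensional data | AI (stable situations) |
| Experience | Soft, inter-disciplinary knowledge | NI (context, creative problem-solving) |
| Intuition | Gut feelings from experience | NI (problems AI cannot handle) |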


4th: Superimposing NI on AI
---

## **Superimposing NI (Natural Intelligence) on AI (Artificial Intelligence)** — Easy Explanation

For any business or system to make **good and valuable decisions**, it is important to combine **AI with
human thinking (NI)**. When we carefully add **human intelligence at every step of the AI/ML process**, the
results become more accurate, ethical, and useful.

---

### **The AI/ML Process has 4 Main Steps:**

* **Data collection:** choosing the right kind of data for a given ML problem and filtering the varied types of possible biases from the data

* **ML:** allocating the right kind of ML algorithm

* **Prediction:** opening the ML black box to explain causal relationships among inputs and prediction

* **Decision-making:** fully engaging in decision-making
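
A minimal, hypothetical sketch of these four steps as a human-in-the-loop pipeline is shown below. The toy data, the majority-rule "model", and every name in it are made up for illustration; they are not from the source.

```python
"""Illustrative human-in-the-loop sketch of the four AI/ML steps (all names hypothetical)."""

# Step 1 - Data collection: a human reviewer filters out biased records first.
raw_data = [
    {"features": [1, 0], "label": 1, "flagged_as_biased": False},
    {"features": [0, 1], "label": 0, "flagged_as_biased": True},  # e.g. prejudicial bias
    {"features": [1, 1], "label": 1, "flagged_as_biased": False},
]
clean_data = [row for row in raw_data if not row["flagged_as_biased"]]  # NI checkpoint

# Step 2 - ML: a human allocates the algorithm (here, a trivial majority rule).
def train(data):
    ones = sum(row["label"] for row in data)
    return 1 if ones >= len(data) / 2 else 0  # the "model" is just the majority label

model = train(clean_data)

# Step 3 - Prediction: open the black box by reporting why the model predicts this.
def predict_with_explanation(model, case):
    return model, f"predicts {model} because it is the majority label in the reviewed data"

# Step 4 - Decision-making: a person stays fully engaged and can override the AI.
prediction, explanation = predict_with_explanation(model, {"features": [0, 0]})
human_override = None  # a reviewer would set this when the AI's answer looks wrong
final_decision = human_override if human_override is not None else prediction
print(final_decision, "|", explanation)
```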


---

### **Why Human (NI) Involvement is Important:**

* Good decisions are not just about **data** — they must also be **ethical** (right and wrong), **safe**, and
**valuable for customers**.

* Humans think about **the effects of decisions on people, society, and business**, while AI cannot do this
alone.

* These human insights are added into the AI system, making the system better and more responsible.

---

### **How the AI System Learns Over Time:**

* At the start, **humans make most of the decisions**.

* Over time, **AI learns from human decisions** (this is called feedback).

* The system keeps improving by repeating this process:

**Learn → Correct → Relearn → Improve.**

* After many rounds, the AI can make better, fair, and valuable decisions — closer to what a human would do.
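
The loop described above can be pictured with a tiny simulation: in early rounds the AI guesses, the human corrects it, and the corrections are fed back so later rounds agree with the human more often. Everything in the sketch (the cases, the "approve/reject" decisions, the number of rounds) is an assumption for illustration only.

```python
"""Sketch of the Learn -> Correct -> Relearn -> Improve feedback loop (toy example)."""

import random

def ai_decision(case, learned):
    # The AI reuses what it learned from earlier human feedback, else it guesses.
    return learned.get(case, random.choice(["approve", "reject"]))

def human_decision(case):
    # Stand-in for the human (NI) judgment the system should converge towards.
    return "approve" if case % 2 == 0 else "reject"

learned = {}  # feedback collected from the human so far
for round_no in range(1, 6):
    agreements = 0
    for case in range(20):
        ai = ai_decision(case, learned)
        human = human_decision(case)
        if ai == human:
            agreements += 1
        learned[case] = human  # correct: store the human decision as feedback
    print(f"round {round_no}: AI matched the human on {agreements}/20 cases")
```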

5th: Additional Challenges in AI Decision-Making – Easy Explanation
AI, especially Deep Learning (DL), faces many challenges when making decisions. That’s why Natural
Intelligence (NI) — human thinking — must be combined with AI to make better, safer, and more ethical
decisions.

1. Deep Learning (DL) Challenges:

Deep Learning (DL) helps AI find patterns in large data like voice, images, translations, self-driving cars, and
face recognition.

DL works like a human brain with layers and nodes that adjust to learn better (this process is called
backpropagation).

But there are problems:

DL does not understand real meaning or context like humans do.

Example: AI can know what a "bottle" is but doesn’t understand that a "cup" is similar unless taught separately.

DL requires huge amounts of data and computer power to learn.

DL is like a "black box" — its working process is so deep and complex that even humans can’t fully explain
why it makes certain predictions.

DL can break easily when situations change because it cannot adjust like humans who use common sense.
That’s why human intelligence (NI) is needed to guide, correct, and improve AI decisions — especially when
context keeps changing.
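
As a concrete picture of the "layers and nodes that adjust" idea mentioned above, the sketch below trains a tiny two-layer network with backpropagation on a toy problem (XOR). The network size, learning rate, and number of steps are arbitrary choices for the example, not values from the source.

```python
"""Minimal backpropagation sketch: a tiny two-layer network learning XOR."""

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden (8 nodes)
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output node
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: each layer turns its inputs into outputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (backpropagation): the error flows back and the weights adjust.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))  # typically close to the XOR pattern 0, 1, 1, 0
```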

2. Ethical Challenges of AI-Based Decisions:

AI decisions can face ethical problems because AI:

Lacks understanding of human values and feelings.

Can make decisions that may harm people or businesses if not carefully controlled.

Example:

If AI makes all customer decisions without human checks, it might ignore customer emotions, culture, or
changing needs — which can be bad for the business and society.

Main ethical risks:

AI systems use data collected by humans — and this data can be biased:

Example: If past data only shows men as doctors and women as nurses, AI might wrongly predict only men as
doctors in the future.

Bias can come from:

Sample bias (wrong data sample),

Measurement bias (errors in measuring),

Exclusion bias (leaving out important data),

Noise bias (random errors),

Prejudicial bias (personal views affecting data),

Accidental bias (mistakes).
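
To make the doctor/nurse example above concrete, a quick count of the training data can expose a skewed (sample-biased) set before any model learns from it. The records and column names below are invented for the example.

```python
"""Quick sample-bias check on a made-up training set."""

from collections import Counter

training_rows = [
    {"profession": "doctor", "gender": "male"},
    {"profession": "doctor", "gender": "male"},
    {"profession": "doctor", "gender": "male"},
    {"profession": "nurse", "gender": "female"},
    {"profession": "nurse", "gender": "female"},
]

# Count how genders are distributed within each profession.
by_group = Counter((row["profession"], row["gender"]) for row in training_rows)
print(by_group)
# Counter({('doctor', 'male'): 3, ('nurse', 'female'): 2})
# Every doctor in this sample is male, so a model trained on it could wrongly
# learn that only men are doctors - the bias described above.
```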


3.

---

4. Interfacing with Humans:

AI systems need to interact well with humans. The way users see, hear, and touch the system is important. A
system that doesn’t change as the user learns may become hard to use.

For businesses, websites and apps should give a good user experience. Understanding why and how customers
use them helps improve AI systems.

User experience design makes sure the system is easy and useful for the customer at every step.

---


3rd
6th: Decision–Action–Feedback Cycle (Easy Words)
In business, decisions are made step by step, again and again, to improve slowly. Natural Intelligence (NI) helps
in these steps by thinking about what will happen because of each decision.

Table 10.2 shows how this process helps improve automation and optimization over time. AI is used carefully
to support human decision-making and to learn from the results of each decision in a helpful feedback loop.
7th: SAE Levels of Self-Driving Cars (Easy Words)

The Society of Automotive Engineers (SAE) has made a model that explains 6 levels of self-driving cars. These
levels go from Level 0 (fully manual) to Level 5 (fully automatic, no driver needed). This model is used all
over the world by car makers.
Level 0: No Automation

The driver does everything — steering, braking, and accelerating. Even if the car has things like automatic
brakes or automatic gears, the driver is still in full control.

Level 1: Driver Assistance

The car helps with only one task — like cruise control, which keeps the car at a steady speed and distance from
other cars. The driver controls everything else.

Level 2: Partial Automation

The car can control both steering and speed (brake, accelerate), but the driver must stay alert and ready to take
control anytime. Example: Tesla Autopilot, Volvo Pilot Assist.

Level 3: Conditional Automation

The car can handle some driving tasks by itself and make decisions like changing lanes or overtaking slow cars.
But the driver must be ready to take control if needed. Example: Audi A8, Honda Legend Sedan.

Level 4: High Automation

The car can drive on its own without human help in some places (like certain cities). It can handle problems or
failures itself, but humans can also take control if they want. These cars work only in special areas (called
geofencing).

Level 5: Full Automation

The car drives completely on its own everywhere, in any weather or road. No steering wheel or pedals are
needed. These cars are still being tested and are not yet sold to the public.
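
For quick revision, the six levels can be written down as a small lookup structure. This sketch simply restates the descriptions above in code form.

```python
"""SAE driving-automation levels as a simple lookup (summary of the text above)."""

from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # driver does all steering, braking, accelerating
    DRIVER_ASSISTANCE = 1       # car helps with one task, e.g. cruise control
    PARTIAL_AUTOMATION = 2      # car steers and controls speed; driver stays alert
    CONDITIONAL_AUTOMATION = 3  # car handles some tasks; driver must take over if needed
    HIGH_AUTOMATION = 4         # no human needed, but only inside geofenced areas
    FULL_AUTOMATION = 5         # drives itself everywhere; no wheel or pedals needed

print(SAELevel(2).name)  # PARTIAL_AUTOMATION, e.g. Tesla Autopilot, Volvo Pilot Assist
```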


8th: Benefits of Self-Driving (Autonomous) Cars – Easy Words




Self-driving cars (also called Autonomous Vehicles or AVs) will bring many benefits to people and society.
Even people who doubt this technology agree that these benefits are important. Some main benefits are:

1. Safety

Most road accidents today happen because of human mistakes like drunk driving, being distracted (like using
phones), or being too old to drive safely.

A study in the US showed that 94% of accidents are caused by human errors. Self-driving cars can reduce
these accidents because they don’t get drunk, tired, or distracted.

Also, safety systems like automatic braking or lane warnings (called ADAS) are already saving lives in
today’s cars. In the future, self-driving cars will have even more safety features to prevent accidents.

2. Less Traffic Jams (Congestion)

Traffic jams waste time, fuel, and cause stress. Self-driving cars will talk to each other to keep traffic moving
smoothly. This means:
✔️Less waiting in traffic.
✔️Less fuel use.
✔️Less pollution.
✔️Less stress for people.

3. Less Pollution

Today’s cars burn petrol or diesel, which pollutes the air. In the future, Level 5 self-driving cars will be
electric, with no fuel or smoke. No fuel-burning means cleaner air and a healthier environment.

4. Less Need for Parking Space

Right now, cars need parking spaces everywhere — offices, malls, homes. Self-driving cars can drop you at the
door and go away to pick up another passenger. This will free up land used for parking for parks, shops, or
other things.

5. Better Quality of Life

In self-driving cars, everyone becomes a passenger — there is no stress of driving:

✔️ People can relax, watch movies, read, or work.
✔️ Elderly and disabled people can travel easily without help.
✔️ Parents can send their kids to school safely without driving.

6. Cost Savings
Self-driving cars are costly now, but in the future, they can save money by:
✔️Reducing accidents and insurance costs.
✔️Saving fuel.
✔️Reducing delivery costs.
✔️Making travel time shorter.

9th: Analysis of the Human Driving Cycle

Foreground Conscious Cycle (Easy Words)


When a person drives a car, their mind and body follow this cycle again and again. This cycle has 4 main steps:

1. Perception

The driver uses their eyes to see traffic lights, road signs, words on the road, and other vehicles. Mirrors in the
car and on the roadside also help the driver see things around.

2. Scene Generation

The brain receives all this information from the eyes and other senses. It creates a complete picture of what’s
happening around the car — where other cars, people, signals, and road signs are. The driver understands the
whole scene.

3. Planning

Based on this picture, the driver decides what to do next. For example — when the signal turns green, the driver
knows traffic will start moving and they must get ready to drive.

4. Action

The driver presses the gas pedal, releases the brake, and steers the car. After this, the cycle starts again — the
driver keeps seeing, thinking, planning, and acting the whole time they are driving.

Background Unconscious Cycle (Easy Words)

While the driver is actively thinking about driving, there is also another cycle happening in the background —
without the driver thinking much about it. This cycle also has 4 steps:

1. Information Filtering

When driving, the driver sees many things like shops, buildings, trees — but the brain ignores these
unimportant things so the driver can focus only on driving.

2. Risk Estimation

A good driver always looks for possible dangers — like a child standing near the road or a cyclist nearby. The
driver thinks, "What if the cyclist suddenly comes in front?" and gets ready for it.

3. Exception Handling

If something unexpected happens — like the cyclist moves wrongly or the child runs across — the driver can
quickly slow down or stop the car to stay safe. This is called handling exceptions or emergencies.

4. Performance Evaluation

After doing an action, the driver may think, "Did I miss something? Was I careful enough?" This helps the
driver to be more alert next time and improve their driving.

10th: Self-Driving (AV) Car Cycle
A self-driving car (Autonomous Vehicle or AV) also follows the same 4-step cycle as a human driver:
Perception, Scene Generation, Planning, and Action.

1. Perception

Self-driving cars use special sensors to "see" the world. These include:

Ultrasonic Sensors:
These send out sound waves (not heard by humans) to detect nearby objects like people or walls. They
work well even in bad weather.

Cameras:
These are the cheapest and most common sensors. Cameras are placed all around the car — front, back,
and sides. They help the car see traffic signs, lanes, signals, people, and other vehicles. But cameras
need good light to work properly.

Radar:
Uses radio waves to find out how far and fast objects (like cars) are moving. Radar works well for long
distances and helps in features like cruise control.

Lidar:
Uses laser light to make a 3D map of the area around the car. It is very accurate but expensive. Some
companies, like Tesla, are using other cheaper technologies instead.

GPS (Global Positioning System):


Shows the car’s location on the map using satellites. It works well in open areas but may not work inside
tunnels or underground.

2. Scene Generation

The car’s computer combines all the information from sensors to create a 3D view of the surroundings — roads,
cars, people, signs, etc. This helps the car understand what’s happening around it, similar to how a human brain
makes sense of what it sees.

3. Planning

The car’s powerful computer (like Nvidia Drive AGX PEGASUS) decides what to do — whether to speed up,
slow down, turn, or stop. These computers process a lot of data very fast, many times faster than normal
computers.

4. Action

The computer then sends commands to the car’s parts — like wheels, brakes, and steering — to perform the
action (move, stop, or turn). This whole cycle happens in milliseconds (one-thousandth of a second).
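
The whole cycle can be sketched as a small control loop. Everything below (the sensor readings, the scene structure, the braking rule) is a made-up illustration of the four steps, not how any real AV software works.

```python
"""Toy sketch of the AV cycle: Perception -> Scene Generation -> Planning -> Action."""

def perceive():
    # Step 1: gather raw readings from cameras, radar, lidar, GPS (faked here).
    return {"camera": "red_light_ahead", "radar_distance_m": 12.0, "gps": (12.97, 77.59)}

def generate_scene(readings):
    # Step 2: fuse the readings into one picture of the surroundings.
    return {
        "traffic_light": "red" if readings["camera"] == "red_light_ahead" else "green",
        "obstacle_distance_m": readings["radar_distance_m"],
        "position": readings["gps"],
    }

def plan(scene):
    # Step 3: decide what to do next based on the scene.
    if scene["traffic_light"] == "red" or scene["obstacle_distance_m"] < 5.0:
        return "brake"
    return "accelerate"

def act(command):
    # Step 4: send the command to the brakes, throttle, and steering.
    print("actuator command:", command)

# In a real vehicle this loop repeats many times per second; one pass is shown here.
act(plan(generate_scene(perceive())))
```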

11th: Humans vs. Self-Driving Cars


The way humans drive — planning, doing, and predicting — is very similar to how self-driving cars (AVs) are
made to work. These steps are turned into computer programs and put into the AV’s software. Self-driving cars
will keep learning and getting better by driving in the real world and in computer simulations.

Machines can make decisions in milliseconds, much faster than humans (who take a few seconds). So, in some
cases, AVs may react quicker and safer than humans.

But machines are not always better than humans. Why?

A study in the USA checked over 5,000 road accidents and grouped the mistakes into these five reasons:

1. Sensing and Seeing Mistakes (Example: not seeing hazards or being distracted) — caused 23% of accidents.

2. Incapacitation (Example: drunk driving, being sleepy, or sick) — caused 10% of accidents.

3. Planning and Decision Mistakes (Example: driving too fast or too close) — a major cause of accidents.

4. Execution and Performance Mistakes (Example: wrong turns, poor handling of the car) — another major cause.

5. Predicting Mistakes (Example: wrongly guessing what other cars or people will do).

Most accidents (two-thirds) happen because of planning, execution, or prediction mistakes, not because of
sensing or being sleepy/drunk.

AVs don’t get drunk, tired, or distracted. But this study says that even perfect AVs will only be able to stop
about one-third of accidents — the rest needs human thinking in very tricky or surprising situations.

Also, AVs cannot break the laws of nature — for example, if another car stops suddenly or an animal jumps in
front, accidents may still happen. Machines can fail too.

So, self-driving cars may not reduce accidents to zero. But they can make driving much safer than it is today,
where every year 37,000 people die and millions get injured in road accidents.

12th: Unintended Consequences of Automated Car Technology

1. Loss of Jobs

One of the most direct consequences of automated cars is the potential loss of jobs, especially for people
employed as professional drivers. This includes taxi drivers, truck drivers, delivery personnel, and ride-sharing
drivers (like those working for Uber or Ola). As self-driving cars become more common, companies may prefer
them over human drivers because they can operate continuously without rest, reducing labor costs. This shift
could result in widespread unemployment in sectors that heavily depend on human-driven transportation.

2. Blow to the Auto Industry


The traditional automobile industry could face disruption as demand shifts from privately-owned cars to shared
autonomous vehicle fleets. Fewer people may feel the need to buy personal vehicles when they can simply
summon a self-driving car when needed. As a result, car manufacturers may experience declining sales and
profits, forcing them to rethink their production models and business strategies, such as moving toward mobility
services rather than car sales.

3. Blow to the Auto Insurance Industry

Auto insurance companies may also suffer as automated vehicles reduce the number of accidents caused by
human error (which accounts for most road accidents today). With fewer claims and reduced risk, insurance
premiums are likely to fall. Moreover, liability might shift from individual drivers to car manufacturers or
software providers, changing the very structure of the insurance market. This shift could significantly reduce the
revenue streams of traditional auto insurers.
