Artificial Intelligence (AI)
AI is a branch of computer science concerned with the study and creation of computer systems that exhibit INTELLIGENCE.
How can intelligent behaviour be achieved through computational means?
What is Artificial Intelligence (AI)?
• Definition: AI refers to the simulation of human intelligence in machines programmed to think and learn.
• Key Aspects:
- Learning: Ability to improve performance over time
- Reasoning: Making decisions based on available information
- Problem-Solving: Finding solutions to complex issues
- Perception: Understanding sensory inputs like vision and speech
The Foundations of AI (core areas)
• Machine Learning: Algorithms that allow systems to learn from data
• Neural Networks: Computational models inspired by the human brain
• Natural Language Processing (NLP): Understanding and generating human language
• Robotics: Designing and building robots to perform tasks
• Computer Vision: Interpreting and analyzing visual data such as images and videos
• Speech Processing: Analyzing and manipulating speech signals digitally
AI: Early Concepts (Ancient Times to 19th Century)
• Mythology and Automata:
- The idea of creating artificial beings can be traced back to ancient myths and early automata, such as mechanical birds or statues designed to move in lifelike ways.
• Mathematical Foundations:
- In the 19th century, figures like George Boole and Ada Lovelace made significant contributions. Boole developed Boolean algebra, which underpins modern computer logic, and Lovelace is often considered the first computer programmer for her work on Charles Babbage's Analytical Engine.
Birth of AI (1950s–1960s)
1950s:
• Alan Turing: Proposed the Turing Test in 1950 as a measure of a machine's ability to exhibit intelligent behavior indistinguishable from a human.
• Dartmouth Conference (1956): Widely considered the founding event of AI as a field. John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon organized the conference, where the term "artificial intelligence" was coined.
• 1960s:
• Early Research: Researchers developed the first AI programs, such as ELIZA, a natural language processing program by Joseph Weizenbaum, and SHRDLU, an early example of a program capable of understanding and manipulating objects in a virtual world.
AI Winter (1970s–1980s)
• 1970s:
• Disillusionment: AI research faced setbacks due to high expectations and limited computational resources. This period, known as the "AI Winter," saw reduced funding and interest.
• 1980s:
• Expert Systems: There was a resurgence of interest in AI with the development of expert systems, which used rule-based methods to emulate human expertise in specific domains, such as medical diagnosis and financial forecasting.
Revival and Growth of AI (1990s–present)
• 1990s:
• Machine Learning: AI research shifted towards machine learning and statistical approaches. Significant milestones included IBM's Deep Blue defeating world chess champion Garry Kasparov in 1997.
• 2000s:
• Data and Computing Power: The availability of large datasets and increased computational power spurred advancements in AI. Technologies such as support vector machines and neural networks began to show their potential.
Modern AI Era (2010s–Present)
2010s:
• Deep Learning: Breakthroughs in deep learning, a subset of machine learning using neural networks with many layers, led to significant advancements. In 2012, the deep learning model AlexNet won the ImageNet competition by a large margin, showcasing the power of deep learning in computer vision.
• AI in Everyday Life: AI systems began to permeate daily life with applications like virtual assistants (e.g., Siri, Alexa), recommendation algorithms, and advanced language models.
2020s:
• Generative AI: Advances in generative models, such as OpenAI's GPT series, demonstrated remarkable capabilities in natural language understanding and generation. AI's role in various fields, including healthcare, finance, and creative industries, continued to expand.
• Ethics and Regulation: As AI technology advanced, discussions about ethics, bias, and regulation gained prominence, focusing on ensuring responsible development and use of AI systems.
Timeline: History of AI
• Early Developments:
- 1950s: Alan Turing's work and the Turing Test
- 1956: Dartmouth Conference – birth of AI as a field
• Growth and Setbacks:
- 1960s–1970s: Early AI systems and optimism
- 1980s: Expert systems and the AI Winter
• Modern Era:
- 2000s–Present: Rise of machine learning, big data, and deep learning
Further Reading:
• Evolution of Artificial Intelligence:
https://www.linkedin.com/pulse/evolution-artificial-intelligence-from-dream-reality-shamim-hossain-kaqsc
Applications of AI
Everyday Applications:
- Personal Assistants: Siri, Alexa, Google Assistant
- Recommendation Systems: Netflix, Amazon
- Healthcare: Diagnostics, personalized medicine
Industry Applications:
- Finance: Fraud detection, algorithmic trading, risk management
- Manufacturing: Predictive maintenance, automation
- Transportation: Self-driving cars, route optimization
Types of Artificial Intelligence
• Narrow AI
• General AI
• Superintelligent AI
Narrow Artificial Intelligence
• Narrow AI, also referred to as weak AI, is application- or task-specific AI. It is programmed to perform singular tasks such as facial recognition, speech recognition in voice assistants, or driving a car.
• Narrow AI simulates human behavior based on a limited set of parameters, constraints, and contexts.
• Some common examples of Narrow AI include:
- speech and language recognition, demonstrated by Siri on iPhones,
- vision recognition, showcased by self-driving cars, and
- recommendation systems, such as Netflix's suggestions of shows based on users' online activity.
Applications and Limitations of Narrow AI
• Narrow AI finds immense application in various fields, but its limitations lie in its inability to generalise knowledge across different domains.
• Narrow AI can produce erroneous results if it encounters data or scenarios outside its designated parameters.
• It lacks the creativity and adaptability that general AI aims to offer.
General Artificial Intelligence
• General AI, also known as strong AI or human-level AI, represents the concept of machines that possess the ability to understand, learn, and perform any intellectual task that a human can do.
• General AI aims to replicate human-level intelligence and reasoning.
• Although General AI hasn't been realized yet, it has drawn the attention of top tech companies such as Microsoft, which invested $1 billion in General AI through the venture OpenAI.
• Also, in an attempt to achieve strong AI, Fujitsu has built the K computer, which is recognized as one of the fastest supercomputers in the world.
• Similarly, China's National University of Defense Technology has built Tianhe-2, a 33.86-petaflops supercomputer.
Superintelligent Artificial Intelligence
• Superintelligent AI is a type of AI that surpasses human intelligence and can perform any task better than a human.
• Superintelligent AI systems would not only understand human sentiments and experiences but could also evoke emotions, beliefs, and desires of their own, similar to humans.
• Although the existence of Superintelligent AI is still hypothetical, the decision-making and problem-solving capabilities of such systems are expected to be far superior to those of human beings.
• Typically, a Superintelligent AI system would think, solve puzzles, make judgments, and take decisions independently.
Machine Learning
• Machine learning (ML) allows computers to learn and make decisions without being explicitly programmed. It involves feeding data into algorithms to identify patterns and make predictions on new data.
• Machine learning is used in various applications, including image and speech recognition, natural language processing, and recommender systems.
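The idea of "feeding data into algorithms to identify patterns" can be sketched in a few lines of Python. This is a toy example with made-up numbers, not a method from the slides: it fits a one-parameter line y = w·x by least squares, then predicts for an input it never saw.

```python
# Toy illustration of learning from data: estimate w in y = w * x
# from example pairs, then predict for a new input. No rules are
# hand-coded -- the parameter comes entirely from the data.

def fit_slope(xs, ys):
    """Least-squares estimate of w for the model y = w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0, 4.0]   # example inputs
ys = [2.1, 3.9, 6.2, 7.8]   # observed outputs (roughly y = 2x)

w = fit_slope(xs, ys)       # "training": learn the pattern
pred = w * 5.0              # "inference": predict for unseen x = 5
print(w, pred)              # w close to 2.0, prediction close to 10
```

Real systems use far richer models, but the shape is the same: data in, parameters learned, predictions out.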
Applications: Why do we need Machine Learning?
• Machine learning algorithms learn from data, train on patterns, and solve or predict complex problems beyond the scope of traditional programming. This drives better decision-making and tackles intricate challenges efficiently.
Need for Machine Learning:
1. Solving Complex Problems
• Traditional programming struggles with tasks like image recognition, natural language processing (NLP), and medical diagnosis. ML, however, thrives by learning from examples and making predictions without relying on predefined rules.
Example Applications:
• Disease diagnosis, personalized medicine, and drug discovery in healthcare.
• Language translation and sentiment analysis.
2. Handling Large Volumes of Data
• With the internet's growth, the data generated daily is immense. ML effectively processes and analyzes this data, extracting valuable insights and enabling real-time predictions.
Use Cases:
• Fraud detection in financial transactions.
• Social media platforms like Facebook and Instagram predicting personalized feed recommendations from billions of interactions.
3. Automating Repetitive Tasks
• ML automates time-intensive and repetitive tasks with precision, reducing manual effort and error.
Examples:
• Email Filtering: Gmail uses ML to keep your inbox spam-free.
• Chatbots: ML-powered chatbots resolve common issues like order tracking and password resets.
• Data Processing: Automating large-scale invoice analysis for key insights.
4. Personalized User Experience (Recommendation Systems)
• ML enhances user experience by tailoring recommendations to individual preferences. Its algorithms analyze user behavior to deliver highly relevant content.
Real-World Applications:
• Netflix: Suggests movies and TV shows based on viewing history.
• E-Commerce: Recommends products you're likely to purchase (e.g., Amazon, Flipkart).
5. Self-Improvement in Performance
• ML models evolve and improve with more data, making them smarter over time. They adapt to user behavior and refine their performance.
Examples:
• Voice Assistants (e.g., Siri, Alexa): Learn user preferences, improve voice recognition, and handle diverse accents.
• Search Engines: Refine ranking algorithms based on user interactions (e.g., Google).
• Self-Driving Cars: Enhance decision-making using millions of miles of data from simulations and real-world driving.
What Makes a Machine "Learn"?
• A machine "learns" by recognizing patterns and improving its performance on a task based on data, without being explicitly programmed.
• The process involves:
• Data Input: Machines require data (e.g., text, images, numbers) to analyze.
• Algorithms: Algorithms process the data, finding patterns or relationships.
• Model Training: Machines learn by adjusting their parameters based on the input data using mathematical models.
• Feedback Loop: The machine compares predictions to actual outcomes and corrects errors (via optimization methods like gradient descent).
• Experience and Iteration: Repeating this process with more data improves the machine's accuracy over time.
• Evaluation and Generalization: The model is tested on unseen data to ensure it performs well on real-world tasks.
• In essence, machines "learn" by continuously refining their understanding through data-driven iterations, much like humans learn from experience.
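The feedback loop described above (predict, compare, correct via gradient descent, iterate) can be sketched with the smallest possible model: a single weight. The data and learning rate below are invented for illustration only.

```python
# Minimal sketch of the learn-by-feedback loop: one weight w,
# mean squared error as the "mistake" measure, and gradient
# descent as the correction step.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]   # true relationship: y = 3x

w = 0.0                       # initial guess (model parameter)
lr = 0.01                     # learning rate

for step in range(200):       # experience and iteration
    preds = [w * x for x in xs]                 # data input -> predictions
    # gradient of mean squared error with respect to w
    grad = sum(2 * (p - y) * x
               for p, y, x in zip(preds, ys, xs)) / len(xs)
    w -= lr * grad            # feedback loop: correct the parameter

print(round(w, 2))            # converges toward 3.0
```

Each pass shrinks the error a little; with enough iterations the learned weight settles near the true value, which is exactly the "refining through iteration" the bullet points describe.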
Importance of Data in Machine Learning
• Data is the foundation of machine learning (ML). Without quality data, ML models cannot learn, perform, or make accurate predictions.
• Data provides the examples from which models learn patterns and relationships.
• High-quality and diverse data improves model accuracy and generalization.
• Data ensures models understand real-world scenarios and adapt to practical applications.
• Features derived from data are critical for training models.
• Separate datasets for validation and testing assess how well the model performs on unseen data.
• Data fuels iterative improvements in ML models through feedback loops.
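The point about separate validation and test datasets is commonly implemented as a train/validation/test split. The 70/15/15 proportions below are a typical convention (an assumption, not stated in the slides):

```python
# Sketch of a train/validation/test split: shuffle once, then carve
# the data into disjoint parts so evaluation uses unseen examples.
import random

data = list(range(100))       # stand-in for 100 labeled examples
random.seed(0)                # fixed seed for reproducibility
random.shuffle(data)          # shuffle before splitting

train = data[:70]             # 70% for fitting model parameters
val = data[70:85]             # 15% for tuning choices (hyperparameters)
test = data[85:]              # 15% kept untouched for the final check

print(len(train), len(val), len(test))   # 70 15 15
assert not set(train) & set(test)        # no leakage between splits
```

Keeping the test set untouched until the very end is what makes the final accuracy number an honest estimate of performance on unseen data.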