Responsible AI & Ethics
Introduction – Module 1
Dr. Shiwani Gupta
HoD AI&ML
2.
Responsible AI: Ethics, Society & Humanity
•Transparency & explainability of AI decisions
•Fairness: avoiding bias & discrimination
•Accountability: who is responsible for AI
errors?
•Privacy & data protection
•Safety & reliability of AI systems
•Ethical design → “Do no harm” principle
•AI for inclusive growth (education, healthcare, accessibility)
•Reducing inequalities & digital divide
•AI in governance & public services (e.g., smart cities)
•Ethical use in law enforcement & surveillance
•Protecting jobs: reskilling & human-AI collaboration
•Ensuring AI aligns with social values & human rights
•AI aligned with long-term human well-being
•Balancing innovation with moral responsibility
•Human-centric AI → technology serving people, not replacing them
•Safeguards against misuse (autonomous weapons, deepfakes,
misinformation)
•Global cooperation for sustainable AI policies
•AI as a tool for solving humanity’s challenges (climate, health, poverty)
3.
What is Responsible AI?
• Responsible AI refers to the development and deployment
of AI systems in a way that ensures they are fair,
transparent, accountable, safe, and respectful of human
values and rights.
Core Principles
• Fairness → AI must avoid bias & discrimination.
• Accountability → Clear responsibility for AI outcomes.
• Transparency → Explainable decisions, not “black-box”
AI.
• Privacy & Security → Protection of personal and sensitive
data.
“Can you think of a real-world AI system where one of these principles was ignored? What happened as a result?”
(e.g., biased recruitment AI, facial recognition misuse, etc.)
4.
Case study – Microsoft Tay chatbot incident
Real Failure/Success?
Background:
• Launched by Microsoft in 2016 on Twitter.
• Designed to “learn” from conversations with users.
• Within 16 hours, Tay started posting racist, sexist, and offensive tweets.
Failure in Ethics → Tay revealed how quickly AI can adopt human biases.
Success in Learning → Highlighted the need for ethical guardrails, content filters, and
continuous monitoring.
“Do you think Tay was a failure or a success? If you were on Microsoft’s team, how would
you redesign Tay to prevent misuse?”
5.
Why Responsible AI? (risks: bias, misuse, black-box AI)
Key Risks:
• Bias → AI can reinforce gender, racial, or economic inequalities.
• Misuse → Deepfakes, surveillance, weaponization.
• Black-Box AI → Lack of transparency in decision-making.
Class Poll “Which of these risks worries you the most: Bias, Misuse, or
Black-box AI?”
Example:
• Bias in hiring AI (Amazon case) → Favored male candidates because of
biased training data.
“If you were the data scientist, how would you fix this bias?”
6.
Responsible AI principles (UNESCO, OECD, EU AI Act)
Common Themes Across Frameworks:
✅ Transparency – Explainable and traceable AI.
✅ Fairness & Non-discrimination – Avoid algorithmic bias.
✅ Accountability – Who is responsible when AI goes wrong?
✅ Human-centric Design – AI should serve humanity.
✅ Safety & Security – Protect against misuse and cyber threats.
✅ Privacy & Data Protection – GDPR alignment, consent-based usage.
• Mini Case
• Example: Facial Recognition in Public Spaces
• OECD → Focus on fairness.
• EU AI Act → Risk-based restrictions.
• UNESCO → Human rights protection.
“Which principle should dominate in this case?”
7.
Quick Poll
“Should AI have the right to make decisions in hiring?”
•Pro-AI Group: Argue efficiency, reduced human bias, cost-saving.
•Anti-AI Group: Argue fairness, accountability, ethical risks.
“Who should be held accountable if AI unfairly rejects a candidate?”
8.
Video Clip suggestion – Ethics of AI (short TED talk)
• https://youtu.be/eXdVDhOGqoE
• https://youtu.be/9DXm54ZkSiU
•“What single insight stood out the most from the talk?”
•“Did it reinforce or challenge what you already believed?”
9.
Compare “Responsible AI vs. Unethical AI” with examples
Responsible AI vs. Unethical AI
•Transparency: Responsible AI → algorithms are explainable (e.g., healthcare AI explaining a diagnosis); Unethical AI → black-box credit scoring system denying loans without reasons
•Fairness & Non-Discrimination: Responsible AI → recruitment AI tested for gender neutrality; Unethical AI → hiring AI showing bias against women/minorities
•Privacy & Data Protection: Responsible AI → face recognition used with consent in airports; Unethical AI → surveillance AI tracking citizens without approval
•Accountability: Responsible AI → self-driving car logs all decisions for auditing; Unethical AI → autonomous vehicle hides error data after accidents
•Social Good vs. Exploitation: Responsible AI → AI predicting crop yield for farmers; Unethical AI → deepfake AI used for political misinformation
“Which area (transparency, fairness, privacy, accountability, or
social impact) do you think is the most urgent for Responsible AI?”
10.
“Which of these is NOT a principle of Responsible AI?”
A) Fairness and non-discrimination
B) Transparency and explainability
C) Human oversight and accountability
D) Maximizing user engagement at any cost
E) Collecting as much data as possible without consent
F) Keeping models intentionally opaque
Answer: D) Maximizing user engagement at any cost
Which principle feels hardest to uphold in practice and why?
11.
Morality & Ethics in AI
• Morality = personal beliefs about right/wrong.
• Ethics = agreed standards for groups/professions.
12.
Ethical frameworks in AI
Utilitarianism, Deontology, Virtue ethics
• Utilitarianism: Choose the action that maximizes overall good (benefits minus harms) for the greatest number.
• Deontology: Follow duties, rights, and rules—some actions are wrong regardless of outcomes.
• Virtue ethics: Act as a virtuous agent would—cultivate character traits like fairness, honesty, courage.
• Utilitarianism: Who are all the stakeholders? What are short/long-term benefits and harms? What option yields the highest net
positive impact?
• Deontology: What rights could be violated? What rules, laws, or policies apply? Would I accept this as a universal rule if
everyone did it?
• Virtue ethics: What would a wise, just, and compassionate practitioner do? Does this align with our organization’s virtues and
culture?
• Utilitarianism
• Strengths: Outcome-focused, scalable to population-level impacts, useful for trade-offs.
• Risks: Can justify harming minorities; hard to quantify intangible harms; “ends justify means” pitfalls.
• Deontology
• Strengths: Protects rights and dignity; clear constraints; good for compliance and safety.
• Risks: Can be rigid; rule conflicts; may ignore beneficial outcomes.
• Virtue ethics
• Strengths: Promotes trustworthy culture; good for ambiguous contexts; supports long-term integrity.
• Risks: Vague/subjective; depends on role models and norms; hard to operationalize.
13.
Applying to the AI lifecycle
Problem framing
• Utilitarianism: Define social benefit and measurable outcomes; map affected groups.
• Deontology: Check legal/rights constraints (privacy, consent, non-discrimination).
• Virtue: Ensure intent aligns with organizational values; avoid manipulative problem statements.
Data
• Utilitarianism: Evaluate dataset utility vs. collection risks; minimize harm.
• Deontology: Lawful basis, consent, data minimization, purpose limitation.
• Virtue: Respect data subjects; avoid exploitative sources.
Modeling
• Utilitarianism: Optimize for aggregate welfare metrics; run impact assessments.
• Deontology: Hard constraints (e.g., no use of protected attributes unless for fairness correction).
• Virtue: Prefer interpretable models where stakes are high; practice humility about limitations.
Evaluation
• Utilitarianism: Cost–benefit simulations, scenario stress tests across populations.
• Deontology: Red-line tests (rights violations, safety thresholds).
• Virtue: Peer review, ethical retrospectives, candor in reporting failures.
Deployment
• Utilitarianism: Phased rollouts; monitor real-world harm/benefit ratios.
• Deontology: User rights (appeal, explanation), human oversight, opt-out.
• Virtue: Transparent communication; accountability when issues arise.
Governance
• Utilitarianism: Adjust policies based on impact metrics.
• Deontology: Audit trails, incident reporting, compliance reviews.
• Virtue: Reward ethical behavior; cultivate whistleblowing safety.
14.
• Core aim: Utilitarianism → maximize overall good; Deontology → respect duties/rights; Virtue ethics → cultivate good character
• Decision style: Utilitarianism → consequences; Deontology → rules/constraints; Virtue ethics → practical wisdom
• Guardrails: Utilitarianism → harm–benefit thresholds; Deontology → red lines (do-not-cross); Virtue ethics → cultural norms/virtues
• Risk: Utilitarianism → majority tyranny; Deontology → rigidity; Virtue ethics → vagueness
• Best for: Utilitarianism → trade-offs at scale; Deontology → safety, rights, compliance; Virtue ethics → ambiguity, culture-building
Example: Allocating GPU time for a medical AI
• Utilitarianism: Prioritize projects with the highest expected QALYs saved and broad access; quantify opportunity costs.
• Deontology: Ensure no exclusion based on protected traits; honor informed consent for data; follow medical device rules.
• Virtue ethics: Favor transparent teams with strong safety culture; choose openness over secrecy when publishing limits.
• List stakeholders and impacts; estimate net benefit and uncertainties.
• Identify applicable rights, laws, and hard constraints; set red lines.
• State which virtues guide this decision; document reasoning and trade-offs.
• Run a pre-mortem: how could this cause harm despite good intentions?
• Set monitoring, escalation, and rollback plans before launch.
15.
Example – Self-driving cars & the Trolley Problem
The Trolley Problem is a classic ethical thought experiment that has been widely discussed in the context of self-driving cars. It typically involves
a scenario where an autonomous vehicle must make a decision in an unavoidable collision situation—such as choosing to sacrifice one person to
save several others.
Key points in the example of self-driving cars and the Trolley Problem include:
• Unavoidable Collision Scenario: The Trolley Problem framed for autonomous vehicles asks whether the car should prioritize the lives of its
passengers or pedestrians when a collision is unavoidable. For example, should the car swerve to kill one pedestrian or stay on course and kill
multiple passengers? This mirrors the classic dilemma of diverting a runaway trolley to kill one person instead of many
• Moral Norms and Intentionality: The problem raises the ethical question of whether it is acceptable to intentionally cause harm to one
individual to save others, which violates certain moral norms. It tests the ethics of action versus inaction in life-and-death decisions
• Real-World Applicability Debate: Some argue that framing self-driving car accidents as a trolley problem oversimplifies real-world situations.
They suggest that many incidents are accidents rather than deliberate ethical choices, and that the technology's decision-making is more
complex, involving braking dynamics, traction, and risk assessment rather than straightforward moral calculations
• Technological Capabilities: Autonomous vehicles may have advanced braking and control capabilities that differ from human drivers,
potentially reducing the frequency or severity of such dilemmas. This implies that the Trolley Problem may not fully capture how self-driving
cars operate in practice
• Ethical Programming Challenges: Developers face the challenge of programming cars to make decisions in moral dilemmas, which involves
choosing the "lesser evil," such as minimizing total harm, yet this raises complex questions about whose lives are prioritized
16.
“Should AI prioritize saving young lives over old?”
Ethical Dilemma
Key insights on this issue include:
• Public Preferences: Research indicates that many people tend to prefer saving younger lives over older ones when AI systems are faced
with life-and-death decisions. For example, a study found that people generally believe driverless cars should prioritize saving young
people rather than the elderly.
• Ethical Considerations: The dilemma reflects competing values—prioritizing the young may be seen as maximizing potential years of
life saved, while prioritizing fairness and equality might argue against age-based discrimination. This raises questions about ageism and
whether AI systems should embed such biases or seek to avoid them.
• Complexity of AI Decision-Making: Programming AI to make such decisions involves challenging ethical trade-offs. Should AI always
opt for the path that causes the least harm overall or incorporate other moral principles such as fairness and respect for all ages? These are
unresolved questions in AI ethics.
• Social Impact: Beyond immediate life-and-death scenarios, AI has the potential to improve the lives of older adults in many ways, such as
reducing social isolation and enabling better health management, underscoring the importance of designing AI systems that respect and
support all age groups.
While there is a tendency among people to prioritize young lives in ethical AI decision-making scenarios, this raises complex moral questions
about fairness and ageism. The design of AI systems must carefully consider these issues to avoid reinforcing societal biases and to respect
the dignity of individuals across all ages.
17.
AI moral dilemmas in healthcare (life support decisions)
AI applications in healthcare, particularly in making life support decisions, introduce complex moral dilemmas that intersect with ethics, patient rights, and medical
judgment.
Here are some key points regarding these dilemmas:
• Ethical Decision-Making Support: AI systems are increasingly used to assist clinicians in making difficult life support decisions, such as whether to continue or
withdraw treatment. These systems can analyze large datasets to predict outcomes and suggest interventions, potentially improving decision accuracy.
• Respecting Patient Preferences: A significant ethical challenge is ensuring that AI respects patients' values, preferences, and individual life situations. AI may lack
the nuanced understanding of a patient's personality or wishes, leading to decisions that conflict with what patients or their families want.
• Transparency and Consent: AI decision-making must be transparent to both clinicians and patients. Patients should give informed consent for AI involvement in
their care, but the complexity of AI algorithms can make this difficult, raising concerns about autonomy and trust.
• Bias and Fairness: Data used to train AI models may contain biases, which can lead to unfair or discriminatory decisions about who receives life support. This
raises justice concerns, especially for vulnerable populations who might be disproportionately affected.
• Determining Capacity and Best Interventions: AI may help identify when a patient lacks decision-making capacity and recommend interventions to restore it,
potentially aiding ethical decision-making about life support.
• Balancing Clinical Judgment and AI Recommendations: Ultimately, AI should support rather than replace human clinicians. Physicians must interpret AI outputs
critically and consider ethical implications, ensuring decisions align with both medical standards and ethical principles.
While AI has the potential to enhance life support decision-making by providing data-driven insights, it also raises moral dilemmas around patient autonomy, fairness,
transparency, and the risk of biased outcomes. Careful ethical guidance and oversight are essential when integrating AI into these critical healthcare decisions.
18.
Ethics in Surveillance & Facial Recognition
(privacy concerns)
1. Privacy Concerns: Unauthorized data collection, Mass surveillance, Lack of consent
2. Bias & Fairness: Algorithmic bias, Discrimination against minorities, Unequal accuracy
rates (gender, ethnicity)
3. Transparency & Accountability: Black-box AI systems, Lack of explainability,
Responsibility for errors
4. Security & Misuse: Data breaches, Misuse by governments or corporations, Hacking risks
5. Legal & Regulatory Issues: Absence of clear laws, Cross-border regulations, Enforcement
challenges
6. Social Impact: Public trust erosion, Psychological effects of being watched, Impact on
freedom of movement and speech
19.
Case Study – Cambridge Analytica & Data Ethics
Real Failure/Success?
Background: Cambridge Analytica, a political consulting firm, harvested data from millions of Facebook users without proper consent.
The data was used to profile voters and target political advertising (notably in the 2016 US elections and Brexit campaigns).
Ethical Issues
Privacy Violation – Personal data was collected without informed consent.
Manipulation of Behavior – Micro-targeted ads influenced political opinions.
Transparency – Lack of disclosure about data collection and use.
Accountability – Ambiguity over who should be held responsible (Facebook, Cambridge Analytica, political groups?).
Consequences (Failure Side)
Public trust in social media companies dropped.
Facebook faced investigations, fines, and reputational damage.
Cambridge Analytica eventually shut down.
Sparked global debates on data ethics, AI fairness, and surveillance capitalism.
Learning Outcomes (Success Side)
Raised awareness of data ethics worldwide.
Led to stronger data protection laws (e.g., GDPR enforcement in Europe).
Sparked academic research and policy discussions on responsible AI and data governance.
Provided a teaching example of what not to do in data handling.
20.
Ethical issues vs. AI examples
• Privacy: Unauthorized data collection, surveillance, consent → e.g., facial recognition in public spaces, smart assistants recording conversations.
• Bias & Fairness: Discrimination due to biased datasets or algorithms → e.g., hiring algorithms preferring certain genders/races, predictive policing tools.
• Transparency: “Black-box” models that make decisions without explanation → e.g., deep learning models in healthcare predicting risks without explainability.
• Accountability: Who is responsible when AI makes a harmful or incorrect decision? → e.g., self-driving car accidents and the liability of car manufacturer vs. programmer vs. user.
• Security & Misuse: Risk of hacking, malicious use, weaponization of AI → e.g., deepfakes used for misinformation, autonomous drones in warfare.
• Social Impact: Effect on employment, human dignity, trust, and freedom → e.g., job losses due to automation, psychological effects of constant surveillance.
21.
Need for Ethics in AI
Why does ethics matter?
1. Bias
AI systems learn from historical data, which often reflects existing social prejudices.
If unchecked, AI amplifies bias, e.g., facial recognition misidentifying people of certain ethnicities.
2. Discrimination
When AI biases translate into action, they can unfairly disadvantage groups.
Example: Biased hiring algorithms rejecting women or minority candidates.
3. Job Loss
Automation and AI-driven decision-making threaten traditional jobs.
Ethical concern: Who supports displaced workers? How can reskilling be ensured?
4. Inequality
AI benefits are often concentrated in wealthy corporations/countries.
Creates a digital divide, widening the gap between those with AI access and those without.
22.
Real-world failure
Amazon AI recruitment tool biased against women
• In 2014–2015, Amazon’s engineering team developed an AI system to automatically review and rate job applicants’ résumés on a 1–5 star scale (The Guardian, Euronews, Business Insider).
• The system was trained on 10 years of historical résumé data, predominantly submitted by men since the tech industry was male-dominated (The Guardian, American Civil Liberties Union, MIT Technology Review).
• As a result, the tool penalized résumés containing the word “women’s” (like “women’s chess club captain”) and unjustly downgraded graduates from all-women’s colleges (The Guardian, American Civil Liberties Union, IT Pro).
• Although Amazon attempted to mitigate these issues, it couldn’t ensure unbiased results across all patterns. Hence, by around 2015, it decided to abandon the project entirely.
23.
Key Lessons & Ethical Issues
• Training Data Bias: Using overwhelmingly male résumés ingrained gender preference into the AI.
• Unintended Discrimination: AI learned to penalize specific words or demographics, rather than job fitness.
• Lacking Explainability: No transparency on how decisions were made, making bias harder to detect.
• Ineffective Fixes: Even after tweaks, there was no guarantee that new biased patterns wouldn’t emerge.
• Ethical Oversight: Amazon halted the project when flaws couldn’t be resolved, showing ethical accountability.
“Was the failure of Amazon’s AI recruiting system simply a technical hiccup, or does it point to deeper issues in how we design AI systems?”
Discuss how ethical design, data diversity, and human oversight are essential to prevent such failure modes in
AI.
https://www.axios.com/2018/10/10/amazon-ai-recruiter-favored-men
24.
Bias in datasets (example: Facial recognition accuracy gap)
• Several studies (notably by MIT Media Lab, Joy Buolamwini’s “Gender Shades”
project, 2018) showed that commercial facial recognition systems from IBM,
Microsoft, and Face++ had much higher accuracy for lighter-skinned male faces
(up to 99%) compared to darker-skinned female faces (as low as 65%).
• This gap happened because training datasets were dominated by images of lighter-
skinned men, meaning the AI didn’t “learn” equally well for other demographics.
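Measuring this kind of gap is straightforward once evaluation results are tagged with a demographic attribute: compute accuracy per group and compare. A minimal sketch in Python, using small hypothetical arrays rather than the actual Gender Shades data:
```python
import pandas as pd

# Hypothetical evaluation results: true label, model prediction, and a
# demographic group tag for each test image (illustrative values only).
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["lighter_male", "lighter_male", "darker_female", "darker_female",
               "lighter_male", "darker_female", "darker_female", "darker_female"],
})

# Per-group accuracy exposes disparities that a single overall accuracy hides.
per_group_accuracy = (df["y_true"] == df["y_pred"]).groupby(df["group"]).mean()
print(per_group_accuracy)

# A simple "accuracy gap": best-served group minus worst-served group.
print("Accuracy gap:", per_group_accuracy.max() - per_group_accuracy.min())
```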
Risks of Dataset Bias
• Discrimination: Misidentification of women or people of color (e.g., wrongful
arrests, denial of services).
• Reinforced Inequality: If used in hiring, policing, or border control, errors
disproportionately affect marginalized groups.
• Loss of Trust: Public backlash and regulatory scrutiny against biased AI systems.
Interactive Poll
“Do you trust AI doctors more than human doctors?”
Discussion Prompts
• What advantages do AI doctors bring? (e.g., speed, data-driven accuracy, reduced human fatigue)
• What are the risks of bias in medical datasets? (e.g., misdiagnosis for underrepresented groups)
• Can AI doctors be trusted without human oversight?
Connect to Bias Example
👉 “If AI can misclassify faces due to poor datasets, imagine the risks if biased datasets affect medical diagnosis.”
27.
Regulations & Governance
EU AI Act, India’s NITI Aayog AI strategy
• EU AI Act (Europe’s Approach)
• Risk-based framework: Classifies AI into Unacceptable Risk, High Risk,
Limited Risk, and Minimal Risk.
• Examples:
• Unacceptable → Social scoring (like China’s credit system).
• High → AI in healthcare, hiring, law enforcement.
• Limited → Chatbots (must disclose they’re AI).
• Minimal → Games, spam filters.
• Key Rule: High-risk AI requires strict testing, transparency, and human
oversight.
Should facial recognition in public spaces be “high risk” or “unacceptable risk”?
28.
India – NITI Aayog’s AI Strategy
• Published as “#AIforAll” strategy.
Focus Areas: Healthcare, Agriculture, Education, Smart Cities, Mobility.
• Ethics & Governance:
• Stress on responsible AI → transparency, accountability, inclusiveness.
• Partnership with private + academic institutions.
• India doesn’t yet have a binding law like the EU, but it is moving towards AI ethics frameworks & sector-specific guidelines.
“India is focusing on AI for development (healthcare, agriculture). Should it copy EU’s strict
regulation, or have flexible rules?”
29.
Comparison: EU AI Act (Strict) vs. India NITI Aayog (Flexible)
• Approach: EU → risk-based law; India → policy + strategy
• Priority: EU → protect citizens; India → growth & inclusion
• Enforceability: EU → legally binding; India → advisory, evolving
• Example: EU → ban social scoring; India → promote AI in healthcare
After teaching, conduct a mini-debate:
• Team A argues for strict regulation like EU.
• Team B argues for flexible innovation-first approach like India.
30.
Explainable AI (XAI)
What is XAI?
• AI models are often black boxes (e.g., deep neural networks).
• XAI aims to make AI’s decisions understandable to humans.
Why XAI Matters?
• Healthcare: Doctor needs to know why AI recommends a treatment.
• Finance: Regulator asks why AI denied a loan.
• Legal/Compliance: EU AI Act requires explanation of high-risk AI.
Techniques in XAI
• Model-specific (white-box): Decision Trees, Linear Models.
• Post-hoc explanations (for black-box models; see the sketch below):
• LIME (Local Interpretable Model-agnostic Explanations)
• SHAP (Shapley Additive Explanations)
• Counterfactual Explanations (What if you had more income?)
• Input Data: Customer profile, patient record, etc.
• AI Model: Complex ML/Deep learning system.
• Output: Prediction (loan approved, disease detected).
• XAI Layer: Explains WHY — e.g., “Loan denied because low credit score + high debt.”
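To make the post-hoc idea concrete, here is a minimal sketch that trains a small black-box classifier on synthetic loan data and asks the open-source shap library for per-decision feature attributions. The feature names and data are invented for illustration, and shap's API details can differ between versions:
```python
# pip install scikit-learn shap   (illustrative; exact versions may vary)
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
import shap

rng = np.random.default_rng(0)

# Hypothetical applicant features: credit_score, income, debt_ratio (standardized).
X = rng.normal(size=(500, 3))
# Synthetic approval label loosely driven by credit score and debt.
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)   # the "black box"

# XAI layer: SHAP attributes one prediction to the input features.
explainer = shap.TreeExplainer(model)
contributions = np.ravel(explainer.shap_values(X[:1]))

for name, value in zip(["credit_score", "income", "debt_ratio"], contributions):
    print(f"{name:12s} pushed the decision by {value:+.3f}")
```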
Would you prefer a 95% accurate AI model with no explanation, or an 85% accurate one that clearly explains decisions?
31.
Students pick an AI app & identify possible ethical risks
“Think of an AI application you use daily (examples: Chatbots, Google Maps,
TikTok, Amazon Alexa, Face Unlock, AI Doctors).”
Each student (or group) must choose an AI app and list 2–3 possible ethical risks.
Example risks:
• Privacy (Does it collect too much personal data?)
• Bias (Does it treat some groups unfairly?)
• Transparency (Do we know how decisions are made?)
• Job Displacement (Could it replace human workers?)
• Ask students to share apps & risks.
• Write categories on board: Privacy | Fairness | Safety | Transparency.
• Place each example under one or more categories.
“Every AI system comes with benefits AND ethical challenges. Responsible AI
means identifying risks early and addressing them.”
32.
Group Brainstorm
“How would you design ethical AI for credit scoring?”
1) Define purpose, scope, and guardrails
• Objective: predict probability of default (PD) to support lending
decisions—not to replace them.
• Non-goals: no inference of protected traits (gender, race, religion, etc.).
• Decision policy: how scores map to approve/decline/review. Write this
down up front.
2) Governance & roles
• Accountable owner (risk/credit head), model risk (validation), privacy
(DPO), ethics (review board), engineers, legal/compliance.
• Human-in-the-loop: manual review path for borderline cases; an appeals
process for customers.
• Documentation: data sheet + model card, decision policy, monitoring plan,
incident playbook.
3) Data ethics & privacy
• Sources: bureau data, internal repayment history, KYC, income,
employment, alternative data only if consented and demonstrably predictive.
• Minimize & justify: keep only features with clear creditworthiness
rationale (e.g., utilization, DTI, repayment history). Avoid proxies for
protected attributes (ZIP alone can be proxy).
• Quality & bias checks: missingness by group, outliers, sample imbalance,
temporal leakage.
• Privacy: consent, purpose limitation, encryption, access control, deletion
SLAs; if sharing, use anonymization/pseudonymization. Consider DP
(differential privacy) for aggregates.
4) Feature engineering with fairness in mind
• Prefer interpretable, monotonic features (e.g., “more delinquencies → riskier”).
• Avoid “behavioral creep” features that are hard to justify (e.g., app scroll speed).
• Correlation scan: drop/transform features highly correlated with protected traits.
• Use reject inference carefully (to reduce bias from learning only on approved applicants) with a transparent methodology.
5) Model choices
• Start simple & explainable: scorecards / logistic regression with monotonic
constraints or interpretable trees.
• If using complex models (GBMs), enforce monotonicity, feature caps, and pair
with post-hoc explainability (e.g., SHAP) + stability tests.
• Calibrate probabilities (Platt/Isotonic) to make thresholds meaningful.
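A minimal sketch of the “start simple and explainable, then calibrate” advice: a logistic-regression scorecard wrapped in isotonic calibration so predicted probabilities can be mapped to approve/review/decline thresholds. Data, feature meanings, and thresholds are hypothetical:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical features: utilization, DTI, recent delinquencies (standardized).
X = rng.normal(size=(2000, 3))
# Synthetic default label (1 = default) loosely driven by utilization and DTI.
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable base model: its coefficients double as a global explanation.
base = LogisticRegression()
# Isotonic calibration so a predicted 0.20 behaves like a 20% default risk.
model = CalibratedClassifierCV(base, method="isotonic", cv=5).fit(X_train, y_train)

pd_scores = model.predict_proba(X_test)[:, 1]              # probability of default
decisions = np.where(pd_scores < 0.15, "approve",
                     np.where(pd_scores < 0.30, "review", "decline"))
print(dict(zip(*np.unique(decisions, return_counts=True))))
```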
6) Fairness objectives & metrics (select before training)
Choose one or two primary criteria and track the rest (a computation sketch follows below):
• Equal Opportunity: similar TPR (approval among truly good borrowers) across
groups.
• Equalized Odds: similar TPR/FPR across groups (often stricter).
• Calibration within groups: same PD means same default risk for everyone.
• Adverse impact ratio (selection rate): monitor for disparate impact.
• Reject-option classification: allow overrides for ambiguous cases.
Tip: Pick metrics aligned to your law/regulator and business context; state trade-offs explicitly.
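In practice these criteria reduce to group-wise statistics on a labelled validation set. A small sketch, assuming a binary approve/decline decision and a hypothetical protected-group column:
```python
import pandas as pd

# Hypothetical validation outcomes: y_true = truly good borrower,
# approve = model decision, plus a protected group label (illustrative only).
df = pd.DataFrame({
    "y_true":  [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "approve": [1, 0, 0, 1, 0, 1, 1, 1, 0, 0],
    "group":   ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

rows = {}
for g, sub in df.groupby("group"):
    rows[g] = {
        "selection_rate": sub["approve"].mean(),               # adverse impact input
        "tpr": sub.loc[sub["y_true"] == 1, "approve"].mean(),  # equal opportunity
    }

report = pd.DataFrame(rows).T
print(report)
# Adverse impact ratio: worst selection rate over best (the informal 80% rule).
print("Adverse impact ratio:",
      round(report["selection_rate"].min() / report["selection_rate"].max(), 2))
```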
33.
Group Brainstorm
“How would you design ethical AI for credit scoring?”
7) Fairness-aware training & thresholding
• Try pre-processing (reweighing, sampling), in-processing
(regularizers/constraints), or post-processing (group-wise
thresholds) depending on policy and legal advice.
• Validate that any group-specific thresholds are legally
permissible in your jurisdiction.
8) Validation beyond accuracy
• Technical: ROC-AUC/PR-AUC, Brier score, calibration
curves, stability across time slices.
• Robustness: stress tests (downturn scenarios), distribution shift,
missing data.
• Fairness: report chosen metrics by protected groups +
confidence intervals.
• Explainability: global (feature importance, monotone checks),
local (per-decision SHAP), counterfactuals (“What minimal
changes would flip the decision?”).
9) Decisioning & explanations to customers
• Generate adverse action notices with plain-language reasons
(“High utilization, recent delinquencies, short history”). Limit to
top N actionable factors.
• Provide appeal workflow and recourse guide (e.g., “Reduce
utilization below 30%”).
• Avoid exposing proprietary details; focus on meaningful,
truthful explanations.
Ethical credit scoring is not a single algorithm—it’s a system:
principled data, interpretable models, strong governance, clear
customer recourse, and relentless monitoring. Build all of those,
and your model can be both profitable and fair.
10) Deployment controls
• Shadow mode first; compare to incumbent policy.
• Guardrails: hard caps (e.g., never decline solely on thin-file), business rules for
minimum income/age/KYC pass.
• Rate-limiting & audit logs of every score + features used.
11) Continuous monitoring
• Performance drift: AUC, calibration, approval/decline mix.
• Fairness drift: primary metrics by group monthly; trigger thresholds → review.
• Data drift: PSI/KS on key features (a sketch follows below).
• Stability: score volatility for unchanged profiles.
• Periodic model risk re-validation and bias audits; retrain with refreshed windows.
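The data-drift item above (PSI on key features) is only a few lines of NumPy. A sketch, with hypothetical “training window” and “live” samples of one feature:
```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of one feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so live values beyond the baseline range still land in a bin.
    cuts[0] = min(cuts[0], actual.min()) - 1e-9
    cuts[-1] = max(cuts[-1], actual.max()) + 1e-9
    e_frac = np.histogram(expected, bins=cuts)[0] / len(expected)
    a_frac = np.histogram(actual, bins=cuts)[0] / len(actual)
    e_frac, a_frac = np.clip(e_frac, 1e-6, None), np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(42)
baseline_utilization = rng.normal(0.40, 0.10, 5000)   # hypothetical training window
live_utilization = rng.normal(0.50, 0.12, 5000)       # shifted live population

print("PSI:", round(population_stability_index(baseline_utilization, live_utilization), 3))
```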
12) Red-team & impact assessment
• Conduct a model red-team: look for proxy discrimination, gaming (e.g., manipulating
features), worst-case scenarios.
• Run an AI Impact Assessment: enumerate harms, affected populations, mitigations.
34.
Quiz: Which area faces the highest ethical risk from AI?
High-stakes decision-making systems — like credit scoring, hiring, facial
recognition, predictive policing, and healthcare diagnostics.
• These areas pose the greatest ethical risks because:
• They directly affect people’s livelihood, rights, and opportunities.
• Errors or biases can cause discrimination and systemic unfairness.
• Transparency, explainability, and recourse are often missing.
• Once deployed, biased decisions can amplify inequalities at scale.
AI in decision-making that impacts human rights and opportunities (e.g.,
finance, hiring, law enforcement, healthcare).
35.
AI for Society & Humanity – Positive AI applications (education, healthcare, accessibility, climate)
Education
• Personalized learning platforms adapt to each student’s pace.
• AI tutors give instant feedback and support remote learning.
Healthcare
• Early diagnosis through medical imaging (e.g., detecting cancer).
• Predictive analytics for patient risk monitoring.
• AI-powered drug discovery speeding up treatments.
Accessibility
• Speech-to-text & text-to-speech for differently-abled users.
• AI-powered vision tools for the visually impaired.
• Real-time translation breaking language barriers.
Climate & Environment
• AI models predict climate patterns and disasters.
• Smart grids optimize renewable energy use.
• AI helps in precision agriculture for sustainable farming.
“AI, when developed responsibly, can drive social good—improving education, saving lives, empowering accessibility, and
fighting climate change.”
36.
AI for Good initiatives (Brainstorm)
UN projects, AI4ALL, AI for Earth by Microsoft
United Nations – AI for Good
• UN-led platform connecting AI innovators & policymakers.
• Focus on sustainable development goals (SDGs).
• Projects in health, disaster response, poverty reduction.
AI4ALL
• Non-profit organization promoting inclusive AI education.
• Works to increase diversity in AI (women, minorities, underrepresented groups).
• Runs summer camps, mentorship, and open learning programs.
Microsoft AI for Earth
• Grants and tools for climate action.
• Supports projects in agriculture, biodiversity, water, and climate change.
• Provides cloud & AI resources to NGOs, researchers, and startups.
“AI can be a force for good—when aligned with human values, it supports education, equity, and sustainability globally.”
• Case Study Example: Wildlife Protection with AI
• AI-powered drones & camera traps monitor endangered species.
• Algorithms detect poachers in real-time and alert rangers.
• Supported by Microsoft AI for Earth → reducing illegal hunting.
37.
Video Clip suggestion
AI in Healthcare for Blind Assistance
https://youtu.be/DybczED-GKE Microsoft’s Seeing AI app (It narrates the world for visually impaired people → recognizing text, objects, people, and scenes)
Alternative Options - Google Lookout App (Android) – real-time object and text recognition, Be My Eyes (with AI) – connects blind users to volunteers or AI
assistants.
• How does computer vision + NLP enable this?
• What are the ethical implications (privacy, data bias, accessibility)?
• Could this be expanded to education or workplaces?
“AI can transform healthcare accessibility, giving independence to millions of visually impaired individuals.”
Shows how Microsoft’s Seeing AI app narrates surroundings—objects, people, text—making the world more accessible (YouTube, TIME).
• What ethical considerations arise from using AI for visual assistance? (e.g., privacy, data bias, consent)
• How can we ensure fairness and safety across diverse users?
Reflection Prompt:
• What struck you most about how Seeing AI works?
• How might this technology empower or pose risks for users?
Discussion Points:
• Data bias: Are all skin tones and lighting conditions handled equally?
• Privacy: How is visual data processed or shared?
• Inclusivity: Does the app work across languages and contexts?
38.
Case Study
AI for disaster response (Google Flood Forecasting in India)
Real Failure/Success?
Google Flood Forecasting in India
Objective: Predict river floods and send alerts via Google Search, Maps, and Android notifications.
Approach: Uses satellite data, hydrological models, and ML to simulate river behavior.
Coverage: By 2022, extended to cover 360 million people globally, including much of India and Bangladesh.
Successes
Early Warnings: Millions of alerts sent; many communities evacuated in time.
Local Partnerships: Collaboration with India’s Central Water Commission improved trust and adoption.
Scalability: The system expanded to more regions after pilot success in Patna (2018).
Limitations / Failures
Accuracy Issues: Predictions are not always precise (location-specific floods missed).
Accessibility Gap: Many rural populations didn’t receive alerts due to language barriers, lack of internet/smartphones.
Over-reliance on Tech: Some communities ignored alerts due to mistrust in digital warnings compared to local experience.
“AI saved lives in many cases, but failed in others. Should AI-based disaster tools be seen as a replacement for traditional systems, or only as a supplement?”
https://www.youtube.com/watch?v=JhHMJCUmq28
39.
Interactive Activity
Students design an AI project for social good
Form Teams: 3–4 students per group.
Choose a Problem Area (examples): Healthcare (e.g., early disease detection for rural areas), Environment (e.g., waste
segregation, flood prediction), Accessibility (e.g., AI tools for visually/hearing impaired), Education (e.g., personalized tutoring
for underprivileged students), Agriculture (e.g., crop disease detection, smart irrigation)
Project Pitch (5–7 mins per team):
Problem Statement (What issue are you solving?)
AI Approach (Which techniques/data might help?)
Target Users/Beneficiaries (Who gains from this solution?)
Potential Challenges (Bias, data availability, ethics, cost)
Expected Social Impact (How will lives improve?)
Deliverables: a poster/slide summarizing the idea and a 2-minute elevator pitch to present to the class.
Evaluation Criteria
✅ Creativity & Innovation
✅ Feasibility of AI application
✅ Consideration of ethics & bias
✅ Clarity of social impact
Example Starter Ideas (to inspire students)
AI Sign Language Translator (for hearing-impaired communication)
AI-powered Disaster Relief Bot (for resource allocation during floods/earthquakes)
AI for Mental Health (detecting stress levels from speech/text to provide help)
AI Waste Classifier (image-based system for recycling & waste management)
40.
AI in Education (adaptive learning, language translation for inclusion)
Adaptive Learning Systems
AI analyzes student performance and personalizes learning paths.
Examples:
Platforms like Knewton, Coursera, Byju’s that adapt questions/content difficulty.
Early alerts for struggling students → targeted teacher support.
Benefit: Improves learning outcomes and keeps students engaged at their pace.
Language Translation for Inclusion
AI-powered translation tools (Google Translate, Duolingo, DeepL).
Speech-to-text & real-time captioning help students with hearing impairment.
Breaking language barriers in multicultural classrooms.
Benefit: Ensures equitable access to knowledge for non-native speakers & differently-abled learners.
Real-World Impact
UNESCO & EdTech companies collaborate to use AI for universal access to education.
Microsoft’s Immersive Reader helps dyslexic students with personalized reading support.
AI chatbots act as 24×7 learning assistants.
“If you had to design an AI tool for your classroom, what challenge would you solve—personalized learning,
accessibility, or language inclusion?”
41.
AI in Accessibility (speech-to-text, brainwave interfaces for the impaired)
AI in Accessibility: Empowering People with Disabilities
1. Speech-to-Text & Text-to-Speech
Converts spoken language → written text (Google Live Transcribe, Otter.ai).
Assists hearing-impaired students with live captions during lectures.
Text-to-speech helps visually impaired learners by reading documents aloud.
Impact: Equal participation in classrooms, meetings, and public life.
2. Brainwave Interfaces (BCI – Brain-Computer Interfaces)
AI interprets neural signals to control devices (Neuralink, Emotiv).
Enables people with severe mobility impairments to:
Operate computers.
Communicate without speech or typing.
Control prosthetic limbs or wheelchairs.
Impact: Restores independence & communication for paralyzed patients.
3. Other AI Accessibility Innovations
Computer vision apps (Seeing AI, Be My Eyes) describe surroundings for blind users.
Gesture recognition helps those with speech impairments express themselves.
AI-powered assistive learning platforms for dyslexia & ADHD.
Case Example
Microsoft’s Seeing AI app narrates the world around blind users.
Facebook & Meta research on AI-driven brainwave decoding for speech restoration.
“Imagine you could design an AI tool for accessibility. Would
you focus on helping the visually impaired, hearing impaired, or
mobility impaired? What would your tool do?”
42.
Discussion – “Can AI make humans more human?”
• Amplifying Human Strengths
• AI takes over repetitive, mechanical tasks → humans focus on creativity, empathy, relationships,
ethics.
• Example: Doctors using AI diagnostics → more time for patient care & emotional support.
• Challenging Our Values
• AI forces us to define what is uniquely human: empathy, compassion, morality, intuition.
• Raises philosophical question: Are we outsourcing humanity to machines, or rediscovering it in
ourselves?
• Risks
• Over-reliance on AI could make us less human (detachment, loss of skills, reduced social interaction).
• Example: Students using AI tutors only → risk of missing peer learning & collaboration.
• Yes: AI helps us become “more human” by freeing time and enhancing empathy.
• No: AI risks making us “less human” by reducing authentic experiences.
• Maybe: Depends on how we design, govern, and use AI.
43.
Pros vs. Cons of AI for humanity
Pros (Opportunities)
Efficiency & Productivity – Automates repetitive
tasks, freeing humans for creative/strategic work.
Healthcare Breakthroughs – Early disease
detection, personalized medicine, assistive
technologies.
Education for All – Adaptive learning platforms,
translation tools, inclusion for diverse learners.
Accessibility – Speech-to-text, vision assistance,
brain-computer interfaces for impaired individuals.
Climate & Sustainability – AI in weather prediction,
smart grids, energy efficiency, disaster response.
Economic Growth – New industries, AI-driven
innovation, job creation in emerging sectors.
Cons (Challenges/Risks)
Bias & Discrimination – Algorithms can amplify
unfairness (e.g., biased hiring, facial recognition gaps).
Job Displacement – Automation may replace routine jobs,
creating economic inequality.
Privacy & Surveillance – Misuse of personal data, mass
tracking by corporations/governments.
Black-box Decisions – Lack of explainability reduces trust
in critical domains like healthcare & justice.
Over-reliance on AI – Humans may lose essential skills,
social interactions, and decision-making ability.
Ethical & Existential Risks – Misuse in warfare,
deepfakes, or uncontrolled AI leading to unintended harm.
Do the benefits of AI outweigh the risks, or does the
future depend entirely on how responsibly we use it?
44.
Which of these is NOT an AI-for-good initiative?
A) AI for Earth (Microsoft) – Using AI to tackle climate challenges
B) AI4ALL – Educating youth on ethical and inclusive AI
C) Google Flood Forecasting – AI models to predict floods in India
D) AI Weapons Program – Autonomous drones for combat
45.
Impact on Legal System: AI & Law – Areas impacted (contracts, IP, liability, evidence)
Contracts
AI can draft and review contracts faster.
Risk: biased or unfair clauses auto-generated.
Intellectual Property (IP)
Who owns AI-generated art/code? Creator, user, or AI company?
Copyright & patent laws are still unclear.
Liability
If a self-driving car crashes, who is responsible?
Manufacturer? Software developer? Owner?
Calls for new liability frameworks.
Evidence & Legal Proceedings
AI can analyze case files, predict judgments, and detect fraud.
Risk: biased AI tools may affect fairness in court rulings.
“Should AI ever be allowed to give legal advice or verdicts?” – this sparks a good classroom debate.
46.
Example – AI-generated art & copyright disputes
Ownership Questions
If an AI creates art, who owns it?
The AI tool creator (e.g., OpenAI, Stability AI)?
The user who gave the prompt?
Or is it public domain since AI isn’t human?
Real-World Cases
Getty Images vs. Stability AI (2023)
Getty sued Stability AI, claiming its AI was trained on copyrighted photos without permission.
Théâtre D’opéra Spatial (2022, Colorado)
AI-generated artwork won a digital art contest → sparked debate about fairness & originality.
Ethical Dilemmas
Artists argue AI “copies” their styles without credit or payment.
Supporters say AI democratizes creativity, letting non-artists produce art.
“Should AI art be eligible for copyright protection like human art?”
47.
Liability challenge – Who is responsible if a self-driving car crashes?
Possible Responsible Parties
Car Manufacturer (e.g., Tesla, Waymo) → if hardware fails.
AI Software Developer → if the algorithm makes a faulty decision.
Car Owner/Operator → if they misuse or override the system.
Third-Party (e.g., road authority, another driver) → if external conditions caused the crash.
Real-World Example
Uber Self-Driving Car Fatality (2018, Arizona)
First pedestrian death caused by an autonomous vehicle.
Legal question: Was Uber, the backup driver, or the AI system at fault?
Legal Challenges
Current traffic laws assume a human driver.
AI lacks legal personhood, so cannot be sued directly.
Calls for new laws & insurance models (e.g., strict liability for manufacturers).
“Should the AI itself be considered legally responsible, or should liability always fall on
humans/companies?”
48.
Global AI legal frameworks
(EU AI Act, US FTC guidelines, India draft bills)
1. European Union – AI Act (2024)
First comprehensive AI law globally.
Risk-based approach:
Unacceptable risk → banned (e.g., social scoring).
High-risk → strict requirements (healthcare, self-driving,
recruitment).
Low-risk → minimal regulation.
Focus: Transparency, accountability, human oversight.
2. United States – FTC Guidelines
No single federal AI law yet.
Federal Trade Commission (FTC) monitors AI under consumer protection
& anti-discrimination laws.
Key principles:
No deceptive/biased AI.
Companies must ensure fairness, transparency, explainability.
Sectoral rules: healthcare (FDA), finance, data privacy (state-level, e.g.,
California CCPA).
3. India – Draft AI Bills (2023–24)
India emphasizes “Responsible AI for All”.
Draft frameworks under Digital India & MeitY.
Key goals:
Promote innovation while ensuring ethics & accountability.
Address bias, deepfakes, privacy, cybersecurity.
Encourage startups & AI-in-governance.
Yet to finalize a binding law (currently policy-oriented).
“Should AI regulation prioritize innovation (like India/US) or strict control
(like EU)?”
49.
Case Study – Deepfakes & Elections
Real Failure/Success?
Deepfakes are AI-generated synthetic videos or audio that mimic real people.
They have been used to manipulate speeches, interviews, or campaign messages.
During elections worldwide (e.g., India, US, EU, African nations), some deepfakes have gone viral on social media, raising serious concerns about
misinformation.
Failures (Challenges):
Spread of misinformation: Voters misled by fake videos or audio clips.
Erosion of trust: Difficulty distinguishing real political communication from fake.
Legal & regulatory gaps: Current laws often lag behind AI technology.
Amplification via social media: Algorithms can push deepfakes to large audiences quickly.
Successes (Countermeasures):
Fact-checking & AI detection tools: Platforms like Microsoft’s Video Authenticator and Google’s deepfake detection research.
Awareness campaigns: NGOs and election commissions educating voters.
Legal responses: Countries drafting AI and digital misinformation laws.
Resilient voters: In some cases, people exposed to awareness campaigns were better able to spot and ignore deepfakes.
Are deepfakes more of a failure of technology governance or a success of human adaptability in elections?
50.
Activity – Debate (Is AI a legal entity?)
Proposition (YES – AI should be a legal entity):
AI systems act autonomously, making decisions without direct
human input.
Granting legal status (like corporations have) would help assign
responsibility.
Encourages innovation and investment by providing a clear
accountability framework.
Could allow AI to enter contracts, hold IP rights, and be sued if
harm occurs.
Precedents exist: corporations and even rivers/forests in some
countries have legal personhood.
Opposition (NO – AI should not be a legal entity):
AI lacks consciousness, intent, and moral agency –
essential for legal responsibility.
Responsibility should remain with developers, owners,
or users, not the machine.
Risk of companies hiding behind AI to escape liability.
Legal systems are designed for humans and
organizations, not tools.
Ethical danger: equating AI with humans may undermine
human dignity.
Middle Ground (Compromise Ideas):
Treat AI as a legal object, not subject, with specific liability frameworks.
Introduce “electronic personhood” in limited cases (like high-risk autonomous systems).
Require insurance models (like for self-driving cars) rather than giving AI personhood.
51.
Ethical vs. Legal gap (when laws lag behind technology)
Ethics → What should be done (morality, fairness, human rights).
Law → What must be done (enforceable rules by governments).
Technology often advances faster than the creation of laws → leaving a gap.
Examples of Ethical–Legal Gaps
• Facial Recognition
• Ethical concern: Privacy invasion, surveillance state.
• Legal status: Few countries regulate it; others allow unchecked use.
• AI Bias in Hiring
• Ethical concern: Discrimination against gender, caste, or race.
• Legal status: Anti-discrimination laws exist but don’t explicitly cover AI systems.
• Deepfakes
• Ethical concern: Misinformation, election manipulation.
• Legal status: Very few laws directly ban or regulate AI-generated media.
“A company uses AI to predict employees likely to resign. It fires them preemptively. Is it ethical? Is it legal?”
Discussion prompts:
Would current labor laws cover this?
Should laws be updated?
Who should take responsibility—government, companies, or society?
52.
Interactive Poll – “Should AI be given personhood?”
Personhood = legal recognition of rights and responsibilities (traditionally for humans, and sometimes corporations).
AI systems are becoming more autonomous → Should they get some form of personhood?
Yes → AI should have limited personhood (like corporations).
No → AI is just a tool, not a moral agent.
Maybe → Depends on the type of AI, its autonomy, and impact.
For AI Personhood
Responsibility in accidents (e.g., self-driving cars).
AI acting independently beyond human control.
Encourages accountability and governance.
Against AI Personhood
AI has no consciousness or emotions.
Shifts liability away from developers/owners.
Could create legal loopholes (like “AI scapegoats”).
Middle Ground
Assign “electronic personhood” only for high-risk autonomous AI.
If not personhood, then what accountability model works best?
53.
Quiz – Match country with AI law/regulation
Countries
1. European Union (EU)
2. United States (USA)
3. China
4. India
5. Canada
AI Laws / Regulations
A. Released draft AI Ethics Guidelines (2021) through NITI Aayog
B. AI Act (first comprehensive AI law, risk-based framework)
C. Algorithmic Impact Assessment (AIA) for government AI systems
D. Issued Generative AI Guidelines (2023) and strict data governance rules
E. AI Bill of Rights (2022) focusing on fairness, privacy & accountability
1 → B
2 → E
3 → D
4 → A
5 → C
54.
Impact on Environment: Environmental cost of AI (energy use in training large models)
High Energy Demand
• Training state-of-the-art AI models (like GPT or BERT) can require thousands of powerful GPUs running for weeks or months.
• This consumes megawatt-hours (MWh) of electricity—comparable to the yearly energy use of hundreds of homes.
Carbon Footprint
• If the electricity comes from fossil fuels, training one large model can emit hundreds of tons of CO₂.
• Example: Training a single large NLP model once was estimated to emit the same CO₂ as five cars over their entire lifetimes.
Water Usage
• Data centers also use vast amounts of water for cooling. A single AI query may consume as much water as filling a small bottle,
multiplied by billions of queries.
Inefficiency
• Models are often trained multiple times for research, fine-tuning, or commercial competition, further increasing the footprint.
Balancing Progress with Sustainability
• Shift towards green AI (developing energy-efficient models).
• Use renewable energy–powered data centers.
• Encourage model sharing and reuse instead of retraining from scratch.
• Invest in efficient hardware (TPUs, neuromorphic chips) and optimized algorithms.
“Should AI companies be required to disclose the carbon footprint of their models, just like food companies list calories?”
55.
Example – Carbon footprint of GPT-3 training
Here’s a compact, real-world example people often cite for GPT-3 (175B) training:
Estimated training energy: ~1,287 MWh.
Using a typical 2020 U.S. grid intensity (~0.39 tCO₂e per MWh), that’s roughly ~500 tCO₂e for one end-to-end training run.
Back-of-the-envelope: 1,287 MWh × 0.39 tCO₂e/MWh ≈ 502 tCO₂e.
These are estimates; the actual footprint depends on hardware, training efficiency, location/energy mix, and how many runs/ablation experiments were done.
A widely cited technical analysis from Google/academia discusses how to estimate and reduce ML training emissions (e.g., efficient chips, datacenter PUE, clean energy siting, algorithmic
efficiency).
Public write-ups summarizing GPT-3’s training footprint report numbers in this ballpark (≈1.3 GWh energy; ≈0.5 kt CO₂e), derived from the above methodology.
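The arithmetic above is worth encoding, because it makes the sensitivity to the grid’s carbon intensity explicit. A sketch using the same assumed figures (all inputs are estimates, not measurements):
```python
# Back-of-the-envelope training-emissions estimate (all inputs are estimates).
TRAINING_ENERGY_MWH = 1287            # reported estimate for one GPT-3 training run

grid_intensity_tco2e_per_mwh = {
    "us_grid_2020_average": 0.39,     # figure used above
    "coal_heavy_grid": 0.80,          # illustrative assumption
    "low_carbon_grid": 0.05,          # illustrative assumption
}

for grid, intensity in grid_intensity_tco2e_per_mwh.items():
    print(f"{grid:22s}: ~{TRAINING_ENERGY_MWH * intensity:6.0f} tCO2e")
# The US-average case gives ~502 tCO2e, matching the ~500 tCO2e estimate above.
```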
What moves the number most
Energy source (renewables vs. fossil-heavy grid)
Hardware & datacenter efficiency (GPU/TPU choice, PUE)
Training recipe (batching, parallelism, early-stopping, parameter-efficient tricks)
How many trials (hyperparameter sweeps can dominate total emissions)
How to lower it
Train in regions/datacenters with low-carbon power and good PUE
Use more efficient accelerators/algorithms and mixed-precision
Reuse pre-trained models (fine-tuning/LoRA) instead of full retrains
Report energy/emissions so results can be audited and improved over time.
56.
Positive AI for sustainability (climate modeling, smart grids, precision farming)
1. Climate Modeling & Prediction
AI enhances climate models by processing satellite data, weather records, and ocean data to predict extreme events like cyclones, floods, or heatwaves with higher accuracy.
Example: AI-driven climate simulations help policymakers design better mitigation and adaptation strategies.
2. Smart Grids & Energy Efficiency
AI optimizes electricity demand and supply, balancing renewable sources (solar, wind) with conventional power.
Smart grid algorithms predict peak usage, reduce wastage, and stabilize grids against outages.
Example: Google DeepMind reduced data center cooling energy use by 40% through AI optimization.
3. Precision Farming & Sustainable Agriculture
AI uses drones, sensors, and computer vision to monitor crop health, soil quality, and water needs.
Farmers apply fertilizers/pesticides only where needed, reducing chemical runoff and environmental harm.
Predictive analytics improves yield forecasting and reduces food waste.
4. Biodiversity & Conservation
AI-enabled acoustic monitoring detects endangered species in forests/oceans.
Satellite imagery + AI identify deforestation patterns and illegal mining/poaching in real time.
5. Circular Economy & Waste Management
AI helps in automated waste sorting, recycling, and detecting inefficiencies in manufacturing.
Optimizes supply chains to reduce carbon footprint.
While AI has an environmental cost (energy-intensive training), its applications in sustainability far outweigh the negatives if developed and deployed responsibly.
57.
AI in waste management & recycling optimization
1. Smart Waste Collection
AI predicts waste generation patterns using IoT sensors and historical data.
Optimizes collection routes, reducing fuel consumption and emissions.
2. Automated Waste Sorting
Computer vision + robotics sort plastics, metals, glass, and paper from mixed waste streams.
Increases recycling efficiency and reduces contamination in recyclables.
Example: AMP Robotics uses AI-powered robots to recognize and sort materials with 99% accuracy.
3. Recycling Optimization
AI identifies market demand for recyclable materials, improving recycling profitability.
Tracks the life cycle of materials (digital product passports) to enable circular economy models.
4. Waste-to-Energy Conversion
AI models optimize biogas plants and incinerators for maximum energy recovery from waste.
Helps reduce landfill dependency.
5. Policy & Citizen Engagement
AI-driven apps guide citizens on proper disposal (e.g., compost vs recyclable).
City planners use AI dashboards to monitor real-time waste flows and plan interventions.
Reduces landfill waste and greenhouse gas emissions.
Supports circular economy and sustainable urban living.
58.
Students suggest AI ideas for reducing campus energy use
Step 1 – Introduction Briefly explain how AI helps in energy optimization
(smart grids, demand forecasting, predictive maintenance).
Step 2 – Group Brainstorm Divide students into small groups. Ask them to
propose one AI-powered solution to cut energy waste on campus.
Sample idea prompts for students:
How can AI manage classroom lights and AC efficiently?
Can AI predict peak energy demand in labs or hostels?
How might AI help with solar energy optimization on campus rooftops?
Can AI monitor and reduce computer lab idle time?
Step 3 – Presentation Each group presents their idea in 2 minutes. Encourage
creativity but also feasibility.
Step 4 – Class Vote “Which AI idea would save the most energy if implemented
on our campus?”
59.
Sample Student Ideas
1. Smart Lighting System – AI + motion sensors to automatically turn off lights when no one
is in a classroom.
2. AI-driven HVAC Control – Adjust AC temperature based on class schedule and weather
forecast.
3. Energy Dashboard – AI predicts energy peaks in hostels and suggests load balancing.
4. AI for Solar Optimization – Forecast energy storage and usage from campus solar panels.
5. Computer Lab Monitoring – AI shuts down idle systems after a set time.
Outcome: Students learn how AI can directly improve sustainability in their own
environment.
60.
“Does AI do more harm or good for the environment?”
Harm side:
Training large AI models (like GPT-3) consumes massive energy → large carbon footprint.
Data centers require cooling systems, adding to electricity usage.
Short hardware lifecycles → more e-waste.
Good side:
AI supports climate modeling & predictions (e.g., forecasting floods, wildfires).
Enables smart grids & renewable energy optimization.
Helps in precision farming, reducing water/fertilizer waste.
Improves waste management & recycling efficiency.
Divide the class into 2 groups – one argues AI is harmful, the other argues AI is beneficial.
“Do you believe AI will ultimately do more harm or more good for the environment?”
Expected Student Outcomes:
Understand the trade-offs of AI development.
Critically analyze real-world examples (GPT-3’s carbon footprint vs. Google Flood Forecasting in India).
Learn to balance ethical, technical, and environmental perspectives.
61.
Case Study – AI in monitoring deforestation
Real Failure/Success?
The Scenario
Deforestation is a major driver of climate change and biodiversity loss. Governments and NGOs need faster, more accurate tools than traditional satellite imagery + human analysis to detect
illegal logging and forest destruction.
The AI Solution
Organizations like Global Forest Watch (WRI), Rainforest Connection, and Google Earth Engine use AI + satellite imagery + acoustic sensors.
AI analyzes real-time data from satellites and IoT devices (solar-powered forest sensors).
ML algorithms detect chainsaw sounds, unusual forest clearing, and compare vegetation cover across time.
Alerts are sent to rangers and local authorities within hours, instead of months.
Success:
Global Forest Watch reports near real-time monitoring in countries like Brazil and Indonesia.
Rainforest Connection’s AI-powered acoustic monitoring system helped prevent illegal logging in parts of the Amazon.
Significant reduction in response time for authorities.
Limitations/Failures:
False positives from AI (e.g., detecting farming as logging).
Limited coverage in cloud-heavy tropical regions (satellite blockage).
Dependence on internet/data infrastructure in remote areas.
Verdict: Mostly a Success
AI has improved deforestation monitoring,
making enforcement faster and more
effective, though challenges remain in
scalability, accuracy, and infrastructure
dependency.
“Would you trust AI alerts alone to act
against deforestation, or should humans
always verify?”
62.
“Green AI practices” brainstorm
What is Green AI?
AI approaches that reduce energy consumption, carbon footprint, and resource waste while maintaining performance.
Best Practices for Green AI
• Efficient Model Design
• Use smaller, optimized models (e.g., DistilBERT, MobileNet) instead of massive architectures.
• Apply model pruning & quantization to reduce parameters.
• Energy-Efficient Training
• Train models with fewer epochs using early stopping (see the sketch after this list).
• Use transfer learning instead of training from scratch.
• Optimize batch sizes and hyperparameters.
• Hardware & Infrastructure
• Run on energy-efficient GPUs/TPUs.
• Prefer data centers powered by renewable energy (Google, Microsoft, Amazon Green initiatives).
• Data Efficiency
• Use active learning to select the most informative samples.
• Reduce redundancy in datasets.
• Lifecycle Considerations
• Encourage model reuse (sharing pre-trained models).
• Regularly audit carbon emissions of AI projects.
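One of the energy-efficient training practices above (early stopping) can be sketched in a framework-agnostic way; `train_one_epoch` and `evaluate` are placeholder callables standing in for whatever training loop is actually used:
```python
# Minimal early-stopping loop: stop spending compute once validation loss
# stops improving (a sketch; train_one_epoch/evaluate are placeholders).
def train_with_early_stopping(train_one_epoch, evaluate, max_epochs=100, patience=3):
    best_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = evaluate()
        if val_loss < best_loss - 1e-4:       # meaningful improvement
            best_loss, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Stopping at epoch {epoch}: no improvement for {patience} epochs")
            break
    return best_loss

# Toy usage with fake validation losses that plateau after a few epochs.
fake_losses = iter([1.0, 0.6, 0.45, 0.44, 0.44, 0.44, 0.44])
print(train_with_early_stopping(lambda: None, lambda: next(fake_losses)))
```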
Impact
Reduces carbon footprint of large-scale AI.
Supports sustainable AI innovation.
Aligns with corporate social responsibility (CSR) and UN Sustainable Development Goals (SDGs).
“How can we apply Green AI practices if our college trains AI models for research?”
63.
Wrap-up: Ethics, Society, Law & Environment → Holistic AI Responsibility
1. Ethics – Doing the right thing
Fairness → Avoid bias in datasets & algorithms
Accountability → Clear responsibility for AI decisions
Transparency → Explainable AI (XAI) for trust
Privacy → Protect personal data
2. Society – Human-centered AI
Accessibility → AI for differently-abled (speech-to-text, brainwave interfaces)
Inclusion → Language translation, adaptive learning
Employment → Reskilling workforce, balancing automation & jobs
Trust → Building confidence in AI use (healthcare, finance, governance)
3. Law – Regulation & governance
Liability → Who is responsible in AI errors (e.g., self-driving cars)?
Intellectual Property → AI-generated art & copyright disputes
Regulations → EU AI Act, US FTC guidelines, India’s NITI Aayog
strategy
Legal Personhood Debate → Should AI have rights?
4. Environment – Sustainability in AI
Cost → High energy use in large AI models (e.g., GPT-3 training
footprint)
Green AI → Efficient models, renewable-powered data centers,
recycling hardware
Positive AI → Climate modeling, smart grids, precision farming, waste
management
“AI for Good” → UN projects, Microsoft AI for Earth
Holistic AI Responsibility =
Ethical principles + Social impact + Legal accountability + Environmental sustainability
👉 Only when all four dimensions are addressed can AI truly serve humanity & the planet.
End session with a mind-map activity – ask students to place any AI app (e.g., ChatGPT, AI in healthcare, self-driving car) in
the center and map out its Ethical, Social, Legal, and Environmental impacts.