AI Project Cycle
The AI Project Cycle provides us with an appropriate framework which can lead us towards the goal of our AI project. It is the cyclical process followed to complete an AI project. The AI Project Cycle mainly has 6 stages.
Ethical Frameworks for AI
Frameworks
Frameworks are sets of steps that help us in solving problems. They provide a step-by-step guide for solving problems in an organized manner.
Ethical Frameworks
Ethics are a set of values or morals which help us separate right from wrong.
Frameworks provide step-by-step guidance on solving problems.
"Ethical frameworks help guide decision-making to prevent unintended harm and ensure responsible choices."
Why do we need Ethical Frameworks for AI?
Ethical frameworks guide AI in making morally responsible decisions. By
incorporating these frameworks into AI development, we can prevent unintended
consequences before they occur.
https://www.my-goodness.net/
Ethical frameworks for AI can be categorized into two main types:
Sector-based and Value-based frameworks.
Sector-based Frameworks: These are frameworks tailored to specific sectors or industries. In the
context of AI, one common sector-based framework is Bioethics, which focuses on ethical
considerations in healthcare.
This framework helps guide the ethical use of AI in areas like healthcare, finance, education, and law
enforcement, where the consequences of AI’s actions can be very different depending on the industry.
Examples of Sector-Based Ethical Frameworks:
Healthcare (Medical AI)
Ethical Concerns: Privacy, informed consent, and fairness.
Example: An AI that analyzes medical images must be accurate and free from bias. It should also
ensure patient confidentiality.
Finance (AI in Banking/Loans)
Ethical Concerns: Fairness, transparency, and accountability.
Example: AI used to approve loans must be transparent in its decision-making process and ensure fairness across all demographics (a small transparency sketch follows this list).
Education (AI in Learning Tools)
Ethical Concerns: Access, equity, and impact on learning outcomes.
Example: AI-driven educational tools should be accessible to all students, regardless of their background, and should
promote inclusive learning.
Law Enforcement (AI in Surveillance)
Ethical Concerns: Privacy, bias, and accountability.
Example: AI used in surveillance (like facial recognition) must protect citizens' privacy, not discriminate, and have clear
accountability for misuse.
Autonomous Vehicles (AI in Transportation)
Ethical Concerns: Safety, transparency, and accountability.
Example: Self-driving cars need to be designed in a way that prioritizes safety, minimizes harm, and is transparent
about decision-making processes.
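To make "transparent in its decision-making process" concrete, here is a minimal, hypothetical Python sketch of a loan-decision program that returns its reasons along with its answer. The rules, thresholds, and field names are invented for illustration and do not come from any real banking system.

```python
# Hypothetical sketch: a loan decision that can explain itself.
# The thresholds and field names below are made up for illustration.

def decide_loan(applicant):
    """Return (approved, reasons) so the decision can be explained."""
    reasons = []
    approved = True

    if applicant["credit_score"] < 650:
        approved = False
        reasons.append("credit score below 650")
    if applicant["monthly_debt"] > 0.4 * applicant["monthly_income"]:
        approved = False
        reasons.append("debt is more than 40% of monthly income")

    if approved:
        reasons.append("met all criteria")
    return approved, reasons


applicant = {"credit_score": 620, "monthly_income": 3000, "monthly_debt": 800}
approved, reasons = decide_loan(applicant)
print("Approved:", approved)
print("Reasons:", ", ".join(reasons))
```

Notice that no protected attribute (race, gender, background) appears in the rules, and every decision comes with reasons a customer could read.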
Value-based Frameworks: Value-based frameworks focus on fundamental ethical principles and values guiding decision-making. They can be further classified into three categories:
i. Rights-based: Prioritizes the protection of human rights and dignity, valuing human life over other
considerations.
This framework is based on the idea that every person has fundamental rights — like the right to privacy, freedom,
dignity, and equality — and those rights must always be respected, no matter the outcome.
Examples of Rights AI Should Respect:
Right to Privacy
– AI should not collect or share personal data without permission.
Example: A health app powered by AI must ask before using or sharing medical info (a small consent-check sketch follows this list).
Right to Fair Treatment
– AI should not discriminate based on race, gender, or background.
Example: A hiring AI must treat all candidates equally, not favor certain groups.
Right to Freedom of Choice
– People should have the option to say yes or no to AI systems.
Example: A patient should be able to refuse an AI-recommended treatment.
Right to Safety and Security
– AI should not put people in danger.
Example: A self-driving car must be tested thoroughly to ensure it won’t cause harm.
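As a concrete illustration of "ask before using or sharing," here is a minimal, hypothetical Python sketch of a consent check: the program refuses to share medical information unless the user has explicitly agreed. The function and field names are invented for illustration.

```python
# Hypothetical sketch: respect the right to privacy by checking consent
# before releasing any medical information.

def share_medical_data(record, user_gave_consent):
    """Only release the record if the user explicitly agreed."""
    if not user_gave_consent:
        return None  # no consent, so nothing is shared
    return record


record = {"patient": "A. Kumar", "diagnosis": "asthma"}

print(share_medical_data(record, user_gave_consent=False))  # prints None
print(share_medical_data(record, user_gave_consent=True))   # prints the record
```

The same pattern also supports the right to freedom of choice: saying no must always be a real option for the user.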
ii. Utility-based: Evaluates actions based on the principle of maximizing utility or overall good, aiming to achieve outcomes that offer the greatest benefit and minimize harm.
This framework is all about outcomes. It asks: "Which action will create the most good (or least harm) for the most people?" It's focused on results, not rules or intentions (a small calculation sketch follows the examples below).
Key Principles:
Maximize Good Outcomes
– AI should help as many people as possible.
Example: An AI healthcare system that prioritizes treatments that save the most lives.
Minimize Harm
– If someone might be harmed, the AI should choose the path with the least harm overall.
Example: A self-driving car deciding between two dangerous outcomes — it picks the one that causes less harm.
Cost vs. Benefit
– AI decisions should consider what's worth it in terms of results.
Example: An AI that helps reduce food waste in a city might be considered ethical if it feeds more people and cuts
pollution.
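Here is a minimal, hypothetical Python sketch of this kind of utility-based comparison: each possible action is scored by the good it does minus the harm it causes, and the action with the highest net score is chosen. The actions and numbers are made up purely for illustration.

```python
# Hypothetical sketch: pick the action with the greatest overall good.
# The options and numbers below are invented for illustration only.

actions = {
    "treatment plan A": {"people_helped": 80, "people_harmed": 10},
    "treatment plan B": {"people_helped": 60, "people_harmed": 2},
}

def net_utility(outcome):
    """Very rough measure of overall good: helped minus harmed."""
    return outcome["people_helped"] - outcome["people_harmed"]

for name, outcome in actions.items():
    print(name, "-> net utility:", net_utility(outcome))

best = max(actions, key=lambda name: net_utility(actions[name]))
print("Utility-based choice:", best)
```

Real decisions are rarely this simple (harms and benefits are hard to measure and compare), but the sketch shows the core idea: judge actions by their outcomes.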
iii. Virtue-based: This framework focuses on the character and intentions of the individuals involved in decision-making. It asks whether the actions of individuals or organizations align with virtuous principles such as honesty, compassion, and integrity.
Why is it important?
Even the smartest AI can do harm if it’s built without good intentions.
Virtue ethics reminds us: 💬 "AI should be built by good people trying to do good things."
A virtue-based ethical framework focuses on the character and intentions behind actions — not just
the rules or the results.
It asks:
“Is this action being done with good character traits like honesty, kindness, and wisdom?”
Core Virtues in AI Ethics:
Honesty – AI systems should be transparent and not deceive users.
Example: A chatbot shouldn’t pretend to be a human if it's not.
Compassion – AI should help and not harm people.
Example: Healthcare AI should aim to improve well-being, not just cut costs.
Responsibility – Developers must take responsibility for their AI’s actions.
Example: If an AI causes harm, creators shouldn’t blame “the machine.”
Justice – Fairness and respect for all users.
Example: AI used in courts should treat everyone equally.
Wisdom – Making thoughtful, well-informed choices.
Example: Thinking about long-term consequences of using AI in schools or law enforcement.
Bioethics, the sector-based framework used in healthcare, is built on four key principles:
1. Respect for Autonomy
Let people make their own choices about their health and life.
Example: If a patient doesn't want a treatment, the doctor should respect that.
Respect the patient's choices and informed consent.
➡️ In AI: Patients should know when and how AI is being used in their treatment and have the right to accept or reject it.
Example: If an AI suggests surgery, the patient must be fully informed and allowed to choose.
2. Do Not Harm (Non-Maleficence)
Never do anything that could hurt someone.
Example: A doctor should avoid treatments that might cause more harm than good.
Do no harm.
➡️ In AI: AI systems must be tested to avoid errors, misdiagnoses, or harmful outcomes.
Example: A buggy AI system that gives the wrong medication dose could hurt a patient; this violates non-maleficence.
3. Ensure Maximum Benefit (Beneficence)
Always try to do what is best for the person's health and well-being.
Example: Give the best possible care to help the patient recover.
Do good; act in the patient's best interest.
➡️ In AI: AI must be used to improve health outcomes, like making faster diagnoses, personalizing treatments, or reducing errors.
Example: Using AI to detect cancer early and save lives.
4. Give Justice (Fairness)
Treat everyone equally and fairly.
Example: All patients should get the same quality of care, no matter who they are.
Ensure fairness and equality.
➡️ In AI: AI systems must not discriminate based on race, gender, age, or location. Access to AI in healthcare should be equally available to all.
Example: AI diagnosis tools should work equally well for all types of people, not just those in wealthy countries.
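To show what "work equally well for all types of people" can mean in practice, here is a minimal, hypothetical Python sketch of a fairness check that compares how often a diagnosis tool is correct for each group of patients. The data is made up; a real audit would use proper datasets and statistical testing.

```python
# Hypothetical sketch: compare a diagnosis tool's accuracy across groups.
# The results list below is invented for illustration only.

results = [
    # (patient group, was the AI's prediction correct?)
    ("group_1", True), ("group_1", True), ("group_1", False), ("group_1", True),
    ("group_2", True), ("group_2", False), ("group_2", False), ("group_2", True),
]

counts = {}
for group, correct in results:
    total, right = counts.get(group, (0, 0))
    counts[group] = (total + 1, right + (1 if correct else 0))

for group, (total, right) in counts.items():
    print(f"{group}: {right}/{total} correct ({right / total:.0%})")

# A large gap between groups is a warning sign that the tool does not
# serve everyone equally well.
```

A check like this supports the justice principle: if one group gets noticeably worse results, the system should be improved before it is relied on for everyone.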