
|| Jai Sri Gurudev ||

Sri Adichunchanagiri Shikshana Trust (R)


SJB INSTITUTE OF TECHNOLOGY
An Autonomous Institution Affiliated to VTU, Belagavi

Study Material

Course Name: Artificial Intelligence


Course Code: 23AII403

Prepared By:
Faculty Name: Mrs. Shashikala AB
Designation: Assistant Professor
Semester: IV

Department of Artificial Intelligence and Machine Learning

Academic Year: EVEN Semester, 2024-25

Dept. of AIML, SJBIT, Bengaluru


Opportunities and Risks of AI for Society

AI's impact on society is no longer in question; the debate now centers on
whether that impact will be positive or negative, and for whom, where, when,
and how it will be felt. The chapter introduces four chief opportunities AI
offers society, each addressing fundamental aspects of human dignity and
flourishing, along with their corresponding risks and opportunity costs:

1. Who We Can Become: Enabling Human Self-Realisation, Without Devaluing
Human Abilities
o Opportunity: AI can enable self-realization by automating mundane
tasks, freeing up human time for cultural, intellectual, social, and more
rewarding work, potentially leading to "more human life spent more
intelligently".
o Risk: The rapid devaluation of old skills can quickly disrupt the job
market and the nature of employment, with adverse effects on individual
identity, self-esteem, and social roles. At the societal level, "deskilling"
in sensitive domains such as healthcare diagnosis could create dangerous
vulnerabilities.
o Solution Idea: Fostering AI in support of new skills while anticipating
and mitigating its impact on old ones requires close study and
potentially radical ideas like "universal basic income" to ensure a fair
transition.
2. What We Can Do: Enhancing Human Agency, Without Removing
Human Responsibility
o Opportunity: AI provides a growing reservoir of "smart agency" that
can hugely enhance human agency, allowing people to "do more,
better, and faster". This is seen as "Augmented Intelligence".
o Risk: Responsibility may be absent, whether because of an inadequate
socio-political framework or because of a "black box" mentality that treats
AI decision-making systems as beyond human understanding and control.
This applies to high-profile cases (e.g., autonomous vehicle deaths)
and commonplace uses (e.g., parole decisions).
o Insight: The relationship between human agency and delegated agency
to autonomous systems is not zero-sum; AI can improve and multiply
human agency possibilities through "facilitating frameworks" designed
for morally good outcomes.
3. What We Can Achieve: Increasing Societal Capabilities, Without
Reducing Human Control
o Opportunity: AI offers countless possibilities for improving societal
capabilities, from preventing diseases to optimizing logistics, enabling
"better coordination, and hence more ambitious goals".
o Risk: Over-reliance on AI may lead to delegating important tasks and
decisions that should remain subject to human supervision and choice,
potentially reducing human ability to monitor these systems or redress
errors ("post loop").
o Challenge: Striking a balance between pursuing ambitious
opportunities and ensuring humans remain in control of major
developments and their effects.
4. How We Can Interact: Fostering Societal Cohesion, Without Eroding
Human Self-Determination
o Opportunity: AI can support societal cohesion, for example, by
underpinning algorithmic systems that cultivate socially preferable
behaviors, through "self-nudging" that preserves autonomy.
o Risk: AI systems may erode human self-determination, whether by inducing
unplanned and unwelcome changes in human behavior to accommodate automation
routines, or through predictive power and relentless nudging that, even if
unintentional, undermines human dignity and flourishing.

IV. The Dual Advantage of an Ethical Approach to AI

 Adopting an ethical approach to AI provides a "dual advantage" for
organizations.
1. Leveraging Social Value: Ethics enables organizations to identify and
capitalize on new opportunities that are socially acceptable or
preferable.
2. Preventing Costly Mistakes: Ethics allows organizations to anticipate
and avoid or minimize expensive errors or socially unacceptable
courses of action, even if legally permissible. This also reduces
opportunity costs from choices unmade due to fear of mistakes.
 This dual advantage relies on public trust and clear responsibilities. Public
acceptance of AI will occur only if benefits are seen as meaningful and risks
as preventable, minimizable, or redressable. This requires public engagement,
openness about AI operation, and accessible mechanisms for regulation and
redress.

V. A Unified Framework of Principles for AI in Society

AI4People, rather than generating new principles, synthesizes existing ones
from reputable, multi-stakeholder organizations. A comparative analysis of six
high-profile initiatives (Asilomar AI Principles, Montreal Declaration, IEEE,
EGE, UK House of Lords AI Committee, Partnership on AI) found a high degree of
overlap among their 47 principles. The chapter argues for an overarching
framework of five core principles for ethical AI, adapting four from bioethics
and adding a new one:

1. Beneficence: AI should be beneficial to, and respectful of, people and the
natural world. This includes promoting well-being, preserving dignity, and
sustaining the planet, as expressed in various documents through terms like
"well-being," "common good," "human dignity," and "sustainability".
2. Non-maleficence: AI should be robust and secure, preventing harm. This
principle, distinct from beneficence, cautions against negative consequences of
AI misuse or overuse, with particular concern for infringements on personal
privacy. Other concerns include avoiding an AI arms race, recursive self-
improvement of AI, and ensuring AI operates "within secure constraints".
There's an underlying question of whether the harm is from developers or the
technology itself.
3. Autonomy: AI should respect human values and human agency, allowing
users to make informed autonomous decisions. In the context of AI, affirming
autonomy means balancing the decision-making power humans retain with
what is delegated to machines. The autonomy of machines should be restricted
and made reversible to protect human autonomy.
4. Justice: AI should be fair, addressing discrimination and ensuring
equitable distribution of benefits and risks. Interpretations of justice
across the documents include using AI to correct past wrongs (e.g.,
eliminating discrimination), ensuring shared benefits, and preventing new
harms to existing social structures. There is also ambiguity about AI's own
role under this principle: is it the patient, the doctor, or both?
5. Explicability: AI should be explainable, accountable, and understandable.
This principle, identified as a crucial new addition, incorporates both:
o Intelligibility: the epistemological sense of "how does it work?".
o Accountability: the ethical sense of "who is responsible for the way it
works?".
Explicability enables the other four principles: understanding the good or
harm AI does (beneficence/non-maleficence), informing decisions about
delegating agency (autonomy), and identifying whom to hold accountable for
negative outcomes (justice). A minimal code sketch of the intelligibility
sense follows this list.
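
To make the intelligibility sense concrete, the sketch below probes a toy
classifier with permutation feature importance, a common model-agnostic,
post hoc explanation technique. The loan-style data, the feature names, and
the scikit-learn model are illustrative assumptions for this study material,
not part of the AI4People framework itself.

# Illustrative only: a model-agnostic permutation test that asks
# "how much does each input feature drive this model's decisions?"
# Data, feature names, and model are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy loan-approval data: income and debt matter; postcode should not.
X = rng.normal(size=(500, 3))  # columns: income, debt, postcode
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

# Shuffle one feature at a time; the accuracy drop estimates how much
# the model relies on that feature ("how does it work?").
for col, name in enumerate(["income", "debt", "postcode"]):
    X_perm = X.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])
    print(f"{name:>8}: importance ~ {baseline - model.score(X_perm, y):.3f}")

Because the check is model-agnostic, the same loop works for any classifier
that exposes a score method; the accountability sense, by contrast, is
organizational rather than technical and has no direct code analogue.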

VI. Recommendations for a Good AI Society (Action Points)

The ethical principles should be embedded in the default practices of AI,
particularly in ways that decrease inequality, further social empowerment,
respect human autonomy, and increase shared benefits. Explicability is
highlighted as crucial for building public trust and understanding. A
multi-stakeholder approach is essential for ensuring AI serves societal
needs, fostering collaboration between developers, users, and rule-makers.
The recommendations are designed to be dynamic and require continuous effort.

The 20 recommendations fall into four categories:

 Assessment:
o Assess the capacity of existing institutions (e.g., civil courts) to redress
AI-caused harms, evaluating foundations for liability from the design
stage.
o Assess which tasks and decision-making functionalities should not be
delegated to AI systems, using participatory mechanisms to align with
societal values and public opinion.
o Assess whether current regulations are sufficiently grounded in ethics
to keep pace with technological developments.
 Development:
o Develop a framework to enhance the explicability of AI systems
making socially significant decisions, allowing individuals to obtain
factual, direct, and clear explanations, especially for unwanted
consequences. This may require industry-specific frameworks.
o Develop a redress process or mechanism to remedy or compensate
for AI-caused wrongs or grievances, involving clear accountability
allocation to humans and/or organizations. Examples include an "AI
ombudsperson," a complaint registration process, and liability
insurance mechanisms.
o Develop agreed-upon metrics for AI trustworthiness to enable user-
driven benchmarking and foster competitiveness around safer, more
socially beneficial AI. This could lead to a broader system of
certification.
 Incentivisation:
o Financially incentivize the development and use of socially preferable
and environmentally friendly AI technologies at the EU level.
o Financially incentivize sustained, increased, and coherent European
research in AI for social good.
o Financially incentivize cross-disciplinary and cross-sectoral
cooperation and debate on technology, social issues, legal studies, and
ethics.
 Support:
o Support the capacity of corporate boards of directors to take
responsibility for the ethical implications of AI technologies,
potentially through improved training or the development of ethics
committees/review boards.
o Support the creation of educational curricula and public awareness
activities around AI's societal, legal, and ethical impact. This includes
curricula for schools, qualification programs in businesses, and
inclusion of ethics and human rights in AI/data science degrees.

VII. Conclusion

The framework presented can serve as an architecture for developing laws,
rules, technical standards, and best practices for ethical AI in specific
sectors and jurisdictions. It plays both an enabling role (e.g., for the UN
Sustainable Development Goals) and a constraining one (e.g., regulating AI in
cyberwarfare). The framework was adopted by AI4People and has influenced
guidelines from the European Commission's High-Level Expert Group and the
OECD, reaching 42 countries. Charting a socially preferable course for AI
will depend on well-crafted regulation, common standards, and the consistent
use of this ethical framework.

An Initial Review of Publicly Available AI Ethics Tools, Methods and Research
to Translate Principles into Practices

I. Purpose and Core Argument

 This chapter aims to contribute to closing the gap between the "what" of AI
ethics (principles) and the "how" (practices), specifically focusing on
Machine Learning (ML).
 It highlights that while awareness of potential ethical issues in AI is increasing
rapidly, the AI community's ability to actually mitigate associated risks is
still in its infancy.
 The goal is to help practically-minded developers apply ethics at each stage of
the ML development pipeline and to identify areas requiring further research.

II. Methodology

The research involved two main tasks to create an "applied ethical AI
typology":

1. Typology Design:
o It combines ethical principles with the stages of algorithmic
development. This encourages ML developers to regularly consider
ethical principles during decision-making.
o High-level ethical principles (beneficence, non-maleficence, autonomy,
justice, and explicability) were translated into tangible system requirements
to create a "minimum viable ethical (ML) product" (MVEP); a sketch of one
such translation follows this list.
2. Identification of Tools and Methods:
o A literature review was conducted across five databases, including Scopus,
arXiv, PhilPapers, and Google Search, from October 2018 to May/July 2019.
o The search aimed for tools and methods that provided practical or
theoretical contributions on "how to develop an ethical algorithmic
system".
o Over 1000 initial results were screened for relevance, actionability by
ML developers, and generalizability, resulting in 425 reviewed
sources.
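
As a concrete illustration of the MVEP idea, the sketch below translates one
principle, non-maleficence ("robust and secure"), into a tangible, testable
system requirement: predictions should be stable under small input
perturbations. The model, data, noise scale, and 95% acceptance threshold are
illustrative assumptions, not requirements taken from the chapter.

# Illustrative only: turning the high-level principle of non-maleficence
# into a checkable requirement for a "minimum viable ethical (ML) product".
# Model, data, noise scale, and threshold are assumptions for the sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

def stability(model, X, noise_scale=0.01, trials=20):
    """Fraction of predictions unchanged when inputs are slightly perturbed."""
    base = model.predict(X)
    unchanged = [
        (model.predict(X + rng.normal(scale=noise_scale, size=X.shape)) == base).mean()
        for _ in range(trials)
    ]
    return float(np.mean(unchanged))

score = stability(model, X)
requirement_met = score >= 0.95  # illustrative acceptance threshold
print(f"stability under noise: {score:.2%} (requirement met: {requirement_met})")

Framing the requirement as an automated check is what makes the principle
part of the development pipeline rather than an after-the-fact aspiration.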

III. Central Ethical Principles in ML

An emerging consensus identifies five core themes for ethical ML, found in
over 70 documents by various stakeholders (industry, government, academia):

 Beneficence: AI should be beneficial to, and respectful of, people and the
environment.
 Non-maleficence: AI should be robust and secure, preventing harm.
 Autonomy: AI should respect human values and human agency, allowing
users to make informed autonomous decisions.
 Justice: AI should be fair, addressing issues of discrimination and
equitable distribution of benefits and risks (a minimal fairness-metric
sketch follows this list).
 Explicability: AI should be explainable, accountable, and understandable.
This includes both technical processes and related human decisions.
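
To ground the justice principle, the sketch below computes the demographic
parity difference, one widely used fairness metric: the gap in
favourable-outcome rates between two groups. The choice of metric and the toy
predictions are illustrative assumptions; the review surveys many such tools
rather than endorsing any single one.

# Illustrative only: one common way the justice principle is made measurable.
# The predictions and group labels below are hypothetical stand-ins.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in the rate of favourable predictions between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favourable decision
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute

print(f"parity gap: {demographic_parity_difference(y_pred, group):.2f}")  # 0.20 (60% vs 40%)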

IV. Stages of Algorithmic Development

The typology matches ethical principles with the following stages of the ML
development pipeline (a sketch of the resulting stage-by-principle grid
follows the list):

 Business and Use-Case Development: Problem definition and AI proposal.
 Design Phase: Business case translated into design requirements.
 Training and Test Data Procurement: Initial datasets obtained.
 Building: AI application construction.
 Testing: System validation.
 Deployment: When the AI system goes live.
 Monitoring: Performance assessment of the system.
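
The sketch below encodes the typology's core structure: a grid crossing the
five principles with the seven pipeline stages, where each cell can collect
applicable tools or checks. The two example entries are hypothetical
placeholders, not the review's actual tool inventory.

# Illustrative only: the typology as a stage-by-principle grid whose cells
# collect candidate tools/checks. Example entries are hypothetical.
STAGES = [
    "business_and_use_case", "design", "data_procurement",
    "building", "testing", "deployment", "monitoring",
]
PRINCIPLES = ["beneficence", "non_maleficence", "autonomy", "justice", "explicability"]

typology = {(s, p): [] for s in STAGES for p in PRINCIPLES}

typology[("testing", "explicability")].append("post hoc explanation of outputs")
typology[("testing", "justice")].append("error rates disaggregated by group")

# Sparse cells flag gaps in available methods, mirroring the review's finding
# that tools cluster around explicability at the testing stage.
empty = sum(1 for tools in typology.values() if not tools)
print(f"{empty} of {len(typology)} stage-principle cells still have no tool")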

V. Framing the Results

 The typology is not a complete checklist nor does it imply definitive rules.
Instead, it aims to synthesize available tools to encourage the progression of
ethical AI from principles to practice.
 The tools and methodologies are seen as a pragmatic version of Habermas's
discourse ethics, acting as a medium for identifying, checking, creating, and
re-examining ideas through rational deliberation among stakeholders with
differing views.

VI. Initial Findings/Observations

The review highlighted three key observations about the availability and
nature of tools and methods for applying ethics in ML:

1. Overreliance on 'Explicability':
o Tools and methods are unevenly distributed, with the most noticeable
skew towards post hoc explanations for explicability, particularly
during the testing phase.
o This suggests a potential risk: if an AI system is only explainable after
deployment, it might be too late to address fundamental ethical issues.
2. Focus on the Individual over the Collective:
o Few tools adequately assess the impact of data processing on an
individual, and even fewer address the impact on society as a whole.
o The "deployment" column in the typology is notably sparse, indicating
a lack of tools for pro-ethically designed human-computer interaction
at the individual level or for networks of ML systems at the group
level.
3. Lack of Usability:
o The vast majority of tools and methods are not actionable; they
offer little guidance on practical implementation.
o Even open-source code libraries often have limited documentation and
require high skill levels to use, hindering their adoption by ML
developers.
o There's a concern that these tools might simply help those "with the
loudest voices embed and protect their values" rather than fostering
genuine ethical consideration.
VII. Way Forward

The ML research community should collaborate with a specific focus on:

1. Developing a common language.
2. Creating tools that ensure equal and meaningful participation in algorithmic
design for individuals, groups, and societies at all development stages.
3. Addressing the current imbalance in available tools, especially for principles
other than explicability.
4. Making tools usable and actionable for developers with varied skill levels.
5. Evaluating and creating pro-ethical business models and incentive
structures that balance costs and rewards for ethical AI across society.

VIII. Limitations

 The broad research question led to an overwhelming amount of literature,
meaning some tools/methods were likely missed, especially proprietary ones
from private companies.
 The theoretical distinction between development stages may not reflect real-
world practice, potentially limiting tool usability if developers perceive it's
"too late" for certain ethical considerations.
 The study acknowledges a lack of clarity on how identified tools improve the
governability of algorithmic systems, though they can facilitate co-regulation
and compliance evidence.
