Chapter two
Literature review
2.0. Introduction
In this section, the study covers previous research conducted on AI and its
relationship with human beings, summarizing what has been found regarding
how human beings perceive AI. It also defines key concepts pertaining to the
topic. Finally, the study introduces the theoretical frameworks used to
analyse and understand the discourse of tech companies.
2.1. Overview of previous research:
Due to the massive growth of tech companies and the power their AI
systems have gained in our world, many questions have arisen regarding the
trustworthiness of these AI systems. This has driven extensive research
aimed at deeply analyzing and understanding the issue of trust between AI
models and human beings.
A 2024 study, "Trust in AI: Progress, Challenges, and Future Directions"
(Afroogh, S., Akbari, A., Malone, E., et al. Humanities and Social Sciences
Communications, 11, 1568. https://doi.org/10.1057/s41599-024-04044-8),
examined the role of explainable AI in enhancing trust between AI and users. The
researchers found that, unlike the traditional way of building trust that is
based on benevolence, integrity, and ability, AI trust is shaped by external
factors such as data quality, model reliability, and evolving system
functionality. The article highlights that trust and trustworthiness are distinct
constructs; that is to say, an AI model can be trusted without being inherently
trustworthy. For instance, if the interface of an AI model is highly appealing,
trust between the user and the AI significantly increases, even if the AI
performs poorly. Moreover, the study emphasizes that users trust AI models
based on various factors such as transparency, decision explanations, and
interface aesthetics. However, the article sheds light on the fact that these
factors can be manipulated, leading to overtrust in AI. To bridge the gap
between trust and trustworthiness, the researchers identified five principles
that must be fulfilled: beneficence, non-maleficence, autonomy, justice, and
explicability.
Furthermore, the study discusses the impact of trust and distrust on AI
technology acceptance in different domains. It addresses the use of AI in
various fields, including healthcare and the financial sector. In healthcare,
the study highlights concerns such as the low number of randomized clinical
trials to test AI performance, the lack of transparency in AI information flows,
the risk of inequity and discrimination introduced by algorithmic biases, and
insufficient regulatory clarity. Additionally, limited public literacy about AI
negatively affects trust in healthcare applications.
Regarding financial investment, the researchers found that trust in
algorithmic investment advice remains relatively low. Many investors
demonstrate a preference for human predictions over AI-generated ones due
to algorithm aversion, indicating that people tend to be more forgiving of
human errors than algorithmic mistakes. Based on this, the article concludes
that customers still prefer human advisors in most cases because of AI's lack
of empathy.
The absence of empathy (defined as the ability to understand and
respond to others' emotions in a non-judgmental way) has been addressed in
the study for its role in building trust between users and AI. Relying on prior
research, the authors found that AI systems capable of accurately
interpreting human emotions and responding with supportive, compassionate
actions are more likely to be trusted. Additionally, they found that trust is
significantly influenced by AI’s ability to align with users’ needs and
expectations, interpret social cues, and adapt to users’ cultural backgrounds,
all of which contribute to building strong trust. This alignment between AI and
users in terms of culture, values, and preferences, according to the article,
leads users to perceive AI as more reliable and trustworthy. The study thus
highlights the importance of integrating emotional and social intelligence into
AI.
In The Cambridge Handbook of Artificial Intelligence (Bostrom, N., &
Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish &
W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp.
316–334). Cambridge University Press.
https://doi.org/10.1017/CBO9781139046855.020), Bostrom and Yudkowsky
address the ethical issues involved in designing, developing, and deploying
AI systems. The authors explore the ethical and societal implications of the
ways artificial intelligence is used in human decision-making, especially in
social domains. They shed light on the unintentional racial discrimination AI
can produce, illustrating how AI systems can inadvertently make biased
decisions. The authors use a compelling example of a bank that employs a
machine learning algorithm to approve mortgage applications; the algorithm
ends up rejecting Black applicants even though it is blind to race. This shows
that AI systems can indirectly infer sensitive traits from other variables, such
as name, location, or income, leading them to make biased decisions.
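The mechanism at work here, a model reproducing bias through correlated proxy variables even though the protected attribute is withheld, can be sketched in a few lines. The following is a deliberately tiny, hypothetical example; the data, zip codes, and decision rule are invented for illustration and are not drawn from the handbook chapter:

```python
# Hypothetical illustration of proxy discrimination: the data and rule
# below are invented, not taken from the cited study.

# Applicants as (zip_code, group, historically_approved). In this toy
# history, group "B" lives mostly in zip2, where past approvals were low.
applicants = [
    ("zip1", "A", True), ("zip1", "A", True), ("zip1", "B", True),
    ("zip2", "B", False), ("zip2", "B", False), ("zip2", "A", True),
]

def learned_rule(zip_code):
    """A 'group-blind' rule learned from history: approve if most past
    applicants in this zip code were approved."""
    outcomes = [approved for z, _, approved in applicants if z == zip_code]
    return sum(outcomes) > len(outcomes) / 2

def approval_rate(group):
    """Approval rate the learned rule produces for one group."""
    decisions = [learned_rule(z) for z, g, _ in applicants if g == group]
    return sum(decisions) / len(decisions)

print(approval_rate("A"))  # 2/3 of group A approved
print(approval_rate("B"))  # only 1/3 of group B approved
```

Although the rule never sees group membership, zip code carries that information, so the disparity already present in the historical data reappears in the rule's decisions.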
Moreover, the authors cover one of the most profound and increasingly
relevant matters: the possibility of AI possessing moral status. According to
Frances Kamm, moral status belongs to an entity that deserves moral
consideration for its own sake, not for its contributions or usefulness in the
world. This is expressed as follows:
"Because X counts morally in its own right, it is permissible/impermissible
to do things to it for its own sake" (Kamm, 2007, chapter 7; paraphrase).
The authors introduce the two main criteria by which an entity can have
moral status:
Sentience: the capacity to feel, suffer, and experience pain.
Sapience: the ability to reason, to be self-aware, and to possess intelligence.
Based on these criteria, an entity with sentience should not be harmed
without a strong and necessary cause, whereas an entity with sapience
should have full moral status. This raises the question of how AI should be
treated, as future AI systems might be capable of experiencing pain or
demonstrating self-awareness. The authors elaborate on this topic through
two principles. The principle of Substrate Non-Discrimination states that if
two beings exhibit the same functionality and conscious experiences but are
built on different substrates, they should possess the same moral status.
This implies that if AI systems were able to demonstrate self-awareness,
experience pain, understand themselves, and form relationships, they should
be granted moral consideration and moral status. The second principle, the
principle of Ontogeny Non-Discrimination, asserts that if two entities exhibit
the same functionality and conscious experience but differ in how they were
brought into existence, they should have the same moral status. This
principle complements the principle of Substrate Non-Discrimination, as it
emphasizes that functional and conscious capacities, rather than origins,
should determine the moral status of beings. In the context of artificial
intelligence, if AI systems were able to attain the same cognitive faculties
and existential experiences as human beings, they would deserve the same
moral treatment and rights, regardless of how they came into existence.
2.2. Conceptual frameworks:
In this section, the study defines some key concepts related to the topic.
These concepts constitute the foundation of the study's objectives:
*Artificial Intelligence: refers to the capability of computational
systems to perform tasks typically associated with human intelligence, such
as learning, reasoning, problem-solving, perception, and decision-making. It
is a field of research in computer science that develops and studies methods
and software that enable machines to perceive their environment and
use learning and intelligence to take actions that maximize their chances of
achieving defined goals. Such machines may be called AIs. (Artificial
Intelligence. (2025, March 26). In Wikipedia.
https://en.wikipedia.org/wiki/Artificial_intelligence)
2.3. Theoretical frameworks:
In this section, the study shall introduce the theoretical frameworks,
including various theories that will be utilized to accurately analyze the
discourse of tech leaders about artificial intelligence. These theories will help
us uncover the persuasive strategies, ideological structures, and rhetorical
techniques embedded in their discourse.
2.3.1. The persuasion theory:
Aristotle's three appeals, Ethos, Pathos, and Logos, assist in examining
how tech leaders use communication to influence others' perceptions of
artificial intelligence adoption.
Logos: refers to a rhetorical mode that depends on rationality and reason. It
persuades through logical argumentation, facts, data, or statistics, seeking
to lead the audience to accept, embrace, or support a position.
Pathos: refers to a rhetorical appeal to emotion. The speaker seeks to evoke
emotions such as fear, happiness, hope, or empathy in order to convince the
audience.
Ethos: refers to a rhetorical mode that relies on establishing credibility and
authority. A speaker who uses Ethos demonstrates trustworthiness,
expertise, and moral values in order to persuade the audience.
2.3.2. Framing theory:
Framing theory, developed by Goffman, posits that the way a piece of
information is presented, or framed, influences how the audience decodes
and responds to it. The theory seeks to uncover how different frames shape
public perception and the meanings audiences extract from messages.
In the context of AI discourse, framing theory will help us analyze the
frames tech leaders use to shape public perceptions of artificial intelligence,
focusing on innovation, ethical responsibility, the necessity of AI, and
economic benefits, while attempting to justify the risks AI poses regarding
bias, privacy, and misuse. Examining these frames gives us a deeper
understanding of how AI is positioned as a necessary and revolutionary tool.
2.4. Conclusion:
To summarize, this chapter has explored the existing academic work on
artificial intelligence. Scrutinizing these works reveals a noticeable gap in
the literature regarding tech leaders' discourse on artificial intelligence. The
study also defined some key concepts pertaining to the topic. Finally, the
overview identified the theoretical frameworks that will be employed in this
study.