Digital Ethics
Ethical 'now' for a resilient 'next'
April 2021
Content
Introducing digital ethics
Why is a discussion on digital ethics important?
Boundaries for digital ethics
Impact of digital ethics
Key recommendations for introducing digital ethics
A leading educational institution has been working with its historical admissions data to understand admission patterns and to see whether it can develop a model. This model will train a system to augment administrators' decisions on potential candidates, so as to attract and retain the best talent. The model also includes data on dropouts during the course, so that such candidates can be identified in advance and offered counselling sessions.
One of the main challenges the institution faces is eliminating any bias the data might introduce into the system's admission criteria. As the data is more than 100 years old, it covers periods when certain students were denied admission based on their gender or race.
As this is an ethical dilemma, the university has appointed a special team to parse the data and ensure that these historical social
biases do not enter the new system.
Introducing digital ethics
Digital ethics comprises the interpersonal, social, organisational, and national norms that govern how people should conduct themselves in the digital world. It describes a paradigm in which digital transformation is immune to the moral biases of those running the transformation, and in which machines are not allowed to discriminate or upend the ethical values of our society. Digital ethics works both ways: from humans to machines and from machines to humans.
Areas of discussion
We need to look at the issue of digital ethics from the following four angles:

The impact of technology
This includes how technology is changing the way we do business, interact, and live. In the current technological era, many decisions are made with inputs from artificial intelligence and other automated decision-making systems, especially where structured data for decision-making is available. This data is used to train algorithms to replicate human decision-making processes. There is a real possibility that the human biases involved in the original decisions are transferred to the machines, and this is one of the biggest areas of concern in digital ethics today.

Industry best practices of technology ethics
As the example of the leading educational institution shows, it is important to put controls in place to prevent ethical biases from contaminating the data used to train digital models. Best practices include examining the source of the data, the socio-economic conditions prevailing when it was collected, and the fields that might introduce bias, as well as creating a multi-disciplinary committee to review digital programmes.

Risks emerging from lapses in digital ethics
If ethical biases enter a digital model, the organisation is exposed to reputational and operational risks. Organisations need to derive a framework for digital ethics and ensure that the ethical practices and policies already governing the organisation are applied to that framework, giving a holistic view of the ethics governing their digital initiatives.

Role of organisations in propagating technology ethics
How are organisations working, both internally and within the wider ecosystem, to propagate a system of ethics for digital transformation? Most organisations will start from within, but they will ultimately have to build an ecosystem that ensures the industry as a whole follows best practices in digital ethics.
Why is a discussion on digital
ethics important?
Today, digital transformation is the biggest driver of growth
for organisations. Organisations are continuously focusing on
implementing strategies for a better customer experience,
operational efficiency, employee engagement, and new
business models.
In this paradigm, the focus is on people, processes, and technologies. While the growth in technology is unprecedented, what is interesting is how changes in processes and people have ensured that organisations get the most out of their investments. Ethical management of this process affects the autonomy and dignity of people in the digital world. As the boundaries between the digital and the real world continue to blur, lapses here will have a significant impact on a person's real life.
Let us say that Bank A decides to use machine learning to decide who is approved for a loan. The model needs to be trained on a data set, which could be historical data or data created by a user. If, historically, the bank has denied loans to a certain category of people, the same bias will carry forward to the machine: we have transferred our bias to the system, and the system will now deny loans to that category of people. Alternatively, if a user creates the training data, that user might introduce the bias deliberately to ensure that a certain category of people is denied loans. Both cases fall within the purview of digital ethics.
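The mechanism can be illustrated with a deliberately simplified sketch. The data and the decision rule below are hypothetical, not Bank A's actual system: a naive "model" that simply learns each group's historical approval rate will faithfully reproduce the historical denial of group B.

```python
# Hypothetical historical loan decisions, biased against group "B".
history = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """'Learn' a per-group approval rule from historical outcomes."""
    approved, totals = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(decision)
    # Approve a group only if its historical approval rate exceeds 50%.
    return {g: approved[g] / totals[g] > 0.5 for g in totals}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the historical bias carries forward
```

Real lending models are far more complex, but the failure mode is the same: the bias is not in the algorithm, it is in the data the algorithm is asked to imitate.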
In a recent speech,1 Masayoshi Son of SoftBank said that in
the near future the earth would be co-inhabited by humans
and machines. We may soon see a world with 10 billion people
and 10 billion robots. This, in other terms, is singularity, where
each robot is connected to another robot. In that context, this
becomes even more important as we may end up transferring
our local biases to the machines. Those biases will permanently
render some individuals outside the purview of services and
facilities rendered by these robots.
1 Amie Tsang and Michael J. de la Merced, Masayoshi Son warns of the singularity, CNBC
A simple cookie in a mobile application or on a website gives the application administrator enormous power. The data it collects is often misused, for example to profile individuals, sell their data to other organisations, or propagate illegal activities.
A good example is cab aggregator services. Based on user behaviour and travel patterns, the cab aggregator is able to provide a great user experience. However, of late it seems that this is also used to extract more from the customer, especially if the profile shows that he or she is not averse to using cabs with a higher charge. Therefore, even if a cheaper option is available, it is not shown to the user, because he or she has already demonstrated a preference for higher-value cabs.

This example illustrates an ethical grey area: steering a customer towards a larger cab increases revenue, but it results in a higher cost for the customer, society, and the environment.

Organisations need a moral code of conduct inside their digital policies to address these areas of concern and strike a fine balance, so that boundaries are clearly laid out.
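A minimal sketch makes the grey area concrete. The profiles, fares, and threshold below are invented for illustration, not taken from any real aggregator: the cheaper option exists, but the platform never surfaces it to a profile flagged as willing to pay more.

```python
# Hypothetical ride options and a profiled user preference; all values invented.
options = [
    {"cab": "mini", "fare": 120},
    {"cab": "sedan", "fare": 180},
    {"cab": "premium", "fare": 260},
]

def visible_options(options, prefers_premium):
    """Filter the options a user sees, based on a profiled preference."""
    if prefers_premium:
        # Ethically grey: hide cheaper cabs from users profiled as
        # willing to pay more, even though those cabs are available.
        return [o for o in options if o["fare"] >= 180]
    return options

shown = visible_options(options, prefers_premium=True)
cheapest_shown = min(o["fare"] for o in shown)
print(cheapest_shown)  # 180 -- the 120-fare mini exists but is never offered
```

Nothing in this code is illegal, which is precisely why it belongs to digital ethics rather than compliance: the harm comes from how a legitimately collected profile is used.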
Boundaries for digital ethics
The boundaries for digital ethics exist at the following three levels:
• Ethics for the consumer
• Ethics as it relates to the provider of digital services
• Ethics governing the use of the data generated by this digital interaction
Impact of digital ethics
We believe that digital ethics is addressed at the following three levels:
Impact on society
Let us take the example of social media. Today, news
travels fast and social media is a useful tool to monitor
the situation of natural disasters. However, it is also
used to spread fake news. This is where the ethical
management of digital media comes into play. What
framework do media and other organisations have to
ensure that the reported news is true and original?
While citizen journalism is on the rise, how can citizens
be educated on the importance of digital ethics and
consent before they report unverified content and
create sensationalism?
At a certain university, a machine-learning algorithm was introduced to screen students in the initial selection process. Given the rising number of applications, this was considered a milestone for the university: the board felt that academic staff needed technological support to reduce their burden while still skimming through applications to ensure that only the best candidates were admitted. Surprisingly, the number of women candidates shortlisted by the algorithm was far lower than the number of men, and also lower than the number of women who had enrolled in the previous year. This was an area of concern, and the university went back and examined the selection criteria.
Due to the lack of a digital ethics framework, the
designers of the algorithm did not factor in the
adjustments to be made to the historical data. The
university had been a male-only institution for a very
long time. Though this had changed around 40 years ago,
the historical data used for the algorithm was almost 100
years old and created a scenario where fewer women
were shortlisted. The university admitted its error and
formed an ethics committee to review the project and
remove inherent biases from their data.
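A simple audit of the kind such an ethics committee might run can be sketched as follows. The shortlisting counts are hypothetical: compare selection rates across groups, and flag a large disparity before the model goes live.

```python
# Hypothetical shortlisting outcomes: (gender, shortlisted) per applicant.
outcomes = (
    [("F", True)] * 5 + [("F", False)] * 45     # 5 of 50 women shortlisted
    + [("M", True)] * 20 + [("M", False)] * 30  # 20 of 50 men shortlisted
)

def selection_rates(records):
    """Selection rate per group."""
    picked, totals = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def disparity(rates):
    """Ratio of the lowest to the highest selection rate.

    Values well below 1.0 signal that one group is being
    shortlisted far less often than another."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(outcomes)
print(rates)            # {'F': 0.1, 'M': 0.4}
print(disparity(rates)) # 0.25 -- a strong signal to review the model
```

A check like this does not fix the bias, but it catches the symptom the university only noticed after deployment, which is the point of having a digital ethics framework in place beforehand.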
Impact on the nation
While the earlier two examples were limited in scope, larger programmes always have a bigger impact at the national level. A good example is the ‘Smart Cities Programme’, where data is collected from all public utilities. In Vishakhapatnam, civic authorities can identify garbage levels in bins across the city using sensors, which allows them to plan collection routes accordingly. This is a good example of using digital technology for route optimisation, but unless the data is protected, anonymised, and processed in real time, the authorities risk treating different regions of the city unevenly. As this data is aggregated nationally, it can skew national numbers as well.
Impact on the individual
A good example of this is personalised medicine. Many individuals now use wearable devices to measure their physical activity and vital signs; there are even portable devices with six-lead ECG capability. According to Eric Topol,2 this has created a paradigm of data democracy, where the patient is at the centre of their data. How is the device manufacturer monitoring the individual’s data? Does the manufacturer have a digital ethics framework for how it will use the data? If yes, is it governed by confidentiality laws? Which takes the upper hand, the ethical charter or the law? What would happen if the person wearing the device suffered a heart attack? What would be the protocol for informing the hospital or their family?
2 Eric J. Topol, The Patient Will See You Now: The Future of Medicine is in Your Hands
Key recommendations for
introducing digital ethics
Governments worldwide have realised that managing digital
ethics is key to making the most out of digital investments.
The European Union has started creating a list of digital ethics recommendations that it would like organisations and governments in the EU to abide by.3 It has clearly stated that digital ethics is not an add-on, but an integral part of governance for any digital programme, and it has put together an expert committee of 52 professionals from organisations such as Google, SAP, Bayer, and Santander.
The Australian Government is working on a similar policy
to ensure that AI and digital are developed and deployed
responsibly.
Some of our recommendations from this perspective include
the following:
Create a committee for digital ethics
The committee should be a cross-functional team with business, technology, and community experts, whose objective is to address all ethical concerns. This committee should roll into the organisation’s ethics committee, which forms the overall framework for ethics in the organisation.

Draft the policy on digital ethics
While drafting the policy on digital ethics, it is important to cover all digital programmes. Like the digital risk framework, the digital ethics policy should draw heavily on the organisation’s vision and mission, and on its risk policies. The policy should cover the impact at the individual, societal, market, and national levels.

Ensure adherence
All digital projects need to be covered and assessed from the digital ethics perspective.

Emphasise ethics
Make ethics an important part of the digital governance of all projects.

Impart education
Ensure that teams are continually educated on the need for the right ethics, and that reinforcement and assessment of individuals is conducted regularly.
3 Foo Yun Chee, AI must be accountable, EU says as it sets ethical guidelines, Reuters
To summarise: technology misuse, whether it stems from individual or organisational bias, must be identified and understood. The summary figure maps the ethics perspectives as follows:

Technology misuse
• Individual bias: introduction or extension of bias by the individual creating the data
• Organisational misuse: the moral and societal implications of algorithmic profiling

Technology use
• Impact on society: ethical management of digital media; impact of ethical biases in university admissions processes
• Impact on the nation: potential of data biases to affect the treatment given to different regions
• Impact on the individual: magnitude of data being monitored and utilised; confidentiality laws and their implications
Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company
limited by guarantee (“DTTL”), its network of member firms, and their related entities. DTTL and
each of its member firms are legally separate and independent entities. DTTL (also referred to as
“Deloitte Global”) does not provide services to clients. Please see www.deloitte.com/about for a
more detailed description of DTTL and its member firms.
This material is prepared by Deloitte Touche Tohmatsu India LLP (DTTILLP). This material
(including any information contained in it) is intended to provide general information on a
particular subject(s) and is not an exhaustive treatment of such subject(s) or a substitute for
obtaining professional services or advice. This material may contain information sourced from
publicly available information or other third party sources. DTTILLP does not independently
verify any such sources and is not responsible for any loss whatsoever caused due to reliance
placed on information sourced from such sources. None of DTTILLP, Deloitte Touche Tohmatsu
Limited, its member firms, or their related entities (collectively, the “Deloitte Network”) is, by
means of this material, rendering any kind of investment, legal or other professional advice
or services. You should seek specific advice of the relevant professional(s) for these kinds of
services. This material or information is not intended to be relied upon as the sole basis for any
decision which may affect you or your business. Before making any decision or taking any action
that might affect your personal finances or business, you should consult a qualified professional
adviser.
No entity in the Deloitte Network shall be responsible for any loss whatsoever sustained by any
person or entity by reason of access to, use of or reliance on, this material. By using this material
or any information contained in it, the user accepts this entire notice and terms of use.
©2021 Deloitte Touche Tohmatsu India LLP. Member of Deloitte Touche Tohmatsu Limited
Connect with us
Key contributors
Deloitte Touche Tohmatsu India LLP
Dr. Vikram Venkateswaran
Bangalore Chamber of Industry and Commerce
Manas Dasgupta
Rohit Mahajan
President-Risk Advisory
Deloitte Touche Tohmatsu India LLP
rmahajan@deloitte.com
Vishal Jain
Partner, Risk Advisory
Deloitte Touche Tohmatsu India LLP
jainvishal@deloitte.com
Maninder Bharadwaj
Partner, Risk Advisory
Deloitte Touche Tohmatsu India LLP
manbharadwaj@deloitte.com
