Generative AI:
Responsible Path Forward
Dr. Saeed Aldhaheri
Director, Center for Futures Studies,
University of Dubai
DataHour Series Webinar, by Analytics Vidhya, 19 Oct. 2023
Embracing our humanity
“The human spirit must prevail over technology.”
- Albert Einstein
Potential of Generative AI
• What is Generative AI?
• Large language models (LLMs) and related models that generate text, images, code, video, music, etc.
• Potential to be used in almost any industry
• Economic Potential
• 40% of all working hours across all industries could be impacted by
LLMs such as ChatGPT (Accenture report, 2023)
• Knowledge workers gain a 40% performance boost from GPT-4 (BCG
study)
• Benefits:
• Augmenting human capabilities → improved efficiency and
productivity
• Democratizing AI, creativity and imagination
• Uses
• Generating art, writing articles and software, generating ideas, task
automation, chatbots, disrupting search
• Consumer sentiment, marketing, finance, health care, customer
service, NLP-based data analytics, education, law
Examples: Stable Diffusion, ERNIE
Question?
How can we responsibly govern generative AI
and address its ethical issues?
35% of global consumers trust how AI is being implemented by organizations.
- Accenture’s 2022 Tech Vision research
With new capabilities comes new risks
Generative AI incidents are expected to increase
• The number of incidents has increased 26-fold since
2012
• Expected to far more than double in 2023
• GAI is causing real societal harm today
• We need to build a social infrastructure
necessary to bend the curve downward
AI Index Report 2023, Stanford University
Ethical issues of generative AI
Ethical Risks of Generative AI
• Safety and generating harmful content
• Bias
• Fake news and disinformation
• “Hallucination” – fabricating facts
• Transparency, privacy and data protection
• Human manipulation
• Intellectual property & copyright infringement
• Liability & Responsibility
• Societal values
• Unemployment and workforce displacement
AI ethics is even more important for Generative AI
Sustainability paradox: How to address AI’s Carbon
Footprint
• The use of AI technology across all sectors produces CO2
emissions at a level comparable to the aviation industry
• Google reported that AI consumed 10–15%
(i.e., 2.3 terawatt-hours) of its annual electricity
consumption in 2021. Carbon Emissions and Large Neural Network Training, arXiv
• Training OpenAI’s GPT-3 consumed 1.287 gigawatt-hours
(equivalent to the annual consumption of 120 US homes)
and 185,000 gallons of fresh water
“The water footprint of AI training and inference, including
ChatGPT, is equivalent to cooling a nuclear reactor.” Making AI Less
"Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models, Arxiv, 2023
Current AI Tech Industry Approach
• Generative AI is a Wild West now
• AI ethics in the back seat
• “Researchers building AI outnumber those focused on safety by a 30-to-1 ratio”
- Center for Humane Technology
• Moving fast while breaking things
• “It’s important *NOT* to ‘move fast and break things’ for tech as important as AI,”
- Demis Hassabis, DeepMind Founder
• Tech firms call for more oversight through international guidance and voluntary
commitments
• Public trust in generative AI is decreasing
Towards Human-Centered Responsible AI
• Current AI problems are not only technical but
sociotechnical
• Need to move from algorithmic-centred to human-
centred perspective
Many Frameworks for AI Ethics
• Fairness: address bias in data and
models
• Explainability/Interpretability:
explaining how an AI model makes
decisions
• Privacy: protecting individual
privacy, anonymizing personal data
• Robustness and security:
performance & security
• Governance: policy, controls &
regulations to ensure accountability
and responsible development & use
OECD AI Principles, 2019
• Inclusive growth, sustainable development and well-being
• Human-centered values and fairness
• Transparency and explainability
• Robustness, security and safety
• Accountability
How to build responsible generative AI?
• Ethics by design and responsible AI by design
• Actionable AI ethics principles → responsible product development
• Translate principles into effective governance
• development, deployment, and use
• Use technical tools for Responsible AI
• Develop new methods for risk assessment
• Data-related risks: safe and inclusive data set
• Model-related risks: guardrails or classifiers to eliminate models’ harmful/toxic output
• Team-related risks: diverse team
• Testing and transparency
• 3rd party auditing and red teams
• Human feedback mechanisms
• Establishing responsible culture
• Responsible AI must be CEO-led
• Development of mature responsible AI capabilities
• If an organization’s culture doesn’t respect risk management, risk management doesn’t
work.
• Addressing public discomfort around AI and seeking a social license
• Responsibility: responsible AI practices
• Benefits: the benefits it brings to society outweigh the cost
• Social contract: gains acceptance and trust from the wider society
Example: Microsoft’s open-source Responsible AI Toolbox
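The “guardrails or classifiers” idea from the risk-assessment bullets above can be illustrated with a minimal sketch: a blocklist filter that screens generated text before it reaches the user. This is only a toy illustration of the concept, not any production system — the function name and blocked terms are invented, and real deployments use trained toxicity classifiers rather than keyword lists.

```python
# Illustrative sketch of a model-output guardrail (hypothetical):
# screen generated text against a content policy before returning it.
# Production systems would use a trained safety classifier instead of
# a simple blocklist; every term and name here is an invented example.

BLOCKED_TERMS = {"build a bomb", "credit card dump"}  # hypothetical policy

def guardrail(generated_text: str) -> str:
    """Return the text if it passes the filter, else a refusal message."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[blocked: output violated content policy]"
    return generated_text
```

A chatbot would call `guardrail()` on every model response before display; the same pattern also applies on the input side, screening user prompts.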
The datasets problem: a main source of AI ethical issues
• Responsible Data governance is a must
• Understand the dataset's origins, development,
intent, and ethical considerations
• Check if the data is obtained legally and ethically
• Is the data biased, and if so, how can it be
mitigated?
• Does the data compromise privacy or copyrights?
• Tools are available to analyze datasets for bias
• Data Cards: structured summaries of essential facts
about datasets needed by stakeholders across a
dataset's lifecycle for responsible AI development.
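One basic bias check of the kind the tools above automate is measuring representation balance across a sensitive attribute. The sketch below is illustrative only — the records and attribute are hypothetical, and dedicated tooling goes far deeper than raw counts.

```python
# Illustrative sketch: check a dataset for representation bias by
# measuring class balance across a sensitive attribute. The records
# below are invented placeholders; real bias analysis also examines
# label correlations, coverage gaps, and downstream model behavior.
from collections import Counter

def representation_ratios(records, attribute):
    """Fraction of records per value of a sensitive attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

records = [
    {"text": "sample a", "gender": "female"},
    {"text": "sample b", "gender": "male"},
    {"text": "sample c", "gender": "male"},
    {"text": "sample d", "gender": "male"},
]
ratios = representation_ratios(records, "gender")
# A heavily skewed ratio flags a dataset that may need rebalancing
# before training.
```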
Responsible Data & Responsible Data Science
• Responsible Data (RD): the collective duty to prioritise and respond to the ethical,
legal, social and privacy-related challenges that come from
using data in new and different ways in advocacy and
social change
• Responsible Data Science (RDS): a comprehensive, practical treatment of how
to implement data science solutions in an ethical manner
that minimizes the risk of undue harm to vulnerable
members of society
• Question the data, understand its limitations, and use it in
a responsible way
• Responsible Data culture
• Data literacy should be a priority
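The Data Cards mentioned earlier are one concrete way to “question the data and understand its limitations”: a structured summary of provenance and known issues kept alongside the dataset. The sketch below is a minimal hypothetical illustration — real Data Cards cover far more fields, and every value shown is invented.

```python
# Hypothetical sketch of a minimal "data card": a structured record of
# a dataset's origins, legal basis, and known limitations. All field
# names and values are invented examples for illustration.
from dataclasses import dataclass, field

@dataclass
class DataCard:
    name: str
    source: str             # where the data came from
    collection_method: str  # how and when it was gathered
    license: str            # legal basis for use
    known_biases: list = field(default_factory=list)
    pii_present: bool = False

card = DataCard(
    name="example-news-corpus",
    source="public news sites (hypothetical)",
    collection_method="web crawl, 2021-2022",
    license="CC BY 4.0",
    known_biases=["English-only", "skews toward US outlets"],
    pii_present=False,
)
```

Keeping such a card under version control with the dataset gives every stakeholder in the dataset’s lifecycle the same answers about origin, legality, and bias.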
How to use AI responsibly in organizations?
• Develop and approve a Responsible AI Use Policy
• Align with org values and goals
• Prioritize fairness, transparency, and accountability
• Establish oversight and governance framework
• Address data practices
• Outline impact assessments
• Create feedback mechanisms
• Mandate documentation and reporting
• Assign responsibility
• Continual review and update
Responsible AI Licensing (RAIL)
• Promotes the responsible use of AI
• A volunteer community
• For openly licensed ML models
• Lets developers restrict the use of their AI to prevent
irresponsible or harmful applications
• A new type of open license: combines elements of permissive
open-source licenses with usage restrictions based on ethical
considerations
• Over 8,000 RAIL licenses on Hugging Face
• Limited impact so far
• Need to enforce RAIL for commercial AI development
Effective AI Governance increases the quality of AI Systems
• Responsible AI Governance is a must
• A focus on risk mitigation alone overlooks the need for
regulation and responsible governance that promote
innovation
• The failure rate of AI systems was reduced by 9% in
companies that achieved RAI before scaling, compared
to those that didn’t (BCG study)
• Effective AI governance promotes a safety race to the
top
• Increases trust in orgs developing and deploying AI
systems
How to build responsible generative AI?
“To be responsible by design, organizations
need to move from a reactive compliance
strategy to the proactive development of
mature Responsible AI capabilities through a
framework that includes principles and
governance; risk, policy and control;
technology and enablers and culture and
training.”
AI Standards to be used in auditing AI systems
• Auditing AI systems gives assurance to developers and users
• ISO/IEC 22989:2022 – Information technology – Artificial
intelligence – AI concepts and terminology
• Trustworthiness, societal impact
• AI system lifecycle, AI stakeholders
• ISO/IEC 23053:2022 – Framework for Artificial Intelligence (AI)
Systems Using Machine Learning (ML)
• Clear definitions of system components and their function in the AI
system
• ISO/IEC 23894:2022 – Information technology — Artificial
intelligence — Guidance on risk management
• ISO/IEC TR 24028:2020 Information technology — Artificial
intelligence — Overview of trustworthiness in artificial
intelligence
• NIST AI Risk Management Framework
• Voluntary use
• Improves trustworthiness of AI products
• Potential harm: people, organizations, systems/ecosystems
• IEEE
• Autonomous systems standards – ethically aligned design
• Ethically aligned design – IEEE CertifAIEd
• Transparency of autonomous systems
Responsible AI By Design: Telefonica Case Example
• Telefonica: Responsible AI by design
• Telefonica AI principles
• Fair AI
• Transparent and Explainable AI
• Human-Centric AI
• Privacy and Security by Design
• A questionnaire to ensure AI principles have
been considered in the development process
• RAI Tools to help answer some of the questions
• Awareness and training (technical & non-
technical)
• A governance model assigning responsibilities
and accountabilities
• Agile governance model
• Leadership endorsement
• Challenge: trade-off between innovation speed
and risks
Responsible AI by design approach used by Telefonica
Ethical AI Practice Maturity Model: Salesforce Example
A global Call for AI Regulations
• We are witnessing a grand regulatory experiment
• The tech agenda’s focus on human-extinction risk distracts from the real societal harms AI is
causing now
• EU policymakers call on a global summit on AI regulation
• The UN calls for creation of a UN body to regulate AI
• UK to host a global summit on AI safety
• EU AI Act (AIA)
• Risk-based approach in relation to use cases
• Training data must not violate GDPR and copyright laws
• China released rules for Generative AI
• Balance tech advancement while adhering to core socialist values
• GAI services need a license to operate
• UK AI regulation policy
• “light touch” approach
• Sector-specific approach
• Six cross-sectoral AI governance principles
• US Chamber of Commerce calls for AI regulation
• Hands-off approach = industry self-regulation
• Voluntary AI Safety standards
• Governments need to build capability and capacity
AI Robots are warning us!
‘We should be cautious about
the future development of AI.
Urgent discussion is needed.’
- Ai-Da, humanoid robot artist
Humanoid robots at a press conference at the UN AI for Good Global
Summit in Geneva, hosted by ITU, July 2023
Thank you
