
A “crash-test” for the AI world: How to navigate the road to responsible innovation


Imagine you are buying a car. You would want to know it is safe, right? We should have similar standards for AI. AI can detect diseases, influence whether we get a loan, or predict when farmers should plant seeds. It has a tangible impact on people and the environment.
 

Thoroughly conducted AI risk assessments are a moral responsibility. Like a successful crash test for cars, they can build much-needed trust in AI products. This is especially relevant in an AI world that still struggles to incorporate “crash tests” into its product development. In addition to supporting more ethical AI products, such assessments can also sensitize the many different stakeholders involved in AI product development to navigating the complex but exciting field of AI ethics. A win-win situation. 
 

Ingredients for an AI risk assessment

Our team’s goal was clear: to create a practical AI risk assessment methodology that could guide us and others in the still-emerging field of AI ethics. We set out to achieve this goal with the following ingredients:  

  • A diverse and dedicated team, consisting of FAIR Forward, Eticas, and a Community of AI Inclusion Experts and project partners from Sub-Saharan Africa and Asia Pacific.  
  • Guides for AI analysis, developed from both a qualitative and a quantitative perspective. Why? Because understanding the impact of AI is similar to running clinical trials for vaccines: researchers meticulously design trials to assess efficacy and negative side effects, based on a thorough understanding of the disease and the population being treated, before releasing the vaccine to the world. The same is true for AI: only when we understand the context can we meaningfully test for bias and come up with effective mitigation measures (a minimal sketch of such a quantitative check follows this list).  
  • Seven innovative and diverse discriminative AI projects that were open to piloting the methodology, ranging from climate adaptation and agriculture to public service delivery.  
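
To make the quantitative perspective concrete, here is a minimal sketch of the kind of group-fairness check such a guide walks through. It is illustrative only: the data, the column names and the 0.8 rule-of-thumb threshold are assumptions made for this example, not prescriptions from the methodology.

```python
# Minimal, illustrative group-fairness check. All data and names are
# hypothetical; real checks must be designed with local domain experts.
import pandas as pd

# Hypothetical model decisions: one row per person, with a protected
# attribute ("group") and the model's binary decision ("approved").
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group: the share of positive decisions.
rates = df.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest selection rate divided by the highest.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
# A common rule of thumb flags ratios below 0.8, but the right metric and
# threshold always depend on the context established beforehand.
if ratio < 0.8:
    print("Potential disparity - investigate with affected stakeholders.")
```

A single metric like this never replaces the contextual, qualitative analysis described above; it only becomes meaningful once we know who the affected groups are and what a fair outcome means for them.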

The outcome of this process: The Responsible AI Assessments.

You can access the Responsible AI Assessments here. The assessments are structured in three parts: 

    1. Part A: Step-by-Step Guide
    2. Part B: Qualitative Guide
    3. Part C: Quantitative Guide

How to navigate risks through assessments

Imagine a chatbot in Ghana, “GhanaBot”, which aims to assist citizens with public service inquiries. Let’s embark on an exemplary journey of what a Responsible AI Assessment looks like, using “GhanaBot”:  

  1. Charting Unknown Seas – Scoping Call

The Scoping Call aims to identify hidden currents, i.e. potential risks beneath the surface of “GhanaBot”. Together with the project team and its developers, we aim to understand: What is the chatbot’s mission? Who are its users? What data fuels it? The Scoping Call ensures that we are not sailing blindfolded.  
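
As an illustration, the answers gathered in a Scoping Call could be captured in a simple structured record so that later assessment steps can refer back to them. The fields and example values below are hypothetical, not a format prescribed by the methodology.

```python
# Hypothetical structure for recording Scoping Call findings.
# Field names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ScopingRecord:
    project: str
    mission: str             # What is the system's mission?
    users: list[str]         # Who are its users?
    data_sources: list[str]  # What data fuels it?
    suspected_risks: list[str] = field(default_factory=list)

scope = ScopingRecord(
    project="GhanaBot",
    mission="Assist citizens with public service inquiries",
    users=["citizens contacting public services"],
    data_sources=["government FAQ corpus", "anonymised service requests"],
    suspected_risks=[
        "limited coverage of local languages",
        "answers outdated after procedures change",
    ],
)
print(scope)
```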

  2. Exploring the Currents – Deep Dive

The Deep Dive is all about investigating the identified risks thoroughly. Our main method: critical debates with the project team, but also with local NLP experts, public service providers and citizens who might use “GhanaBot”. We get to understand its limitations and brainstorm ways to navigate the currents for users.  

  3. Navigational Blueprint – Final Report

Our journey nears its end. Based on the Deep Dive’s insights, we fine-tune the discussed recommendations into specific actions that are tailored to “GhanaBot’s” implementation context. The final report is our blueprint to ensure a safer user journey.  

 

The pros and cons of our approach 

What you get from the Responsible AI Assessments:  

  1. Guiding the way: Responsible AI Assessments help AI developers and project managers to steer clear of potential pitfalls and make informed decisions – like a navigator who relies on a compass to find their path. 
  2. Learning by doing: This is not simply dry paperwork. The assessments are practical learning experiences. By identifying risks, teams improve their understanding of AI and its impact. 
  3. Context sensitivity: The Responsible AI Assessments are a flexible template that you can tailor to your specific context and AI use case.  
  4. Active discussion is key: AI risk assessments encourage critical reflection among diverse stakeholders to reveal blind spots and develop tailored mitigation measures.  

Of course, the Responsible AI Assessments also come with their own challenges: 

  1. Expertise required: Assessing risks demands auditing and localized domain expertise – this includes the lived expertise of users and those impacted by AI. As a project team, we recommend that you get them around the table. 
  2. More than a checkbox: Assessing AI for risks is not a box-ticking exercise. It is an ongoing process that is only as good as the care and expertise you pour into it. Assessing AI risks requires continuous attention: one assessment will not do – the journey is the goal. 
  3. No silver bullet: Even a completed assessment is no “harm-free” guarantee. Rather, it is a starting point toward this aspiration. Additional checks and balances must be put in place to mitigate risk and harm for the end users and beneficiaries of these AI products. 
  4. Be aware of limits: Although they were tested on diverse AI use cases, the Responsible AI Assessments will have to be adapted for new ones, e.g. generative AI or the use of AI in health. More use cases will be covered in 2024’s second pilot phase! 

 

Interested in collaborating? Reach out!

Do you want to contribute to improving the assessments, or have feedback you would like to share? Please reach out via fairforward@giz.de. The assessments are open source – and we intend to further develop and co-create them!
You can access an editable version here.
 

Important disclaimers 


The Responsible AI Assessments are a method developed to conduct a holistic AI risk and ethics assessment. They can be used by any individual (applying them themselves), but it is highly recommended that the method be applied with the expertise of external assessors or auditors. A Responsible AI Assessment does not qualify as a formal audit (in any form), nor does it replace an audit process. Use of the Responsible AI Assessments alone does not guarantee compliance with local and/or international laws, regulations or standards. Please engage independent auditors and/or legal advisors to ensure compliance of your product or service with local and/or international laws.

The Responsible AI Assessments do not attempt to be a ‘holy grail’. They simply strive to make the opaque field of AI ethics more operational and tangible, and to provide exemplary guidance for AI stakeholders on how to incorporate AI ethics considerations throughout the algorithm lifecycle.
 
 

 

About the team behind the Responsible AI Assessments 

GIZ’s “FAIR Forward – Artificial Intelligence for All” initiative strives for a more open, inclusive and sustainable approach to AI at the international level. To achieve this, the project works together with seven partner countries – Ghana, India, Indonesia, Kenya, Rwanda, South Africa, and Uganda – in pursuit of three main goals:

  1. Facilitating access to training data and AI technologies for local innovation;
  2. Strengthening technical know-how on AI at a local level; and
  3. Supporting the development of ethical AI policy frameworks.

FAIR Forward is commissioned by the German Federal Ministry for Economic Cooperation and Development (BMZ).

 
Eticas is the world’s first algorithmic socio-technical audit company, founded in 2012 with the purpose of protecting people and their rights in technological and AI processes. In 2021, it published its methodology for auditing algorithms as part of its mission to create better technology for a better world, establishing itself as a global reference. It participates in and leads various initiatives to raise awareness of the need to monitor, and demand transparency in, the use of AI and automated decisions.
 
A Community of AI Experts from FAIR Forward partner countries supported the development of the method with their unique perspectives and insights. Through their AI expertise, grounded in contextual, country-specific knowledge, they enriched the development and testing of the Responsible AI Assessments with critical questions and valuable perspectives. This combination also helped to uncover biases and harms that would otherwise not have been accounted for by existing frameworks.