Responsible AI Assessments

The Responsible AI Assessments are a methodology that guides the design and implementation of more responsible AI systems, building on the globally accepted framework “Recommendation on the Ethics of AI” (UNESCO 2022). This step-by-step process for assessing AI systems was co-created by the global project FAIR Forward – Artificial Intelligence for All, Eticas, and a diverse community of AI inclusion experts from Sub-Saharan Africa and Asia Pacific. The framework is flexible and applicable across sectors.

The assessment method was tested on seven AI activities in six countries, including landslide detection in Rwanda, cashew disease detection in Ghana, and chatbot usage in Kenya (more information below). It is published open source to encourage wide use and further development.


The Responsible AI Assessments are accessible for download and use through this link. They consist of the following parts:

1. Step-by-Step Guide
Provides guidance on how to apply the Qualitative and Quantitative Assessment Guides, enriched with best practices and lessons learned.

2. Qualitative Guide
Provides critical questions for each stage of the AI lifecycle to assess societal implications, potential biases, fairness, and effects on diverse stakeholders.

3. Quantitative Guide
Focuses on quantitative methods and metrics for the critical analysis of data as well as AI models and systems.
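To give a sense of the kind of quantitative metric such an analysis might involve, the sketch below computes the demographic parity difference, a widely used fairness measure: the gap in positive-outcome rates between two groups. The function names and example data are illustrative assumptions, not taken from the Quantitative Guide itself.

```python
def positive_rate(outcomes):
    """Share of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Illustrative example: loan approvals (1 = approved) for two groups.
approvals_group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
approvals_group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = demographic_parity_difference(approvals_group_a, approvals_group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A large gap does not by itself prove unfair treatment, but it flags a disparity that assessors should investigate together with the qualitative findings.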

1. What are Responsible AI Assessments?

The Responsible AI Assessments guide AI stakeholders (e.g. assessors, developers, or deployers of AI) in identifying, assessing, and mitigating potential harms and biases in AI systems, emphasizing human rights and ethical considerations throughout the AI lifecycle.


Why is such an assessment necessary?
Imagine you are buying a car: you would want to know it is safe. The same standard should apply to AI. AI can detect diseases, influence whether we get a loan, or predict when farmers should plant seeds. It has a tangible impact on people and the environment. If AI goes wrong, it might fail to detect a disease, wrongly deny a loan, or endanger a harvest.

At GIZ, we see it as our responsibility to ensure that such unintended consequences do not endanger the sustainable development we aim to promote. The Responsible AI Assessments are a tool for this.


Assessing AI systems is an essential but demanding process that requires technical expertise. Such assessments follow a protocol for systematically checking different aspects of an AI system. The Responsible AI Assessments provide detailed guidance on this process, designed for use in the development cooperation context.

2. How do the assessments work?


As of now, each Responsible AI Assessment follows these steps:

    1. An initial assessment (scoping phase) identifies possible ethical risks and focuses the process on project-specific topics.
    2. The individually compiled questions and topics are discussed in an in-depth analysis (deep dive) with the relevant project stakeholders and, ideally, local experts.
    3. The assessment is customised to the respective AI application: workshops with participating partner organisations, users, and experts examine risks and formulate suitable risk mitigation measures.
    4. Finally, the results of the analysis are documented, and responsibilities for implementation are assigned and shared.