Contents

Scoping
  • Project phase
  • Digital tools and solutions used
  • Digital activities

Data-related activities (human rights risks around the collection, storage and processing of personal data)
  • Responsible data handling: Risk assessments conducted?
  • Data responsibility: Who is in charge of data processing?
  • Privacy: Does KfW process data that can be linked to a person?
  • Privacy acc. to GDPR: Personal data of KfW staff and associates?
  • Privacy acc. to GDPR: Is there a legitimate interest to award compensations?
  • Privacy acc. to GDPR: Do consent declarations exist?
  • Privacy acc. to GDPR: Do contractual arrangements exist?
  • Privacy acc. to GDPR: Is the personal data processing explicitly required by law?
  • Privacy acc. to GDPR: Is KfW obliged by public interest to process personal data?
  • Privacy acc. to GDPR: Is the data processing relevant for the establishment, exercise or defense of legal claims?
  • Privacy acc. to GDPR: Is anonymized data collection possible?
  • Privacy acc. to GDPR: Is the anonymized processing of data after collection possible?
  • Privacy: Does KfW receive personal data?
  • Privacy at country level: Are sensitive personal data being processed?
  • Privacy at country level: Informed consent?
  • Privacy at country level: Data minimization possible?
  • Privacy at country level: Data anonymization reversible?
  • IT-Security: Have you considered the security of the data?

Artificial Intelligence, automated decision-making and machine learning (human rights risks related to Artificial Intelligence, automated decision-making and machine learning, AI for short)
  • AI application/s
  • Bias in datasets
  • Biased outcomes
  • Gender-based analysis
  • Post-deployment testing

Risk analysis (the risk context of the digital component, solution/s or tool/s)
  • Risk profile of project

Design with people (whether the Principle for Digital Development “Design with people”, formerly “Design with the User”, has been or will be applied when designing the digital solution/s or tool/s)

Disadvantaged and marginalized groups (whether the project or solution impacts disadvantaged people and vulnerable/marginalised groups, who are particularly susceptible to human rights risks)
  • Disadvantaged / Marginalized groups
  • Potentially impacted rightsholders

Accessibility (how accessible the digital solution/s or tool/s is/are to users)
  • Users as individuals
  • Accessibility
  • Monitoring

Stakeholder engagement (engagement of stakeholders on the potential impacts of the project and its solution/s or tool/s, and whether their concerns are addressed)
  • Engagement on potential impacts
  • Engagement with rights holders

Transparency and reporting
  • Reporting

Access to remedy
  • Remedy


Which phase of the project are you in?

Select 'Project Preparation' or 'Implementation' if you are using this Check with a focus on a financing recipient / Project Executing Agency (PEA) or a consultant developing or making use of digital solutions or tools.
Select 'planned use' or 'currently using' if you are focusing on digital tools or solutions used by KfW staff or their associates. Select an option to see the commentary on how the Digital Rights Check can be used for that particular option.

Comment - Project Preparation: Screening, Feasibility and Due Diligence

If you are using this Check during the project preparation phase in relation to a Project Executing Agency (PEA), consultant or another company developing digital solutions or tools, you may use the Check as an early guide to the human rights risks the solution / tool developer might, generally speaking, be exposed to, and to where the opportunities for positive impact might lie. This information could be an important input to the project’s Target Groups & Impacted People Analysis, including its Gender Analysis, among other assessments, depending on the project type and context. Of course, this Check replaces neither an environmental and social due diligence nor a human rights due diligence. Its aim is rather to offer additional insight into the typical risks of digitalization.

Comment - Implementation and / or (Remote) Monitoring by Project Executing Agency (PEA) / Consultant

If the project is currently in the implementation / monitoring phase with regard to a partner organization (e.g. a Project Executing Agency (PEA), consultant or supplier) that develops digital solutions/tools, you may use the Check to identify risks related to those solutions / tools and to gather ideas on what the relevant risk mitigation measures might be. Of course, this Check replaces neither an environmental and social due diligence nor a human rights due diligence. Its aim is rather to offer additional insight into the typical risks of digitalization.

Comment - Planned use of digital solutions / tools by KfW staff

If you are currently planning to use solutions/tools directly as KfW staff or associate, you may use the Check as an early guide and inspiration on how to avoid or mitigate human rights risks related to the solutions/tools in question. Of course, this Check replaces neither an environmental and social due diligence nor a human rights due diligence. Its aim is rather to offer additional insight into the typical risks of digitalization.

Comment - Currently using digital solutions / tools by KfW staff (e.g. for Remote Verification)

If you are currently using digital solutions/tools directly as KfW staff or associate, you may use the Check to identify potential human rights impacts related to the solutions/tools and to inform decision-making around next steps for addressing those impacts. Of course, this Check replaces neither an environmental and social due diligence nor a human rights due diligence. Its aim is rather to offer additional insight into the typical risks of digitalization.

Comment

You may use this Check in other phases or institutional set-ups and the text you entered will be shown in the results page for your own further use.
Please note, however, that this answer will not be saved by the website and does not change the recommendations of the Digital Rights Check.

In case you are planning or engaging in Remote Management, Monitoring or Verification using digital tools, you may check out KfW’s RMMV Guidebook containing useful information and best practices for each step in the project cycle, as well as descriptions of RMMV tool types and their relative pros and cons: Remote Management, Monitoring, and Verification Guidebook | KfW Development Bank (kfw-entwicklungsbank.de)

You may also have a look at the well-known Principles for Digital Development.

They serve as a compass for those working to promote sustainable and inclusive development in today’s complex digital landscape. Using these Principles as a starting point, policymakers, practitioners, and technologists will be better equipped to ensure that all people can benefit from digital initiatives and from the broader digital society.

Originally developed in 2014, the Principles are officially endorsed by more than 300 organizations, including donors, international organizations, and civil society organizations. During the first decade (2014-2024), they widely influenced funder procurement policies and the design and implementation of development programs. In 2024, the Principles were updated in consultation with a diverse set of individuals and organizations.

When engaging with and assessing human rights risks arising from technology company business models, you may read the UN OHCHR ‘Human rights Risks in Tech’ introduction to aspects such as algorithm-supported decision-making or high-risk customers and end-users.

Case example – project preparation phase 

As an example, suppose KfW is considering providing a loan to an organization that develops FinTech solutions, or providing a grant to a ministry of health to finance a reproductive health voucher scheme for women. If the solution is targeted at individual persons, it is important to consider already at this early stage what the broader human rights risks of the project might be, so that any preventive measures can be taken. In this example, it may be expedient to conduct a gender and human rights risk analysis in view of potential risks stemming from digital solutions. For this purpose, this Digital Rights Check gives you an idea of typical risks and offers a variety of general recommendations.

Case example – implementation & (remote) monitoring phase 

If, instead, KfW had already provided a loan to an organization that develops FinTech solutions, and if the FinTech solution may have impacts on individuals, e.g. through automated credit risk scoring, it would be important to apply a broader human rights lens to the project in order to take preventive measures to address potential future impacts, or to remedy any negative impacts that may have already occurred (such as impacts on the right to equality and non-discrimination or the right to an adequate standard of living).

Case example – ‘Planned use of digital tools by KfW staff’ or ‘Currently using digital tools by KfW staff’ 

It may well be that KfW directly procures digital solutions or tools for conducting a feasibility study (before entering into a financing agreement for a project) or a remote progress review. For example, KfW may contract a consultant to use mobile data collection to collect baseline information, needs, ideas and feedback from local stakeholders, and/or KfW staff may collect and store photos and video footage to verify the use of funds at a construction site. In these cases, the correct option would be to select either ‘Planned use of tools/solutions’ or ‘Currently using digital tools/solutions by KfW staff’, and to use this Digital Rights Check to assess what the human rights risks involved in using these tools/solutions might be and how to mitigate them.

 

Which of the following digital solutions or tools are being developed or used?

Select all applicable options. For more information about the response options, select the relevant option and see the comment.

Comment - Mobile Data Collection / Crowdsourcing, Smartphone Apps

Smartphone applications may be used for many different purposes, including for e-health programmes, fintech platforms, Covid-19 track-and-trace programmes, e-governance solutions, digital complaint / grievances mechanisms and many more. Depending on the purposes of the app, it may include significant data collection and processing activities as well as crowdsourcing and artificial intelligence models applied to the data. 

Comment - eLearning / Collaboration Tools

E-learning or other forms of virtual and digital learning and collaboration tools, though rarely high-risk tools, may often include some form of data collection. This could include collection of personal information, if there is gated access to the platform, and could also include collection of other data linked to specific responses, response times etc. Data from participants and users may also be analyzed and processed in different ways. 

Comment - Drones, Satellites, Geo-Information Systems (GIS)

Drones, satellites, Geographic Information Systems (GIS) and other geospatial tools may facilitate real-time collection, sharing and analysis of data on the ground. If several of these tools and devices are connected to each other, this may involve complex analysis of data from various sources. Usually, they do not collect personal data, but if the resolution of the imagery is very high and data is collected over time or using a combination of tools, there may be a risk of unintentionally tracking local household patterns.

Comment - Digital Communications Platform (incl. Social Media), MIS

Digital communication platforms and Management Information Systems (MIS) usually involve substantial processing of user or staff data. Platform / database user data is collected, stored and shared, and the use of a platform / database can also facilitate large-scale analysis of the data by applying artificial intelligence models, machine learning methodologies and so forth. Finally, a digital communication platform also makes use of user-generated data. Frequent examples include the use of social media for civil society engagement and e-health platforms, among others.

Comment - Building Information Modeling (BIM), Sensors, Internet of Things (IoT)

The use of BIM, sensors or IoT devices naturally includes different forms of data processing, including data collection and data storage. This means that particular attention must be paid to the data protection regulations and standards in the relevant context. Minimization and anonymisation of personal data need to be considered when designing the use of such devices, in order to avoid accidentally capturing and processing such data.

Comment - ePayment / Fintech Solutions

FinTech, or financial technology, solutions are digital solutions generally aimed at disrupting the financial industry by making use of digital technologies. This can include ePayments / mobile money transfers, credit risk scoring algorithms, chatbots to ‘triage’ customers and much more. ePayment tools and FinTech solutions often collect or otherwise process a lot of personal data, and considering that access to finance might be at stake, it is also important to consider discrimination and exclusion risks, e.g. those faced by women, people with disabilities or residents of rural areas.

Comment - Digital ID / Authentication / Biometrics

According to the UN Economic Commission for Africa, a digital identity (ID) “is a means of identifying or authenticating the identity of an individual both online and offline. Digital identity can be created from information found on a government-issued legal ID and be used to accurately recognize an individual in order to provide them with their rights or entitlements. A digital identity can also be created to provide an individual with access to digitized commercial services, such as e-commerce, e-government, digital banking etc.”

Digital IDs typically require the collection and handling of vast amounts of sensitive data (incl. biometric information like iris scans, voice or fingerprints), with the associated risks to privacy. There are also many examples where the rollout of a digital ID has left marginalized groups without access to public services, leading to a range of human rights impacts.

Comment - Data Analytics, Data Mining

Data can be analysed with a variety of tools to find patterns, validate hypotheses or visualise graphs, e.g. in dashboards. The provider of the data analytics software may need access to the dataset in order to process it; it is therefore important to understand whether the data is stored, anonymised or deleted after the analysis.

Comment - Artificial Intelligence

Artificial intelligence includes anything from simple automated prioritization tools to complex data-driven automated analysis solutions for public utilities (e.g. water use). The tools/solutions themselves typically require data that can then be further processed. As such, there can be issues around collecting, sharing and otherwise processing data, as well as specific issues concerning the application of the artificial intelligence itself.

Comment - Blockchain, Distributed Ledger Technologies (incl. TruBudget)

Distributed Ledger Technologies (DLT) like blockchain are often considered useful tools for processing distributed data in a secure way. In particular, they are said to guarantee that data that was entered cannot be deleted or manipulated later (immutability). However, this has implications if data is stored that must be deleted after some time (e.g. due to regulations), which is not easily possible in a distributed ledger. In addition, the storage of blockchain access credentials (e.g. the private keys used to insert data) needs to be set up carefully.
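
A common mitigation for the immutability issue is to keep personal data off-chain and to anchor only a salted hash of each record on the ledger. The following Python sketch is purely illustrative (it does not describe TruBudget or any specific DLT product): deleting the off-chain record and its salt later renders the immutable on-chain digest meaningless, which approximates erasure.

```python
# Minimal sketch (illustrative assumption, not a specific DLT implementation):
# anchor a salted hash on the ledger, keep the personal data off-chain.
import hashlib
import os

def make_commitment(record: bytes) -> tuple[bytes, bytes]:
    """Return (salt, digest); only the digest is written to the ledger."""
    salt = os.urandom(16)  # random salt prevents brute-force guessing of the record
    digest = hashlib.sha256(salt + record).digest()
    return salt, digest

# Off-chain: store (record, salt) in an ordinary database that supports deletion.
# On-chain: store only the digest. Deleting record and salt later leaves the
# immutable digest without any link to a person.
salt, digest = make_commitment(b"name=Jane Doe;claim=42")
assert hashlib.sha256(salt + b"name=Jane Doe;claim=42").digest() == digest
```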

Comment - Low Code or No Code Platforms

Low or no code platforms enable fast generation of digital tools with little or no programming skills. For example, mobile apps can be generated using a graphical user interface (GUI) within short time frames. However, it is important to understand that the data collected and processed by such applications is often saved within the perimeter of the low/no code platform provider.  

 

Comment - Integration to Internet Services via APIs

When using Application Programming Interfaces (APIs) to integrate internal data repositories with the internet, you need to identify whether / which personal data is being transferred, whether this can be avoided (data minimisation), and if not, how to protect this data. You also need to ensure that this data will be anonymised or deleted according to the data privacy regulations that apply to your case.
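
As a minimal illustration of data minimisation before an API transfer, the Python sketch below whitelists the fields that may leave the internal repository; all field names are hypothetical.

```python
# Minimal sketch (field names are hypothetical): only whitelisted fields
# may be passed on to the external API (data minimisation).
ALLOWED_FIELDS = {"site_id", "measurement", "timestamp"}  # no names, no contact data

def minimise(record: dict) -> dict:
    """Drop every field that is not explicitly whitelisted."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "site_id": "WS-017",
    "measurement": 13.7,
    "timestamp": "2024-05-01T10:00:00Z",
    "surveyor_name": "Jane Doe",   # personal data: must not leave the system
    "surveyor_phone": "+49 ...",   # personal data: must not leave the system
}

payload = minimise(record)  # safe to hand to whatever HTTP client is in use
print(payload)
```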

Comment - Cloud Services

The use of cloud services will naturally include different forms of data processing, including data collection and data storage. This means that particular attention must be paid to the data protection regulations and standards in the relevant context. You also need to inform the respective rights holders about the risk that entities from third countries (where the servers are located) may be able to access the data stored in the cloud.

Resources containing recommendations on how to prevent or mitigate human rights risks regarding technical/digital tools and solutions in international development:

General resources on addressing human rights risks in German (financial) development cooperation:

Digital tools and solutions used by KfW 

KfW finances many projects in which digital for development (D4D) tools and solutions play an important role. These range from e-learning programmes in Kenya, through sensor-based wildlife monitoring, to crowdsourcing citizen feedback.

See more examples on digitalisation and innovation financed by KfW here.

See also the Toolkit 2.0 Digitalisation in Development Cooperation (GIZ)

Does the project or solution(s)/tool(s) include any of the following data processing activities?

By 'data processing activities’ we mean the underlying activities of the relevant tools or solutions. For more information about the response options, select the respective option and see the related comment.

Comment - Data collection

This option implies that the project component is designed to collect data digitally. This may include projects focused on public participation where individuals’ opinions on government practices are linked to information identifying the individual.

Collecting data may cause a series of impacts, the most obvious being on the right to privacy, if, for example, there was no consent from the data subjects. However, the ‘threat’ of data being collected without consent may also have impacts on, e.g., the freedom of expression, since individuals may not want to share their opinion if that data is collected and could be traced back to them.

Comment - Data storage

Data storage is naturally linked to data collection, since it relates to what happens to the data after it has been collected. This includes e.g. having a database of all learners that have participated in an e-learning project. In some cases, data that is collected may simply not be stored, or it may be stored elsewhere and by someone else.

Data storage in itself may primarily impact the right to privacy, particularly if there are risks of data breaches. Even when there are legitimate reasons to store data, storing that data for an excessive period of time may raise significant right to privacy concerns, especially in fragile contexts or in authoritarian state systems.

Comment - Data alteration, treatment or use

This activity relates to how the data is used. While it is possible that nothing is done with the data except for storage, digital data collection often has the purpose of using the data to e.g. analyze patterns, make predictions, and so forth. One example is the analysis of large amounts of data to forecast disease outbreaks in a country.

Data alteration, treatment or use may have far-reaching impacts on human rights depending on the context. It may include using big data analytics to make predictions that end up being discriminatory. Another example: If health centers use data to improve efficiency, small errors could have severe impacts on the right to health. 

Comment - Data sharing

Data sharing simply means sharing data that has been collected with others. This often occurs in research contexts, where data used for one study is made available to other researchers.

First and foremost, data sharing may have impacts on the right to privacy, if the individual did not provide her/his informed consent. Further, the data shared might be used for purposes such as targeted advertising, which may among other things have discriminatory impacts. 

Comment - Hosting and/or sharing user-generated content

This relates to having a platform that is made available to certain users and where these users generate the content. This may include e-learning platforms where learners develop certain digital content, and could also concern crowdsourcing platforms for e-participation, grievances and redress, or monitoring, where users provide their preferences, observations, recommendations or feedback. Managing user-generated content may cause a series of impacts, the most obvious being on the right to privacy, if, for example, there was no consent from the users. The ‘threat’ of collecting data without consent may have additional impacts, e.g. on the freedom of expression, if individuals do not dare to share their grievances because they cannot provide them anonymously.

Simply hosting or sharing content may also have impacts. If hate speech is hosted on a platform, this may have negative impacts on the right to mental health. If certain discriminatory content is shared, that naturally may have impacts on the freedom from discrimination. 

If data is user-generated, the users might need control over the deletion of data, if they remove their account or decide that their data shall be deleted.  

Comment - Artificial Intelligence (AI) / machine learning

While there is no shared definition of AI, it generally concerns the science and engineering of making intelligent machines, especially intelligent computer programs. It includes the likes of voice and face recognition, self-driving cars as well as machine learning algorithms that can help predict weather patterns, droughts or even criminal activity.

Artificial intelligence may have many use-cases and can therefore have a large variety of impacts. These range from impacts on the right to a fair trial when AI is used in judicial systems; impacts on the right to equality and freedom if outcomes are biased to the detriment of marginalized groups; impacts on the right to health if AI-supported e-medicine platforms make sub-par decisions; impacts on the right to an adequate standard of living if AI-supported fast-track approvals for unemployment benefits limit access to such benefits for certain groups; to impacts on the right to privacy and many other rights in relation to COVID-19 track-and-tracing apps. For AI technologies, it is important to address common pitfalls: 

  • Could there be (discriminatory) bias in the data that was used for training the AI? 
  • Is the decision making process of the AI transparent and traceable? 
  • What are the risks of incorrect answers? Is AI used to fully automate processes or to assist? 

Please note that by ticking this box, the Digital Rights Check will include additional questions related to AI-specific risks.

Recommendation - I don't know

If you are not entirely sure which of the above-mentioned options apply to your project, it is recommended to select all potentially relevant options to make the most of the Digital Rights Check’s guidance.

In case you don’t know at all which data processing options may be used in your project, please (re-)assess the (planned) digital tools/solutions of your project and then conduct the Digital Rights Check again.

Comment - I don't know

It is important for project staff to understand the underlying technology of digital components, in order to assess their impacts. The Digital Rights Check will still let you proceed, but without specific questions related to the digital activities of your digital tools/solutions. 

Explanations:

Further resources:

For example, imagine a hypothetical project focusing on improved irrigation in an African country that has received financing from KfW. The project works together with local small-holder farmers to improve their yield while also using water resources more effectively. Due to the Covid-19 pandemic, some of the engagement with community leaders has taken place over digital platforms. The digital platform used is open to anyone who would like to join these engagement sessions, and many community leaders join. The use of the digital platform included the following digital activities: 

  • Data collection – users must fill in their name; data is collected on when a user logs on, logs off and what they write on the platform 
  • Data sharing – the data is shared between all partners of the project, in order to track the success of the engagement 
  • Data storage – the above data is stored, in order to track success over time 
  • Hosting and sharing user-generated content – the comments by the participants are user-generated content 

The following activities were not relevant: 

  • Data alteration, treatment or use – the data has only been logged and shared, but no further treatment of the data occurs on the platform 
  • Artificial intelligence (AI) / machine learning – not relevant in this case, since no automated processing of the data took place. 

 

Caution - No data processing

Given that your answer indicates that none of the above activities are taking place, the Digital Rights Check ends here. This may for example be the case if your project is about digital technologies but does not itself use such technologies (e.g. face-to-face education on digital literacy / eSkills). Feel free to check the “other” box in case you would like to continue through the other steps of the Digital Rights Check.

Has/have the digital solution/s or tool/s been assessed with a particular focus on responsible data use and data protection risks?

In general, this could be true if the EU General Data Protection Regulation (GDPR) requirements have been applied, or if your organization has specific and contextualized 'responsible data guidelines' that are meant to address this. If you are unsure of the answer, please consult with colleagues or with staff of the Project Executing Agency (PEA), consultants or other contractors who might have been involved in such a process.

Please note that, first of all, any data is subject to local law! If the partner country has privacy laws, the project needs to respect these.

Recommendation - Yes

Review and update the initial assessment on responsible data use and data protection risks as needed, and move on, paying particular attention to stakeholder engagement and marginalized groups.
Check whether the (updated) risk analysis is still in force and is being implemented, monitored and reported upon.

Potentially impacted human rights and principles - No

Right to Privacy

Recommendation - No

Ensure that the solution(s) / tool(s) is/are assessed from a responsible data perspective, as soon as possible.

Responsible data handling means applying ethical principles of transparency, fairness and respect to how we treat the data that affects people’s lives. Simply put, responsible data handling means going beyond “can we do this?” and asking “should we do this?” Guiding principles are transparency (no unwelcome surprises), fairness (impact on users) and respect for the individual.

Such an assessment has to include a data protection risk analysis containing recommendations for mitigation and remedy measures as well as a monitoring and reporting plan.

Once this assessment has been conducted, you need to check whether its risk analysis is in force and whether its recommendations are implemented, monitored and reported upon.

Potentially impacted human rights and principles - I don't know

Right to Privacy

Recommendation - I don't know

You need to consult with the Project Executing Agency (PEA), consultant, software contractor or others involved with the development or use of the digital solution(s) or tool(s) and see whether an assessment of responsible data use and data protection risks has taken place and whether the resulting risk analysis is in force and is being respected. If this is not the case, such an assessment needs to be conducted. The resulting risk analysis recommendations need to be implemented, monitored and reported upon.

Assessing a Digital Health Platform for Responsible Data Use and Data Protection Risks

In the realm of digital health, the protection of sensitive patient data is paramount. This fictitious case study illustrates the assessment of a digital health platform, “MediCare Connect” (a fictitious name), which facilitates remote consultations and health record management. The primary focus is on evaluating the platform’s responsible data use and data protection practices. MediCare Connect was developed to enhance patient access to healthcare services through telemedicine and centralized health records. The platform’s key features include video consultations, electronic health records (EHR) storage, and integration with wearable health devices for real-time monitoring.

The assessment of responsible data use should consider the fundamental principles of data privacy:

  • Notice—persons should be given notice when their data is being processed, for instance before contacting an interview partner or conducting an (online) focus group discussion.
  • Purpose—data should only be used for the predefined purpose stated and not for any other purposes. For example, only essential data required for providing healthcare services should be collected. Wherever possible, anonymization techniques should be used.
  • Disclosure—persons should be informed as to who is collecting their data. It is therefore highly recommended to establish strict policies for third-party data sharing, requiring platform partners to adhere to the same data protection standards. Data sharing agreements should be in place, outlining the responsibilities and obligations of third parties, including informing the platform users.
  • Consent—data should not be disclosed without the data subject’s consent (if no specific legal obligation exists). That is, platform users should be required to provide explicit consent before data collection, with clear, understandable consent forms (and other formats). It is advisable that the platform offer detailed information on how data will be used, stored, and shared, ensuring transparency.
  • Security—collected data should be kept secure from any potential abuses. It is recommended to use robust encryption methods for data at rest and in transit (a minimal sketch of encryption at rest follows after this list); multi-factor authentication (MFA) for both patients and healthcare providers to access the system could also be required. Regular security audits and vulnerability assessments should be conducted to identify and mitigate potential risks.
  • Access—data subjects should be allowed to have access to their data and be able to ensure that corrections to any inaccurate data are made. In general, users should be allowed to access, correct, and delete their personal data easily. Supporting data portability would enable users to transfer their data to other platforms or service providers.
  • Accountability—persons should have a method available to them to hold data collectors accountable for not following the above fundamental principles (this is possible by enabling principle 6 (Access) and additionally by providing them with the contact information for a complaints hotline or another complaints mechanism accessible to them). In this case, the complaints hotline as well as the legal notice should be easily accessible from the platform through multiple channels that are also accessible for persons with disabilities.
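
To make the Security principle above concrete, here is a minimal sketch of encrypting a record at rest. It assumes the third-party Python package cryptography; key management, key rotation and MFA are separate concerns and are not shown.

```python
# Minimal sketch (assumes the third-party "cryptography" package):
# symmetric encryption of a health record at rest.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: load from a key vault, never hard-code
fernet = Fernet(key)

record = b'{"patient_id": "P-1001", "note": "remote consultation 2024-05-01"}'
token = fernet.encrypt(record)           # ciphertext is safe to store at rest
assert fernet.decrypt(token) == record   # reading requires access to the key
```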

Who is in charge of data processing?

Who is responsible for managing the data source? Who is the controller of the data?

Recommendation - Partner Country Institutions, i.e. Project Executing Agency (PEA)

If partner country institutions, i.e. the Project Executing Agency (PEA), control the data sources, you need to check whether the design and implementation of the solution(s) and / or tool(s) respect national privacy laws and regulations as well as the fundamental principles of data privacy (notice, purpose, disclosure, consent, security, access and accountability).

Recommendation - KfW (GDPR, EU Tender Law, etc. do apply)

If KfW and / or its associates control data sources from Germany for their Remote Verification purpose, the GDPR and the Bundesdatenschutzgesetz (BDSG) must be respected. Any processing directed from Germany is governed by at least these two laws. Data sources that include personal information must always be managed and protected in line with the principles of the GDPR, e.g. purpose, legal basis, data minimization, accuracy, and storage limitation.

An overview of data privacy legislation can be found on the website
https://www.dlapiperdataprotection.com/index.html

The Digital Economy for Africa Initiative: Country Diagnostics (Version June 2023)

Complete Guide to GDPR Compliance: General Data Protection Regulation (GDPR) Compliance Guidelines

GDPR checklist for data controllers

GDPR legal text

BDSG legal text

 

If the partner country is in charge of data processing, the Recipient, Project Executing Agency (PEA) and / or their implementation consultant should ensure that their own data protection procedures comply with national / local law or, in the case of the United Nations acting as PEA, with its own data protection policy.

In addition, it is advisable to check the following fundamental principles of data privacy (see also the RMMV Guidebook published by KfW):

  • Notice—persons should be given notice when their data is being processed, for instance before contacting an interview partner or conducting a focus group discussion.
  • Purpose—data should only be used for the predefined purpose stated and not for any other purposes.
  • Disclosure—persons should be informed as to who is collecting their data.
  • Consent—data should not be disclosed without the data subject’s consent (if no specific legal obligation exists).
  • Security—collected data should be kept secure from any potential abuses.
  • Access—data subjects should be allowed to have access to their data and be able to ensure that corrections to any inaccurate data are made.
  • Accountability—persons should have a method available to them to hold data collectors accountable for not following the above fundamental principles (this is possible by enabling principle 6 (Access) and additionally by providing them with the contact information for a complaints hotline or another complaints mechanism accessible to them).

These fundamental principles of data privacy would apply in all situations of processing personal data and data that can be personalized. However, in the following situations there would most certainly be a heightened risk for data privacy:

  • Data privacy is a major concern in situations of limited freedom of opinion / speech / press, particularly in contexts of fragility and conflict, as well as in countries with authoritarian regimes.
  • It should be noted that the processing of sensitive private data as part of the project also creates increased risks for project-affected people.

In a partner country with a rather negative human rights record, for example, the installation of digital cameras at public places together with face recognition functions could pose a threat to individuals passing by, because the data could be misused by the security authorities.

Each of these two risks makes the strict implementation of data privacy by the PEA / consultant, within projects that involve the processing of personal data or of data that can be linked to individual persons, an indispensable prerequisite for protecting human rights.

In cases where KfW would directly contract services and therefore be in charge of data processing, the European GDPR applies (see resources).

Does KfW process data that can be linked to a person?

If KfW and / or its associates control data sources from Germany for their Remote Verification purpose, the GDPR and the BDSG must be respected. Any processing directed from Germany is governed by at least these two laws. Data sources that include personal information must always be managed and protected in line with the principles of the GDPR, e.g. purpose, legal basis, data minimization, accuracy, and storage limitation.

 

 

In case that a DFI controls data sources from Germany, it should comply with the General Data Protection Regulation (GDPR) and the Bundesdatenschutzgesetz (BDSG).

Definition of data processing according to Art. 4 No. 2 of the GDPR: Processing includes collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure, or destruction.

If it is not possible to minimize or anonymize all personal data before data processing or to anonymize all personal data during each data processing step, the following guiding questions help determine if the personal data can be processed or not (see RMMV Guidebook 2.3.1):

1. Is personal data of KfW staff and associates (i) collected or (ii) processed?

2. Is personal data processed for the legitimate interest to award compensation (e.g., as required by local laws)?

3. Do consent declarations of individuals concerned for RMMV purposes exist (potential impact of local laws!)?

4. Are there any contractual arrangements regarding the processing of personal data of individuals concerned for RMMV purposes (“warranted consent”—Please also observe local laws!)?

5. Is the processing of information by KfW explicitly required according to EU or EU Member State (German) law?

6. Is KfW obliged by public interest to process personal data?

7. May the RMMV information be relevant for the establishment, exercise, or defense of legal claims?

If any of the questions above can be answered with a Yes, the data can be processed while taking the necessary data protection and IT security precautions. If none of the questions above can be answered with a Yes, the data must be anonymized or destroyed.
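
Purely as an illustration of the checklist logic above (not a substitute for the legal assessment itself), the seven guiding questions can be encoded as a simple decision aid; the flag names below are hypothetical.

```python
# Minimal sketch (flag names are hypothetical): one "Yes" to any of the seven
# guiding questions permits processing with precautions; otherwise the data
# must be anonymized or destroyed.
GUIDING_QUESTIONS = [
    "staff_or_associates_data_only",        # question 1
    "legitimate_interest_compensation",     # question 2
    "consent_declarations_exist",           # question 3
    "contractual_arrangements_exist",       # question 4
    "required_by_eu_or_member_state_law",   # question 5
    "public_interest_obligation",           # question 6
    "relevant_for_legal_claims",            # question 7
]

def next_step(answers: dict) -> str:
    if any(answers.get(q, False) for q in GUIDING_QUESTIONS):
        return "process, with data protection and IT security precautions"
    return "anonymize or destroy the personal data"

print(next_step({"consent_declarations_exist": True}))
```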

For illustration, some obvious risks for personal data protection might be, among others:

  1. Unauthorized Access:
    • Scenario: Personal data collected from the partner country is accessed by unauthorized individuals due to weak security measures.
    • Impact: Breach of privacy, potential misuse of personal information, and loss of trust among stakeholders.
    • Mitigation: Implementing robust access controls, encryption, and regular security audits.
  2. Data Minimization:
    • Scenario: Excessive personal data is collected beyond what is necessary for the project.
    • Impact: Increased risk of data breaches and non-compliance with GDPR principles.
    • Mitigation: Ensuring data minimization by collecting only the data necessary for the project’s purposes.
  3. Data Accuracy:
    • Scenario: Inaccurate personal data is collected, leading to incorrect assessments and decisions.
    • Impact: Flawed monitoring, misguided project outcomes and potential harm to the affected population.
    • Mitigation: Regularly updating and verifying data to maintain accuracy.
  4. Storage Limitation:
    • Scenario: Personal data is retained longer than necessary.
    • Impact: Increased risk of data breaches and non-compliance with GDPR storage limitation principles.
    • Mitigation: Establishing clear data retention policies and regularly purging unnecessary data (a minimal sketch of such a purge follows after this list).
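
As a minimal sketch of the storage limitation mitigation (item 4), assuming a hypothetical table schema and a one-year retention policy:

```python
# Minimal sketch (schema and retention period are assumptions): purge records
# that are older than the retention policy allows.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # assumed policy: keep personal data one year

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (name TEXT, collected_at TEXT)")
conn.execute("INSERT INTO survey VALUES ('Jane Doe', '2020-01-15T00:00:00+00:00')")

cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
deleted = conn.execute("DELETE FROM survey WHERE collected_at < ?", (cutoff,)).rowcount
conn.commit()
print(f"purged {deleted} expired record(s)")
```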

Are exclusively personal data of KfW staff and associates (i) collected or (ii) processed?

The collection of personal data of KfW staff and associates is allowed within the limits of their contracts and GDPR.

Recommendation - Yes

When KfW is processing the personal data of its staff, consultants or other associates, it has to follow GDPR and BDSG.

KfW staff is required to follow internal data processing guidelines.

Recommendation - I don't know

In case you are not sure whether exclusively data of KfW staff and associates are being processed (i.e. no personal data from other rights holders), we recommend that you assume this is not the case and therefore answer “No”.

Are personal data (i) collected or (ii) processed for the legitimate interest to award compensations (e.g. as required by local laws)?

Persons whose land rights have been affected due to public infrastructure projects are to receive the correct compensation as required by national/local laws and World Bank standards.

Recommendation - Yes

KfW is required to verify that persons whose rights (e.g. land rights, livelihood and living standards, relevant social and economic rights etc.) have been affected due to public infrastructure projects receive the correct compensation as required by national/local laws and World Bank standards. In order to be able to verify whether the initially identified rights holders received the correct compensation amounts, their personal data has to be stored by KfW in line with the GDPR (see Art. 6 GDPR).

KfW staff is to follow internal data processing guidelines.

Recommendation - I don't know

In case you are not sure whether there is a legitimate interest for the type of personal data to be processed, we recommend that you assume this is not the case and therefore answer “No”.

See Art. 6(1)(f) GDPR: legitimate interest

Under the General Data Protection Regulation (GDPR), processing personal data can be justified by a legitimate interest, among other legal bases.

The definition of data processing according to Art. 4 No. 2 of the GDPR is: ‘Processing includes collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure, or destruction.’

When it comes to processing data for awarding compensations, the legal basis of legitimate interest could apply.

Legitimate Interest is one of the six lawful bases for processing personal data under Article 6(1)(f) of the GDPR. It allows for processing if:

  1. The interest pursued is legitimate: There must be a genuine and lawful reason behind the processing.
  2. The processing is necessary: The data processing must be necessary to achieve the stated interest.
  3. Balancing test: The interests or fundamental rights and freedoms of the data subject must not override the legitimate interest of the data controller or a third party.

When considering compensation practices according to the World Bank standards, legitimate interests are often grounded in the broader goals of sustainable development, social responsibility, and compliance with international regulations. While the World Bank itself does not operate under a legal framework identical to the GDPR, it does emphasize principles of fairness, transparency, and accountability. Potential legitimate interests for awarding compensations according to the World Bank standards could be, for example:

1. Restoration of Livelihoods and Living Standards

  • Example: Providing financial compensation and relocation assistance to residents displaced by a dam construction project to restore their previous living standards or improve them.

2. Social and Economic Development

  • Example: Awarding compensation to local businesses impacted by infrastructure development to support economic stability and growth.

3. Equitable Resource Distribution

  • Example: Compensating farmers for loss of access to agricultural land due to a new road construction project that benefits the broader community.

4. Compliance with International Standards and Agreements

  • Example: Providing compensation in alignment with international labor standards to workers affected by project-related employment changes.

5. Promoting Social Equity and Inclusion

  • Example: Ensuring that women, minorities, and other vulnerable groups receive compensation for their specific needs and impacts in a development project.

6. Mitigating Adverse Impacts

  • Example: Awarding compensation for environmental damage caused by construction activities, such as loss of biodiversity or water pollution.

7. Strengthening Community Relations

  • Example: Implementing community development programs that include compensatory measures for disruptions caused by project activities.

8. Supporting Resettlement and Rehabilitation

  • Example: Providing housing compensation and job training programs for individuals relocated due to urban development projects.

9. Encouraging Participation and Consent

  • Example: Compensating community members for time and resources spent in consultations and participatory planning processes.

10. Risk Management and Conflict Prevention

  • Example: Implementing compensation strategies to address potential grievances before they escalate into conflicts.

Compare the standards ESS 1-10 of the World Bank Environmental and Social Framework (ESF) including the World Bank’s Operational Policies (OPs) and Bank Procedures (BPs).

Do consent declarations of individuals concerned for project purpose exist?

Before taking pictures or collecting other personal data of people, their written consent to its processing (and publishing) is required. The form of this written consent has to adhere to local privacy laws and regulations.

Recommendation - Yes

KfW staff and associates are required to follow national laws on consent declarations and internal data processing guidelines.

 

Recommendation - No

Local laws are to be observed!

Review your efforts to inform individuals of how their data is used, as well as the processes in place for them to provide consent. Engage with individuals to assess how these efforts can be improved.

If informed consent cannot be secured, cease the processing of personal data. 

Recommendation - I don't know

Check if consent declarations have been provided by the individuals whose data has been collected.

In some cases, however, it is doubtful whether consent has been given freely, particularly if declining consent was not a realistic option in practice.

Therefore, even in case consent declarations do exist, it is important to assess whether the consent provided has been free and informed. 

A frequent case is taking pictures of the projects that include project-affected people. Before taking their picture, their written consent to its processing (and publishing) is required. The form of this written consent has to adhere to local privacy regulations.

Are there any contractual arrangements regarding the processing of personal data of individuals concerned for project purpose ("warranted consents")?

Has data processing of project staff been covered in the contractual arrangements between the project stakeholders and, if so, to what extent? Of course, local laws are to be observed!

Recommendation - Yes

Contractual arrangements have to observe local laws. KfW staff is required to follow internal data processing guidelines.

Recommendation - I don't know

In case you are not sure whether warranted consents exist for the type of personal data being processed, we recommend that you assume this is not the case and therefore answer “No”.

This is the case, for example, if and as far as personal data processing of project staff has been covered in the contractual arrangements between KfW, the Recipient, the Project Executing Agency (PEA), the implementation consultant and / or other project stakeholders.

Is the processing of information by KfW explicitly required acc. to EU or EU Member State (German) law?

Anti-Money Laundering and Anti-Corruption laws, for example, may require the processing of personal data of project stakeholders.

Recommendation - Yes

KfW staff is to follow internal data processing guidelines.

Recommendation - I don't know

In case you are not sure whether the processing of the personal data is explicitly required according to EU or EU Member State (German) law, we recommend that you assume this is not the case and therefore answer “No”.

Anti-Money Laundering and Anti-Corruption laws, for example, may require KfW or other project partners or audit firms to process personal data of project partner representatives or other project stakeholders.

Is KfW obliged by public interest to process personal data?

Examples of public interest not already covered by the other cases mentioned above could be the fight against terrorism or public health action in times of a pandemic.

Recommendation - Yes

KfW staff is to follow internal data processing guidelines.

Recommendation - I don't know

In case you are not sure whether the processing of the personal data is explicitly required by public interest, we recommend that you assume this is not the case and therefore answer “No”.

There may be a case of public interest not already covered by the other cases mentioned above. One example could be security services of partner countries asking for personal data in the name of public order (fight against terrorism, public health action in times of a pandemic, etc.).

May the project information be relevant for the establishment, exercise or defense of legal claims?

The processing of personal data may be necessary for the establishment, exercise or defense of legal claims by KfW, project partners or project-affected people.

Recommendation - Yes

KfW staff is to follow internal data processing guidelines.

Recommendation - I don't know

In case you are not sure whether the personal data being processed is relevant for the establishment, exercise or defense of legal claims, we recommend that you assume this is not the case and therefore answer “No”.

The processing of personal data may be necessary for the establishment, exercise or defense of legal claims by KfW, project partners or project-affected people.

An example is the investigation of a potential case of subsidy fraud concerning a KfW-financed loan. Another would be the proceedings of a complaint mechanism.

Is anonymized collection of data possible?

It may be possible, for example, to separate the personal data of the interviewees in a representative household survey from the survey responses already during the data collection exercise, so that only anonymized data is entered into the survey database.
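
A minimal sketch of what such separation at collection time could look like (field names are hypothetical): identifiers and responses are stored apart, linked only by a random pseudonym.

```python
# Minimal sketch (field names are hypothetical): split each survey record so
# that only pseudonymized responses enter the survey database.
import uuid

def split_record(raw: dict) -> tuple[dict, dict]:
    pseudonym = uuid.uuid4().hex
    identifiers = {"pseudonym": pseudonym, "name": raw["name"], "phone": raw["phone"]}
    responses = {k: v for k, v in raw.items() if k not in ("name", "phone")}
    responses["pseudonym"] = pseudonym
    return identifiers, responses

identifiers, responses = split_record(
    {"name": "Jane Doe", "phone": "+49 ...", "household_size": 5, "income_band": "B"}
)
# 'responses' goes into the survey database; 'identifiers' is stored separately
# under stricter access controls, or not stored at all.
```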

Recommendation - Yes

Since you did not answer YES to any of the previous questions that would allow the processing of personal data according to the GDPR, you have to anonymize the personal data you are processing.

In completing this task, it is recommended to follow your organization’s data anonymization policy.

Comment - Yes

Risks related to the right to privacy have been significantly reduced. Continuously assess whether the amount and type of data collected and used should be changed, and whether, or when, data can be deleted. 

Potentially impacted human rights and principles - No

Right to privacy

Recommendation - No

Review your data processing practice and ensure that data is minimized, since this can significantly lower human rights risks. Minimizing data can include, for example, initially collecting only limited amounts of data, as well as developing sunset clauses that ensure that data is only kept for a limited period of time, after which it is deleted.

Potentially impacted human rights and principles - I don't know

Right to privacy

Recommendation - I don't know

Review your data processing practice and ensure that data is minimized, since this can significantly lower human rights risks. Minimizing data can include, for example, initially collecting only limited amounts of data, as well as developing sunset clauses that ensure that data is only kept for a limited period of time, after which it is deleted.

 


Is the anonymized processing of data regarding individuals after collection possible?

Is it possible to run the collected personal data through an automated anonymization routine which deletes all data that can be linked to individuals?
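
For illustration, a minimal sketch of such a routine in Python (using pandas) is shown below. The column names and the region lookup are hypothetical; note that simply dropping and coarsening columns is not a guarantee of anonymity, as discussed under the re-identification question further below.

```python
import pandas as pd

# Assumed lookup table generalizing fine-grained locations to coarser regions.
VILLAGE_TO_REGION = {"village_a": "region_1", "village_b": "region_1"}

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    # Delete direct identifiers outright.
    out = df.drop(columns=["name", "phone", "national_id"], errors="ignore")
    # Coarsen quasi-identifiers so that combinations of values are less
    # likely to single out an individual.
    out["age_band"] = pd.cut(out.pop("age"), bins=[0, 18, 30, 45, 60, 120])
    out["region"] = out.pop("village").map(VILLAGE_TO_REGION)
    return out

raw = pd.DataFrame({
    "name": ["A. Example"], "phone": ["+49 ..."], "national_id": ["123"],
    "age": [34], "village": ["village_a"], "household_size": [5],
})
anonymized = anonymize(raw)  # keeps only age_band, region, household_size
```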

Potentially impacted human rights and principles - Yes

Right to privacy

Recommendation - Yes

Since you did not answer YES to any of the previous questions that would allow the processing of personal data according to the GDPR, you have to anonymize the personal data you have been processing.

In completing this task, it is recommended to follow your organization’s data anonymization policy.

Potentially impacted human rights and principles - No

Right to privacy

Recommendation - No

Since you did not answer YES to any of the previous questions that would allow the processing or anonymization of personal data according to the GDPR, you have to delete the personal data you have been processing.

In completing this task, it is recommended to follow your organization’s data erasure policy.

Potentially impacted human rights and principles - I don't know

Right to privacy

Recommendation - I don't know

Since you did not answer YES to any of the previous questions that would allow the processing or anonymization of personal data according to the GDPR, you have to conduct an assessment to identify, and then delete, the personal data you have been processing that does not comply with the GDPR.

In completing this task, it is recommended to follow your organization’s data erasure policy.


Does KfW receive personal data?

If data sources are controlled from abroad by the Project-Executing Agency (PEA) or a PEA-contracted consultant based outside of the EU, and without any primary influence from Germany or the EU (this also includes individual associates of KfW operating locally), the local privacy requirements must be observed. KfW may have contractually agreed with the local PEA, suppliers, and consultants to introduce additional privacy safeguards, such as issuing privacy notices and collecting consent declarations. Persons active locally must ensure that these measures are applied as agreed. Any non-compliance should be reported immediately to the relevant focal point at KfW.

As soon as the same data sources are accessed and processed by KfW (for Remote Verification purposes), the GDPR and the BDSG must additionally be observed. Please note that merely viewing personal data on a screen may not be considered data processing, provided the data is not saved, for example in the form of a screenshot.

Recommendation - Yes

If data sources are controlled from abroad by the Project-Executing Agency (PEA) or a PEA-contracted consultant based outside of the EU, and without any primary influence from Germany or the EU (this also includes individual associates of KfW operating locally), the local privacy requirements must be observed. KfW may have contractually agreed with the local PEA, suppliers, and consultants to introduce additional privacy safeguards, such as issuing privacy notices and collecting consent declarations. Persons active locally must ensure that these measures are applied as agreed. Any non-compliance should be reported immediately to the relevant focal point at KfW.

As soon as the same data sources are accessed and processed by KfW (for Remote Verification purposes), the GDPR and the BDSG must additionally be observed. Please note that merely viewing personal data on a screen may not be considered data processing, provided the data is not saved, for example in the form of a screenshot.

Recommendation - No, but Partner country institutions, i.e. Project Executing Agency (PEA)

If data sources are controlled from abroad by the Project-Executing Agency (PEA) or a PEA-contracted consultant based outside of the EU, and without any primary influence from Germany or the EU (this also includes individual associates of KfW operating locally), the local privacy requirements must be observed. KfW may have contractually agreed with the local PEA, suppliers, and consultants to introduce additional privacy safeguards, such as issuing privacy notices and collecting consent declarations. Persons active locally must ensure that these measures are applied as agreed. Any non-compliance should be reported immediately to the relevant focal point at KfW.

Recommendation - No, but project consultants based outside of the EU contracted by the PEA

If data sources are controlled from abroad by the Project-Executing Agency (PEA) or a PEA-contracted consultant based outside of the EU, and without any primary influence from Germany or the EU (this also includes individual associates of KfW operating locally), the local privacy requirements must be observed. KfW may have contractually agreed with the local PEA, suppliers, and consultants to introduce additional privacy safeguards, such as issuing privacy notices and collecting consent declarations. Persons active locally must ensure that these measures are applied as agreed. Any non-compliance should be reported immediately to the relevant focal point at KfW.

Recommendation - No, but project consultants based in the EU contracted by the PEA

If data sources are controlled from abroad by a consultant based in the EU and contracted by the Project-Executing Agency (PEA), both the local privacy requirements and the GDPR and BDSG must be observed.

Please note that merely viewing personal data on a screen may not be considered data processing, provided the data is not saved, for example in the form of a screenshot.

National privacy requirements must always be observed.

In case KfW or consultants based in the EU do receive personal data or data that can be personalized (e.g. for Remote Verification purposes), the GDPR and BDSG must be additionally observed:

Data protection principles according to the GDPR:

If you are processing personal data or data that can be personalized, you have to follow the data protection principles outlined in Article 5(1) and (2) GDPR:

  1. Lawfulness, fairness and transparency — Processing must be lawful, fair, and transparent to the data subject.
  2. Purpose limitation — You must process data for the legitimate purposes specified explicitly to the data subject when you collected it.
  3. Data minimization — You should collect and process only as much data as absolutely necessary for the purposes specified.
  4. Accuracy — You must keep personal data accurate and up to date.
  5. Storage limitation — You may only store personally identifying data for as long as necessary for the specified purpose.
  6. Integrity and confidentiality — Processing must be done in such a way as to ensure appropriate security, integrity, and confidentiality (e.g. by using encryption).
  7. Accountability — The data controller is responsible for being able to demonstrate GDPR compliance with all of these principles.
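
As a simple illustration of principle 6, the following sketch encrypts personal data at rest using the open-source Python cryptography library (Fernet symmetric encryption). It is deliberately minimal: in practice the key would be held in a dedicated key management system, never generated and kept alongside the data.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: obtain from a key management system
cipher = Fernet(key)

# Personal data is stored only in encrypted form ...
token = cipher.encrypt(b"interviewee: A. Example, phone +49 ...")

# ... and decrypted only where access is authorized and logged.
plaintext = cipher.decrypt(token)
```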

Please note that merely viewing personal data on a screen may not be considered data processing, provided the data is not saved, for example in the form of a screenshot.

For example, consider the fictitious case of KfW partnering with an African government in wildlife protection:

To better monitor the progress of the project, the implementation of a Management Information System (MIS) was agreed. The Implementation Consultant, based in the same country, accesses this MIS to extract data for monitoring purposes and sends an aggregated monitoring report without any personal data to KfW. Only the national privacy law applies, since the PEA operates the MIS containing the personal data, while KfW only receives aggregated data.

As the project progresses, KfW staff conduct a Remote Progress Review for this project. They request direct access to the MIS to remotely verify the use of funds. If they only view the MIS, do not take any screenshots that include personal data, and produce an aggregated report on the use of funds, no further action is necessary. If KfW staff need to process some personal data for documentation to its client (e.g., pictures of PEA staff from the MIS), these pictures must be pixelated in accordance with the GDPR.

Do the digital solution/s or tool/s require the collection, storage or other processing of sensitive personal data by project partners?

Sensitive data can include: personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership; genetic data; biometric data processed solely to identify a human being; health-related data; and data concerning a person’s sex life or sexual orientation. Note: Algorithms can be applied to generate sensitive personal data about people from non-sensitive data.

Potentially impacted human rights and principles - Yes

Right to privacy

Recommendation - Yes

The processing of sensitive data means that there is a strong need for a human rights-based approach to the project, taking the potentially impacted individuals as the point of departure. Ensure that such a process is in place and revisit existing impact analyses.

Comment - No

The fact that no sensitive data is collected, stored or otherwise processed implies a lower risk in terms of potential human rights impacts.

Potentially impacted human rights and principles - I don't know

Right to privacy

Recommendation - I don't know

Consult the further resources and engage with the Project-Executing Agency (PEA), consultant, software contractor and others involved in the development or use of the digital solution or tool, and ask whether sensitive data is processed. 

Sensitive data includes (according to the EU General Data Protection Regulation):

  • personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership
  • genetic data, biometric data processed solely to identify a human being
  • health-related data
  • data concerning a person’s sex life or sexual orientation.

Important note: Algorithms can be applied to generate sensitive personal data about people from non-sensitive data.

The case of the National e-health platform

This (fictitious) example of country X provides an insightful case study of how to develop and operate an e-health platform that respects and protects human rights, focusing on marginalized communities.

The country’s initiative aimed to enhance healthcare access across its population, including rural and underserved areas. For this purpose, the government launched a comprehensive e-health strategy as part of its Vision 2020 plan, focusing on digital health transformation.

A key component was the implementation of an integrated e-health platform to facilitate telehealth services, electronic medical records (EMRs), and mobile health applications.

A. To ensure that the e-health platform protected human rights, the country focused on several critical areas:

Privacy and Data Protection

  1. The country established a legal framework for data protection to safeguard personal health information, enacting a Data Protection and Privacy Law which outlines data handling procedures and ensures user consent.
  2. The platform employed advanced encryption standards and cybersecurity measures to protect sensitive data from unauthorized access and breaches.
  3. Patients were informed about data usage and had to provide consent before their data was collected or shared. This included the ability to opt out or withdraw consent at any time.

Non-Discrimination and Inclusivity

  1. The platform specifically targeted rural populations, women, and low-income individuals who historically faced healthcare access barriers.
  2. Services were offered in multiple languages, including regional languages, English, and French, to cater to diverse populations. The platform also accommodated people with disabilities by ensuring accessibility features.
  3. To combat financial barriers, the platform provided low-cost services. The government subsidized telehealth consultations and made them affordable for low-income citizens.

Right to Health

  1. The platform offered a range of services, including consultations, diagnostics, prescriptions, and referrals, ensuring holistic healthcare access (comprehensive health services).
  2. The platform underwent regular evaluations and updates based on user feedback to improve service delivery and address emerging health needs.
  3. Health professionals received training to utilize the platform effectively, ensuring high-quality care and user trust.

B. The new e-health platform incorporated several strategies to respect and protect users’ rights:

Stakeholder Engagement

  1. The government engaged with communities, local leaders, and civil society organizations to understand needs and concerns, ensuring the platform addressed real issues faced by marginalized groups.
  2. Collaborations with NGOs helped raise awareness about e-health services and educated communities about their rights and data protection.

Monitoring and Accountability

  1. An independent body was established to oversee the platform’s operations, ensuring compliance with human rights standards and addressing any grievances or violations.
  2. The platform underwent audits to assess compliance with data protection laws and human rights commitments. Findings were published to maintain transparency.

Feedback Mechanisms

  1. Users were enabled to provide feedback through hotlines, surveys, and community meetings. This input was crucial for making adjustments to the platform.
  2. A robust mechanism was put in place to address user complaints promptly, ensuring accountability and trust.

C. The country’s e-health platform demonstrated significant positive outcomes:

  • Telehealth services have expanded healthcare access to millions of people, particularly in remote areas.
  • The platform has contributed to better health outcomes, including reduced mortality rates and improved management of chronic diseases.
  • By focusing on inclusivity and affordability, the platform has empowered marginalized communities to exercise their right to health.

This inclusive approach to implementing its national e-health platform highlights how a comprehensive strategy can effectively address human rights risks and ensure that such a platform is inclusive, secure, and respectful of all users’ rights. This model can serve as a valuable reference for others seeking to implement similar initiatives.

Please note that this is a fictitious case for illustrative purposes. In a real-world scenario, there would most certainly be a number of other aspects, not mentioned here, to be covered by the environmental, social and human rights due diligence.

Have all individuals been informed about the processing of their data, and have they provided their informed consent to the processing?

People often neither fully understand nor feel in control of what data about them is processed and how. Ticking a simple digital checkbox does not necessarily mean that the individual has been informed and has fully understood what he or she is consenting to. As such, in relation to medium- to high-risk applications of digital components, it is particularly relevant to ensure that individuals have fully understood what data is collected about them. Consider how informed consent from individual persons has been obtained and write it down. It can be an important exercise to reflect on how exactly that has happened.

Recommendation - Yes

While having obtained consent is essential, it is also important to assess whether the consent provided has been free and informed. In some cases, it is doubtful whether consent has been given freely, particularly if not providing consent was not a realistic option in practice. Consider how informed consent from individual persons has been obtained and write it down. It can be an important exercise to reflect on how exactly that has happened and what would need to be improved.

Potentially impacted human rights and principles - No

Right to privacy, right to freedom of information and right to participation

Recommendation - No

Review your efforts to inform individuals of how their data is used, as well as the processes in place for them to provide consent. Engage with individuals to assess how these efforts can be improved. If informed consent cannot be secured, consider whether to cease the processing of personal data. 

Potentially impacted human rights and principles - I don't know

Right to privacy, right to freedom of information and right to participation

Recommendation - I don't know

Engage with the Project-Executing Agency (PEA), consultant, software contractor and others involved in the development and use of the digital solution/s or tool/s, and ask whether informed consent has been achieved for data collection and other forms of processing, and how that has been achieved (if relevant). Even where consent has been obtained, it may be doubtful whether it was given freely, particularly if not providing consent was not a realistic option in practice. Consider how informed consent from individual persons has been obtained and write it down. It can be an important exercise to reflect on how exactly that has happened and what would need to be improved.

Here is a fictitious case illustrating the meaning of informed consent:

Informed Consent in E-Learning Platforms

Purpose: The fictitious Global Learning Institute (GLI) uses the EduLearn platform to provide online courses and training programs to students worldwide. The platform offers interactive features such as video lectures, quizzes, forums, and personalized progress tracking. To comply with data protection laws and ensure transparency, GLI must inform users about the data collected by the EduLearn platform and obtain their informed consent before they begin using the service. Only by providing clear information, obtaining explicit consent, and offering ongoing control over personal data can GLI ensure that users of the EduLearn platform are fully informed and can make knowledgeable decisions about their data. This approach not only complies with legal requirements but also builds trust and transparency with users.

Example of Informed Consent Process:

Information Provision:

When a new user registers for an online course on EduLearn, they are presented with a clear and concise privacy notice. This notice includes:

Types of Data Collected:

Personal Information: Name, email address, date of birth.
Usage Data: Course progress, quiz scores, interaction history.
Technical Data: IP address, device information, browser type.

Purpose of Data Collection:

To personalize learning experiences and track progress.
To communicate with users about course updates and new offerings.
To improve platform functionality and user experience.

Data Sharing:

Data may be shared with third-party service providers for platform maintenance and support.
Anonymized data may be used for research and analytical purposes.

User Rights:

Users can access, correct, or delete their personal data.
Users can withdraw consent at any time without affecting their access to courses.

Data Protection Measures:

Encryption of data during transmission and storage.
Regular security audits and compliance with data protection regulations.

Consent Mechanism:

Before completing the registration process, users are required to read the privacy notice and provide their consent. This is done through a consent form that includes:

A checkbox to indicate that they have read and understood the privacy notice.
A statement affirming their consent to the collection, processing, and sharing of their data as described.
An option to customize their consent preferences, such as opting out of certain types of data processing or communications.

Example of a Consent Statement: “I have read and understood the Privacy Notice provided by Global Learning Institute. I agree to the collection, processing, and sharing of my personal data as described. I understand that I can withdraw my consent at any time.”
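
For illustration, here is a minimal Python sketch of how such consent choices could be recorded so that they remain auditable and revocable. The structure and field names are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    notice_version: str   # which version of the privacy notice was shown
    purposes: list        # e.g. ["progress_tracking", "course_updates"]
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Withdrawal is recorded rather than deleted, so the consent
        # history stays auditable.
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

consent = ConsentRecord("user-42", "privacy-notice-v3", ["progress_tracking"])
consent.withdraw()  # must remain possible at any time, per the statement above
```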

Confirmation:

Upon providing consent, users receive a confirmation email summarizing their consent choices and providing information on how to access and manage their data preferences in the future.

Follow-Up and Transparency

Access and Management:

Users can log into their EduLearn account at any time to review and update their consent preferences through the “Privacy Settings” page.

Regular Updates:

GLI periodically reviews and updates its privacy notice and consent mechanisms to ensure ongoing compliance with data protection regulations and to address any changes in data processing practices.

Do the digital solution/s or tool/s follow the principles of data minimization?

Data minimization means that data collected, used or stored should be adequate, relevant and limited to what is necessary for the intended purposes. Where data is not minimized, risks to the right to privacy increase.

Comment - Yes

If the digital solution/s or tool/s follow the principles of data minimization, then this has significantly reduced the risks related to the right to privacy. Continuously assess whether the amount and type of data collected and used should be changed, and whether there is data that can be deleted. 

Potentially impacted human rights and principles - No

Right to privacy

Recommendation - No

If the digital solution/s or tool/s DO NOT follow the principles of data minimization yet, then reach out to the Project-Executing Agency (PEA), its software developer, the consultant and others involved in the development or use of the digital solution/s or tool/s, to review your project’s data processing practices and ensure that data is minimized, since this can significantly lower human rights risks. Minimizing data can include, for example, initially collecting only limited amounts of data, as well as developing sunset clauses that ensure that data is only kept for a limited period of time, after which it is deleted.

Potentially impacted human rights and principles - I don't know

Right to privacy

Comment - I don't know

In case you are not sure whether the digital solution/s or tool/s follow the principles of data minimization, reach out to the Project-Executing Agency (PEA), its software developer, the consultant and others involved in the development or use of the digital solution/s or tool/s, to review your project’s data processing practices and ensure that data is minimized, since this can significantly lower human rights risks. Minimizing data can include, for example, initially collecting only limited amounts of data, as well as developing sunset clauses that ensure that data is only kept for a limited period of time, after which it is deleted.

Consider the following examples:

A ‘smart’ public employment service automatically sends job-seekers a general questionnaire, which includes specific questions about health conditions that are only relevant to particular manual occupations. It would be irrelevant and excessive to obtain such information from an individual who applied for an office job. Data must thus be minimized from the outset or subsequently deleted when it is found not to be relevant.


An online classroom is set up for remote education during a Covid-19 school closure. Parents are sent a general questionnaire, which includes questions about health conditions in order to assess accessibility needs. However, only certain disabilities are relevant, and obtaining further health information would be excessive. Data collection, use and storage must thus be minimized.
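
A minimal sketch of how two such minimization measures, a field whitelist and a sunset clause, could look in code is shown below (Python; the field names and retention period are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

# Only fields strictly needed for the stated purpose are ever stored.
ALLOWED_FIELDS = {"applicant_id", "qualifications", "desired_occupation", "collected_at"}
RETENTION = timedelta(days=180)  # assumed sunset period

def minimize(record: dict) -> dict:
    # Irrelevant or excessive fields (e.g. unrelated health questions) are dropped.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def purge_expired(records: list) -> list:
    # Sunset clause: records older than the retention period are deleted.
    now = datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] < RETENTION]
```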

Do you believe there is a risk that anonymized personal data can be made into identifiable personal data again?

For example, artificial intelligence can be applied to re-identify the individuals linked to anonymized 'individual data' by combining various data sources and data points. Therefore, the risks to the right to privacy can remain even after data has been anonymized.

Potentially impacted human rights and principles - Yes

Right to privacy

Recommendation - Yes

If you do believe there is a risk that anonymized personal data can be made into identifiable personal data again:

First, review whether data collected and used could be further minimized. Revisit this as the project progresses since it may be possible to identify unnecessary data at later stages of a project. Second, review how collected data might be or is shared to third-parties, since that may increase the risk of de-anonymization. 

Comment - No

If you do NOT believe there is a risk that anonymized personal data can be made into identifiable personal data again:

There is often some risk of this occurring (see further references), and as such it is important to continuously review this risk as data is considered to be shared with third-parties. 

Potentially impacted human rights and principles - I don't know

Right to privacy

Recommendation - I don't know

In case you don’t know if there is a risk that anonymized personal data can be made into identifiable personal data again:

There is often some risk of this occurring (see further references), and as such it is important to continuously review this risk as data is considered to be shared with third-parties. 

AI applications can be used to identify and thereby track individuals across different devices, in their homes, at work, and in public spaces. For example, while personal data is routinely (pseudo-)anonymised within datasets, AI can be employed to de-anonymise this data. Facial recognition is another means by which individuals can be tracked and identified, which has the potential to transform expectations of anonymity in public space. Typical risks are, for example:

  • Data triangulation occurs when anonymized data sets are combined with other data sets, making it possible to identify individuals; if an anonymized health record is matched with public voter records, re-identification can happen.
  • Inadequate anonymization techniques pose another risk; if the anonymization process is not thorough enough, patterns in the data might still reveal individual identities, such as when dates or locations are not sufficiently generalized.
  • Advances in technology also increase risks, as sophisticated algorithms and powerful computing capabilities can analyze and cross-reference vast amounts of data to uncover identities, such as using machine learning to predict identities from browsing habits.
  • Unique data points within anonymized data can inadvertently reveal identities, like a rare medical condition combined with demographic data pointing to a specific individual.
  • Lastly, insider threats involve individuals with authorized access to anonymized data using their knowledge to re-identify subjects, for example, a disgruntled employee exploiting their understanding of the data structure and context.
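
One rough way to probe the re-identification risk before sharing a dataset is a simplified k-anonymity check: group the records by their quasi-identifiers and look at the smallest group size. A minimal sketch (Python/pandas, hypothetical columns):

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list) -> int:
    # Size of the smallest group sharing the same quasi-identifier values;
    # k == 1 means at least one record is unique and may be re-identified
    # by linkage with another dataset.
    return int(df.groupby(quasi_identifiers).size().min())

df = pd.DataFrame({
    "age_band":  ["30-45", "30-45", "60+"],
    "region":    ["north", "north", "south"],
    "diagnosis": ["flu", "flu", "rare_condition"],
})
print(k_anonymity(df, ["age_band", "region"]))  # 1 -> the 60+/south record stands out
```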

Have you considered the security of the data that is collected and shared via the digital solution/s or tool/s?

IT security or cybersecurity can be defined as the preservation – through policy, technology, and education – of the availability, confidentiality and integrity of information and its underlying infrastructure, so as to enhance the security of persons both online and offline, as well as of the infrastructure and environment relevant to them.

Comment - Yes

You’ve considered an important aspect of protecting the right to privacy and other human rights that could be at risk if personal (and potentially sensitive) data or sensitive technical data were leaked.

Check if you need to consider additional data security safeguards in fragile and conflict contexts.

Potentially impacted human rights and principles - No

Right to privacy, right to freedom of expression, right to access to information

Recommendation - No

If you have NOT considered the security of the data yet:

Assess the security of the data that you collect and share in your project or via your solution(s) / tool(s). Ensure that sufficient safeguards are in place to keep the data secure. Use additional safeguards in fragile and conflict contexts.

Potentially impacted human rights and principles - I don't know

Right to privacy, right to freedom of expression, right to access to information

Recommendation - I don't know

In case you are not sure if you have considered the security of the data:

Reach out to the Project-Executing Agency (PEA), its software developer, the consultant and others involved in the development or use of the digital solution(s) or tool(s), to assess whether the collected data is securely stored. Ensure that sufficient safeguards are in place to keep the data secure. Use additional safeguards in fragile and conflict contexts.

Further information: 

 

Information security, i.e. IT security, refers to the protection of information systems from unauthorized access, use, disclosure, disruption, modification, or destruction. In DC projects, information security is essential to protect sensitive information and systems from potential harm. Information security and data protection are especially relevant in the following situations:

  • DC projects involving valuable or sensitive information, such as personal data, financial data, or other confidential information.
  • DC projects for critical infrastructure, e.g. power grids, hospitals etc.

Each of these situations makes the strict implementation of information security within the project a major prerequisite for protecting the data and systems used in the project.

What are fundamental principles of information security?

First, the respective data needs to be analyzed according to security risks. Based on this analysis, appropriate measures must be taken. For example, public data might need to be protected against unauthorized changes, but does not need to be protected against access.

As a necessary, but not sufficient, requirement, the Recipient, Project-Executing Agency (PEA) and/or consulting company must ensure that their own information security procedures comply with national law or, in the case of the UN acting as PEA, with its own data protection policy. In addition, all project partners are recommended to check and comply with the following fundamental principles of information security in practice:

  • Confidentiality – How important is it that information is protected from unauthorized disclosure, and how is the appropriate protection implemented?
  • Integrity – How should the information be protected from unauthorized modification, and how crucial is this protection?
  • Availability – How fast is the information needed, and how can the required access be assured?

Which basic practices are mostly necessary to make sure that information security rules are complied with?

The importance and complexity of information security make it necessary to have appropriate IT governance structures within the PEA and the partners, which ensure the practical implementation. The IT governance should involve the following components:

  • People – to ensure that the appropriate knowledge and skills are present where needed
  • Process – to ensure that adequate policies and standards are in place and enforceable
  • Technology – to ensure that the required technologies for IT security are available

A commonly used governance model is the appointment of a Chief Information Security Officer (CISO), whose task is to ensure compliance with information security policies laid out in national law or, in the case of international projects, in selected international standards. Some principles that help ensure these tasks can be successfully performed are:

  • Direct reporting to the respective management.
  • The independence required to perform their tasks.
  • Information security skills present in the project team and in the PEA throughout the project delivery lifecycle.
  • Involvement in all issues relating to information security.
  • Sufficient resources to perform their tasks.
  • Assurance that the CISO is not penalized for performing their duties and is not placed in a conflict of interest.

In addition, regular staff trainings and workshops on Information security are essential supportive measures to enforce the practical implementation of Information security rules.

The following example highlights the complex IT-security risks associated with international development projects, especially in critical infrastructure sectors like electricity. By implementing comprehensive security measures and fostering a culture of vigilance, the project can achieve its goals while safeguarding against potential cyber threats.

Case Study: IT-Security Risks in an International Development Cooperation Project for Financial Cooperation in the Electricity Sector

The “Green Energy Initiative” is an international development cooperation project aimed at enhancing the electricity infrastructure in a partner country of German development cooperation. The project, funded by a coalition of international financial institutions, focuses on building a robust and sustainable power grid that incorporates renewable energy sources such as solar and wind power. The initiative involves multiple stakeholders, including local governments, international consultants, technology providers, and financial institutions.

Project Components

  1. Infrastructure Development: Construction of solar farms and wind turbines, along with upgrading the existing power grid.
  2. Smart Grid Implementation: Deployment of smart meters and grid management software to optimize energy distribution and usage.
  3. Financial Management System: A comprehensive financial management system (FMS) to handle project funding, transactions, and reporting.
  4. Stakeholder Collaboration Platform: An online platform for communication and collaboration among all project stakeholders.

Identified IT-Security Risks

  1. Cyber Attacks on Critical Infrastructure:
    • Scenario: Hackers infiltrate the control systems of the power grid.
    • Impact: Disruption of electricity supply, causing blackouts in major cities and hindering the progress of the initiative.
    • Mitigation: Implementation of advanced intrusion detection systems (IDS) and regular security audits.
  2. Data Breach in Financial Management System:
    • Scenario: Unauthorized access to the FMS, leading to the theft of sensitive financial data.
    • Impact: Financial losses, compromised integrity of financial transactions, and loss of trust among international financiers.
    • Mitigation: Encryption of financial data, multi-factor authentication (MFA), and strict access controls.
  3. Phishing Attacks on Stakeholders:
    • Scenario: Phishing emails sent to project stakeholders, tricking them into disclosing login credentials.
    • Impact: Unauthorized access to the collaboration platform, leading to data leaks and manipulation of project documents.
    • Mitigation: Regular phishing awareness training and implementation of email filtering systems.
  4. Ransomware Attack on Project Management Software:
    • Scenario: Ransomware infects the project management software, encrypting essential project documents.
    • Impact: Delays in project timelines, increased costs, and potential data loss.
    • Mitigation: Regular backups, use of anti-ransomware tools, and an incident response plan.
  5. Insider Threats:
    • Scenario: A disgruntled employee with access to sensitive systems deliberately sabotages the project.
    • Impact: Data manipulation, project delays, and reputational damage.
    • Mitigation: Regular monitoring of employee activities, stringent access controls, and a whistleblower policy.

Which kind/s of AI application/s is/are used in the digital solution/s or tool/s?

Select an option to see a further description of the application. Choose the 'other' option if your project or solution uses a different form of AI application than those listed.

Comment - Image and object recognition

This relates to the analysis of large data sets to automate the recognition, classification, and context associated with an image or object. It includes the likes of face recognition but also analysis of vast amounts of satellite photos in order to predict migration patterns. 

Image and object identification AI risks negatively impacting the right to privacy. Particular attention should be paid to the datasets used, how the data is collected, and how the recognition technology functions. Facial recognition technology may for example have far reaching consequences on the right to privacy, freedom of assembly, right to a fair trial, and many other rights. 


Comment - Process optimization and workflow automation

Process optimization & workflow automation concerns the analysis of large data sets to identify anomalies, cluster patterns, predict outcomes or find ways to optimize and automate specific workflows. This may include e.g. ‘smart chat bots’ that can help to assess which users need more extensive assistance and which can be redirected to the FAQ section of a website, or to identify healthcare patients who should be prioritized for certain procedures.

Process optimization and workflow automation rarely pose significant risks to human rights if they concern strictly administrative tasks. However, the same processes can also be used to speed up decisions concerning social assistance, issuance of permits, or other forms of licensing, all of which may negatively impact human rights. There may also be issues related to the right to access information and access to remedy, as the underlying AI model can be difficult to explain and understand, which in turn makes its decisions difficult to appeal.

Comment - Text and speech analysis  

Text and speech analysis concerns the analysis of datasets to recognize, process, and tag text, speech, voice, and make recommendations based on the tagging. This may include text-to-speech technologies that can help blind individuals with accessibility of written content. 

Potential human rights impacts related to text and speech analysis include public service ‘chat bots’ that are unable to assist individuals from minority groups who do not speak the main language of a country. Language assistants that are perceived as human may also cause distress for individuals using the service.

Comment - Content generation 

Content generation concerns the analysis of large data sets to categorize, process, triage, personalize, and serve specific content for specific contexts. This may include e.g. automatically generated news media pieces or weather forecasts based on AI review of other news sources or weather data, as well as generating and disseminating content based on official government statements and answers from ‘smart chat bots’. 

The impacts of content generation depend on the content generated. Examples of human rights impacts include if the content generated feeds into other AI systems, making the entire process highly opaque and difficult to understand, potentially rendering decisions that no one can explain.

Comment - Risk assessment

Risk assessments concern the analysis of large datasets to identify patterns, recommend courses of action and, in some cases, trigger specific actions. This may include e.g. automated credit risk scoring by banks, recidivism risk scoring in a justice system, or assessments of risks of flooding or extreme weather events.

AI risk assessments often make predictions of risks based on historical data. If previous risk assessments have been biased, the predictions made by the AI application will also be biased unless the bias is corrected. Pay particular attention if the risk assessments inform decisions that may have significant impacts on rights holders, as the outcomes might have severe impacts on human rights, for example if risk assessments are used within judicial systems or in relation to decisions on distributing public benefits.

For more information about different AI tools and solutions and their potential human rights impacts, see: 

For illustration of typical risks, here are some examples of technical AI applications used in digital solutions or tools for international development cooperation that have the potential for human rights risks:

  • Facial Recognition Technology:
    • Violations of privacy, misuse for surveillance and tracking, discrimination and bias against minority groups.
  • Predictive Policing:
    • Reinforcement of existing biases, targeting of specific communities, lack of transparency and accountability in decision-making processes.
  • Automated Decision-Making Systems:
    • Lack of human oversight, opaque decision-making processes, discrimination in areas such as social services and employment.
  • Natural Language Processing (NLP) in Monitoring and Evaluation:
    • Misinterpretation of context, language bias, infringement on freedom of expression and privacy.
  • Healthcare Diagnostics and Treatment Recommendations:
    • Misdiagnosis due to biased training data, inequality in healthcare access, inadequate consideration of local medical practices and conditions.
  • Biometric Identification Systems:
    • Invasion of privacy, risk of identity theft, exclusion of individuals without biometric records.
  • Automated Content Moderation:
    • Censorship, suppression of free speech, errors in distinguishing harmful content from legitimate expression, and cultural insensitivity.
  • Credit Scoring and Financial Inclusion Tools:
    • Discrimination based on socio-economic status, lack of transparency, exacerbation of existing inequalities.
  • Remote Sensing and Data Collection (e.g., Drones, Satellite Imaging):
    • Privacy invasion, misuse of collected data, lack of informed consent from communities being monitored.
  • Chatbots and Virtual Assistants:
    • Misinformation, inadequate responses in crisis situations, potential breaches of confidential information shared by users.

These typical AI applications (of course, in real life there are more!) have significant potential to benefit international development projects, but must be carefully managed to mitigate risks to human rights and ensure ethical and fair use. To be on the safe side, go beyond these examples when considering the use of artificial intelligence.

Are there processes in place to assess biases in datasets?

Bias is a form of injustice against an individual (or group). While there are different kinds of bias in AI applications, this question concerns bias in datasets such as data that is not representative or data that reflect prejudices (e.g. an AI recruiting tool fed with historical hiring decisions which favored men over women). It may be necessary to discuss the question of bias with the developer of the AI application in order to respond. See resources for more information.
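
One simple check that could form part of such a process is to compare group representation and historical outcome rates in the training data, which would flag the recruiting example above. A minimal sketch (Python/pandas, hypothetical columns):

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    # Share of records per group and rate of positive historical outcomes per group.
    return pd.DataFrame({
        "share_of_records": df[group_col].value_counts(normalize=True),
        "positive_label_rate": df.groupby(group_col)[label_col].mean(),
    })

# Hypothetical historical hiring decisions (hired: 1 = yes).
history = pd.DataFrame({
    "gender": ["m", "m", "m", "m", "f", "f"],
    "hired":  [1,   1,   0,   1,   0,   0],
})
print(representation_report(history, "gender", "hired"))
# A group that is underrepresented, or whose positive rate is far lower,
# signals bias that a model trained on this data would learn and reproduce.
```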

Comment - Yes

That means one significant aspect of potential negative human rights impacts has been considered. However, AI tools are only as good as the data they rely on and it is therefore important to continuously assess whether the data is valid, reliable, current, sufficient etc. Existing data that is biased or discriminatory is going to produce results that are biased or discriminatory.

Potentially impacted human rights and principles - No

Right to equality and freedom from discrimination

Recommendation - No

Plan and conduct an assessment of potential biases in the datasets and develop a plan for how bias can be mitigated.

Potentially impacted human rights and principles - I don't know

Right to equality and freedom from discrimination

Recommendation - I don't know

Consult the Project-Executing Agency (PEA) and others involved in the development of the AI model. If they are not aware of any processes to assess biases in datasets, it is likely that bias has not been assessed.

Algorithm Watch: The Use of Automated Decision-Making Systems in the Public Sector

The use of ADMS has arrived in the public sector. In the years to come, the automation of decision-making procedures and services in public administrations is likely to increase exponentially. Citizens demand user-friendly services that are simple, easily accessible, and available 24/7. Administrations regard automation as a chance to accelerate efficiency, facilitate processes, and expedite mass and routine services. However, given the unique context in which public authorities act, the deployment of ADMS should be accompanied by a systematic evaluation of potential ethical implications, ensuring transparency and accountability vis-à-vis those affected. ADMS comes with substantial risks—especially if such systems are not introduced and deployed in a careful manner.

By means of the fictional example of a Swiss COMPAS risk evaluation system for criminal offenders, the application of the evaluation procedure is illustrated in section 4 of this publication. See also the example of ADMS deployed in the context of social welfare, page 10/11.

AlgorithmWatch developed a concrete and practicable impact assessment tool, ready to be implemented for the evaluation of specific ADMS by public authorities at different levels. It focuses on seven values or principles:

  • four ethical principles, namely respect for human autonomy, prevention of harm, justice or impartiality [fairness], and beneficence
  • three instrumental principles, namely control, transparency, and accountability, summarizing technical, organizational, and prudential requirements

To demonstrate the relevance, here is an excerpt of the checklist with questions offered by Algorithm Watch (page 44):

  • 2.12 What methodologies have been used to define and measure the bias and fairness of the system?
  • 2.13 How are individual predictions/recommendations/decisions of the system explained to system end-users and individuals affected by the use of the system?
  • 2.14 Is system deployment continuously monitored after the testing phase … a) at all times? b) within a given timeframe? c) through which measures?
  • 2.15 Are there options for people affected by a decision to learn about the output of the automated system and to challenge predictions/recommendations/decisions influenced by the system?
  • 2.20 During monitoring, have predictions/recommendations/decisions by the system ever been challenged?

Source: Algorithm Watch, Automated Decision-Making Systems in the Public Sector

FAIR Forward – Artificial Intelligence for All initiative

  • Natural language processing AI can be used to generate and share information in a targeted and individual way. The same type of system can also be used to reach people who cannot read. In order to train such systems, a large amount of training data is needed. For a variety of reasons, such language data has been very limited for African and Asian nations. The ‘FAIR Forward – Artificial Intelligence for All’ initiative was developed in order to close this gap concerning language data and provide fair and open access to it. On the back of this work and the increasing availability of national language data, one outcome of the FAIR Forward initiative has been the development of a chatbot in Rwanda that will help millions of people there receive coronavirus advice in their local language. / Source: GIZ, “Artificial Intelligence for All”

Use of AI in corona response activities

  • “Data analytics supported by AI is having a predominant role in monitoring of the infections, deaths and recovery rates all over the world. Institutions have managed to draw the trends that Covid-19 has in different countries and conditions. Data collection, monitoring and analysis technologies are also needed in developing countries in order to track the virus progress and adapting the response and mitigating measures.
  • Artificial Intelligence is being used to detect coronavirus through computerized tomography scans analysis. Imaging departments in healthcare facilities are being overwhelmed with the increased workload created by the virus. This solution improves CT diagnosis speed. Some big tech companies claim to have built AI powered analysis systems able to read scans in seconds giving a 96% accurate diagnosing of the virus. In African countries, lacking of specialised doctors and labs, these technology can be disruptive.
  • Intelligent tracing and mobile technologies are being used by governments all over the world to track people movements and prevent further expansion of the virus. The EU needs to support African governments in using the large amount of mobile data available considering the privacy-conscientious use of mobile phone data. Technology solutions are available allowing tracing people movement taking into account the risks of misuse of personal data.” / Source: Toolkit Digitalisierung, “Practice: Big Data and AI

Have biases in (unexpected) outputs and outcomes related to the AI application/s been assessed?

This question concerns the bias in the outputs and outcomes of the AI application/s, as opposed to the underlying data. Consider whether anyone has assessed that specifically.
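
A common starting point for assessing output bias is to compare the rates of favorable model outputs across groups, for example as a disparate impact ratio. The following minimal sketch uses plain Python and hypothetical data:

```python
def disparate_impact(preds, groups, protected, reference) -> float:
    # Ratio of favorable-output rates between a protected and a reference group.
    def rate(g):
        favorable = sum(p for p, grp in zip(preds, groups) if grp == g)
        return favorable / groups.count(g)
    return rate(protected) / rate(reference)

preds  = [1, 0, 0, 1, 1, 0]             # 1 = favorable output of the AI application
groups = ["a", "a", "a", "b", "b", "b"]

ratio = disparate_impact(preds, groups, protected="a", reference="b")
print(f"{ratio:.2f}")  # ratios well below 1.0 (a common rule of thumb: < 0.8) warrant review
```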

Comment - Yes

That means you have considered one significant aspect of potential negative human rights impacts of automated decision-making systems. 

Potentially impacted human rights and principles - No

Right to equality and freedom from discrimination

Recommendation - No

Plan and conduct an assessment of biases in outputs and outcomes related to the AI.

Potentially impacted human rights and principles - I don't know

Right to equality and freedom from discrimination

Recommendation - I don't know

Consult the Project-Executing Agency (PEA), its software developer, the consultant and others involved in the development of the AI model. If they are not aware of any processes to assess biases in outcomes, it is likely that bias has not been assessed. 

As an example, an AI model aimed at analyzing the facial expressions of individuals and detecting their emotional state might have perfectly unbiased data to learn from and therefore not be discriminatory in its development and design. However, since the science behind emotion recognition technology is questionable, there is a risk that the outputs of this AI model are nonetheless discriminatory. This is particularly true when context is not taken into account, which may lead to discriminatory impacts on, for example, individuals from ethnic minority groups or persons with disabilities.

Source: Vox Recode, “Artificial intelligence will help determine if you get your next job“, 12 Dec 2019

Further examples of possible biases in unexpected outputs and outcomes related to AI applications used by international development projects (of course, this is not an exhaustive enumeration of typical cases):

Healthcare Diagnostics:

An AI system trained primarily on data from developed countries misdiagnoses diseases in underserved populations in developing countries because it doesn’t account for local variations in disease presentation and prevalence.
Risk: Increased misdiagnosis rates and improper treatment plans for marginalized communities.

Facial Recognition Technology:

A facial recognition system used to verify identities for aid distribution shows higher error rates for darker-skinned individuals due to biased training data.
Risk: Disproportionate exclusion of racial and ethnic minorities from receiving aid.

Predictive Policing:

Predictive policing algorithms disproportionately target certain neighborhoods based on historical crime data that reflect existing biases in policing practices.
Risk: Over-policing of marginalized communities and reinforcement of discriminatory practices.

Credit Scoring and Financial Inclusion:

AI-driven credit scoring models unfairly penalize individuals without formal financial histories, often affecting women, rural residents, and low-income populations.
Risk: These groups face greater barriers to accessing financial services, exacerbating economic inequality.

Language Translation and NLP:

An NLP-based educational tool performs poorly in translating or understanding local dialects and languages, which are underrepresented in the training data.
Risk: Ineffective communication and learning outcomes for speakers of these dialects, widening the educational gap.

Automated Content Moderation:

An automated content moderation system used on social media platforms in conflict zones incorrectly flags posts in minority languages as harmful, while missing similar content in dominant languages.
Risk: Suppression of minority voices and misinformation spread unchecked, impacting freedom of expression and access to information.

Job Matching and Employment Services:

AI-powered job matching platforms favor candidates from urban areas with conventional education backgrounds, overlooking qualifications and experiences relevant to rural or non-traditional candidates.
Risk: Biased job placements, perpetuating employment inequality and limiting economic opportunities for diverse candidates.

Has a specific gender-based analysis of the development and use of the AI solution/s been conducted?

This may include assessing whether predictions made by an AI model are discriminatory on the basis of gender. See resources for further information.

Comment - Yes

That means you have considered one significant aspect of potential negative human rights impacts of AI systems. 

Potentially impacted human rights and principles - No

Right to equality, freedom from discrimination and women’s rights

Recommendation - No

Plan and conduct a gender-based analysis of the implementation of the project or solution/s or tool/s.

Potentially impacted human rights and principles - I don't know

Right to equality, freedom from discrimination and women’s rights

Recommendation - I don't know

Consult the Project-Executing Agency (PEA), its software developer, the consultant and others involved in the development of the AI. If they are not aware of any specific gender-based analysis of the system, it is likely that this has not been assessed. 

Many virtual personal assistants (such as Siri, Alexa and Cortana) and chatbots have female names and come with a default female voice. The organization that develops or uses these virtual assistants with ‘female features’ may reinforce stereotypes as well as the social reality in which a majority of personal assistants or secretaries in both public and private sectors are women. It will therefore always be important to assess any unexpected negative impacts related to gender.

Here are six more generic examples of specific gender-based biases in digital projects, applications, or tools:

Health Monitoring Apps:

Example: Health and fitness tracking apps often lack comprehensive features for women’s health issues, such as menstrual cycle tracking, pregnancy, and menopause, leading to an oversight of important health data for women.
Impact: This can result in inadequate health recommendations and neglect of women’s specific health needs.

Job Recruitment Algorithms:

Example: AI-driven recruitment tools trained on historical hiring data may show bias against women by favoring resumes with male-associated names or experiences in male-dominated fields.
Impact: Women may be unfairly excluded from job opportunities, perpetuating gender inequality in the workforce.

Loan Approval Systems:

Example: Credit scoring algorithms that rely on traditional financial history and employment patterns may disadvantage women, particularly those with gaps in employment due to caregiving responsibilities.
Impact: Women might face higher barriers to obtaining loans, affecting their financial independence and ability to start businesses.

Voice Recognition Software:

Example: Voice recognition systems often perform better with male voices than female voices due to biased training datasets.
Impact: Women might experience higher error rates and frustration when using voice-activated devices, reducing their usability and accessibility.

Content Recommendation Systems:

Example: Algorithms on platforms like YouTube or Netflix may reinforce gender stereotypes by recommending content based on traditional gender roles (e.g., recommending cooking or parenting videos to women and tech or sports videos to men).
Impact: Such recommendations can reinforce stereotypes and limit exposure to diverse content.

Smart Home Devices:

Example: Home automation systems might be designed with male users in mind, neglecting features that consider the preferences and needs of female users, who are often primary managers of household activities.
Impact: Women may find these systems less intuitive or useful, affecting their efficiency and satisfaction with the technology.

These examples highlight the importance of addressing gender bias in digital projects to ensure that technologies are inclusive, equitable, and beneficial for all users.

For more, see Surya Deva, “Addressing the gender bias in artificial intelligence and automation“, OpenGlobalRights, 10 April 2020
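
To make one step of such a gender-based analysis concrete, here is a minimal sketch (Python with pandas; the column names, toy data and the 0.8 ‘four-fifths’ threshold are illustrative assumptions, not a prescribed method) that compares a model’s positive-decision rates across gender groups:

import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    # Share of positive model decisions within each group.
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    # Lowest group rate divided by highest group rate (1.0 = parity).
    return rates.min() / rates.max()

# Toy decisions of a hypothetical recruitment model.
decisions = pd.DataFrame({
    "gender":         ["f", "f", "f", "f", "m", "m", "m", "m"],
    "model_decision": [0, 1, 0, 0, 1, 1, 0, 1],
})
rates = selection_rate_by_group(decisions, "gender", "model_decision")
print(rates)
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
# A ratio well below ~0.8 would warrant closer review of the model and its training data.

Such a check is only a starting point: a fuller analysis would also compare error rates, consider intersectional groups and examine how the decisions are used in practice.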

Has the AI system been tested for human rights risks after its deployment, or is there a clear plan for such testing?

Consider whether for example a chatbot has been assessed after its deployment to understand whether any particular groups are at risk of discrimination.

Comment - Yes

Constant monitoring, evaluation and retraining are essential practices to identify and correct embedded bias and disparate outcomes.

Potentially impacted human rights and principles - No

Right to equality and freedom from discrimination

Recommendation - No

Develop a plan and test the AI system for accuracy and human rights risks after its deployment.

Comment - No

Even the most well-intended algorithms can have unintended consequences. Constant monitoring, evaluation and retraining are essential practices to identify and correct embedded bias and disparate outcomes.

Potentially impacted human rights and principles - I don't know

Right to equality and freedom from discrimination

Recommendation - I don't know

Consult the Project-Executing Agency (PEA), its software developer, the FC consultant and others involved in the development and/or deployment of the AI system. If they are not aware of any such processes, it is likely that the system has not been tested for biased or discriminatory outcomes after its deployment and use.

Some health systems rely on AI risk-prediction tools to identify and select patients who are in need of high-risk care. Patients selected for these programs are provided additional resources and receive greater attention from trained providers, to help ensure that care is well coordinated. Many health systems make such AI tools a key element of their population health management efforts, since they are considered effective at improving outcomes and satisfaction while reducing costs, which is often a pursued goal in improving healthcare in emerging economies. These high-risk care programs are very expensive to run, and health systems therefore often rely on AI tools to identify the patients who will benefit the most from them.

In attempting to identify which patients will benefit most from entering such programs, the AI tools must make inferences based on the data that is available. One way of identifying the greatest care need is to look at how much patients have historically paid for their care. That, however, can turn out to be discriminatory against marginalized populations who have had less ability to allocate significant resources to their care, even though the need existed.

In this scenario, the problem should already have been identified pre-deployment. If that was not the case, however, post-deployment analysis could have illustrated the socio-economic disparities in enrolment in the program and identified the reasons for them.

For more, see: Obermeyer et al., “Dissecting racial bias in an algorithm used to manage the health of populations“, Science 366, 447–453 (2019). 
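
As a minimal illustration of the post-deployment analysis described above (field names and toy data are hypothetical; cohort definitions would need careful design in practice), one could compare the tool’s risk scores across population groups at comparable levels of actual health need:

import pandas as pd

# Toy deployment logs of a hypothetical risk-prediction tool.
logs = pd.DataFrame({
    "group":      ["a", "a", "a", "b", "b", "b"],
    "need_index": [1, 2, 3, 1, 2, 3],          # e.g. number of active chronic conditions
    "risk_score": [0.2, 0.5, 0.8, 0.1, 0.3, 0.5],
})

# Mean risk score per group at each level of need. Large gaps within a row
# suggest the score tracks something other than need (e.g. historical cost).
print(logs.pivot_table(index="need_index", columns="group", values="risk_score", aggfunc="mean"))

In the Obermeyer et al. study, exactly this kind of need-conditioned comparison revealed that Black patients received lower risk scores than equally sick White patients, because historical cost had been used as the training label.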

Is/are the digital solution/s or tool/s used or applied in a high-risk country context with regard to human rights?

A high-risk context is likely to increase the potential risks for negative human rights impacts. Digital risks are less likely to be dealt with by institutions of rule of law. In addition, there may not be enough safeguards in place to prevent or mitigate risks presented by digital transformation projects and advisory services.

Recommendation - Yes

If the digital solution/s or tool/s is/are used or applied in a high-risk country context with regard to human rights:

A high-risk country context often involves poor rule of law and significant limits to civil and political rights, e.g. ethnic or religious discrimination, as well as limited freedom of speech and opinion and/or limited freedom of the press.

In such contexts, some types of digital solutions/tools can be abused to identify and physically harm civilians, but also to oppress, expose, discriminate against and/or exclude them from services and/or society.

Heightened attention will therefore need to be paid to all kinds of personal data processing and AI applications, particularly with regard to groups that are vulnerable or marginalized in the specific context. Where applicable, review and update existing country context analysis with the specific digital solution/s or tool/s in mind.

Recommendation - No

Even if the digital solution/s or tool/s is/are NOT used or applied in a high-risk country context with regard to human rights:

A low-to-medium risk country context often means that rule of law is stronger and that there are multiple safeguards in place at the national level.

Even in a country context with lower human rights risks, it could be that such risks have not been sufficiently taken into consideration, since the solution/s or tool/s might be new in this context. It is therefore advisable to review the relevant country context with respect to the digital solution/s or tool/s planned or used within the project.

Recommendation - I don't know

If you don’t know whether the digital solution/s or tool/s is/are used or applied in a high-risk country context with regard to human rights:

Where applicable, consult existing country context analysis, discuss relevant risks with your safeguards experts and consider whether the country context is high-risk or not as regards digital risks.

The relevance and severity of the following criteria may constitute a high-risk country context:

– no, or weak, data protection legislation
– poor rule of law institutions
– conflict-affected or fragile country: deficits in basic public service delivery, weak state monopoly on the use of force, insufficient democratic legitimacy
– unstable security situation, including high crime rates
– significant persecution of human rights defenders and corresponding civil society organizations
– significant limits to civil and political rights, e.g. ethnic or religious discrimination, gender inequality and discrimination of sexual minorities
– limited freedom of speech and opinion and/or limited freedom of the press and media
– significant limits to economic or social rights of women, ethnic or religious minorities, indigenous people, youth, persons with disabilities or migrant workers, refugees, asylum seekers, internally displaced people
– possibly forced labour or even slavery
– frequent violations of environmental law
– censorship, intimidation, violence, especially against marginalized groups
– cybercrime
– online harassment

In case of doubt, consider answering yes to the question since the country may be high-risk as far as digital risks are concerned.

In any case, it is recommended to conduct a country-level risk assessment on the solution/s and tool/s you are using or planning to use.

For a short and incomplete overview of widely discussed cases of human rights risks of digital solutions in high-risk contexts, we offer the following examples:

  1. In high-risk contexts, governments or other actors may use digital tools to collect data on individuals without their consent, leading to potential violations of privacy rights. This can include monitoring online activities, accessing personal communications, and using biometric data. The OHCHR Report on the Right to Privacy in the Digital Age (2021) emphasizes the need for safeguards against unlawful or arbitrary surveillance.
  2. Digital solutions can be used to censor information, restrict access to information, or manipulate content to suppress dissent and control public narratives. The UNESCO report on World Trends in Freedom of Expression and Media Development discusses global challenges to freedom of expression, including online censorship.
  3. High-risk countries may face increased cybersecurity threats, including hacking, cyber-attacks, and misuse of data by malicious actors, which can undermine human rights. A striking example would be cybercrime like online child sexual abuse.
  4. Digital tools can be used to spread misinformation or disinformation, influencing public opinion and potentially inciting violence or discrimination (see, for example, the UNESCO report Journalism, fake news & disinformation (2018)). The Council of Europe report on Information Disorder (2017) describes three types of information disorder: dis-information, mis-information and mal-information.
  5. Automated systems and algorithms can perpetuate bias, leading to unfair treatment and decisions in areas like law enforcement, hiring, and access to services (see, for example, the EU Agency for Fundamental Rights focus paper Data quality and artificial intelligence – mitigating bias and error to protect fundamental rights).

Are/is the digital solution/s or tool/s meant to be used in a high-risk project type or service?

High-risk project types are projects, components or services where the decisions taken can severely affect individuals. These include:
- Surveillance and remote identification of individuals or households, or related information
- Cadastre and land management systems
- Electronic civil registry, eID systems, personalized eCitizen services
- Law enforcement, including police, public prosecution and judiciary
- Migration, asylum and border control
- Student eRegistration, eLearning
- Employment services and workers management
- Critical ICT infrastructure (e.g. data centers, digital connectivity, power grid management systems)
- Essential private service delivery, including banking and insurance
- Other essential public service delivery, including social security, eHealth and other public benefits and services, especially where data on people is centralized and easily accessible to most competent state authorities, or where a developing country has adopted an unreserved Freedom of Information Act approach.

Recommendation - Yes

If the digital solution/s or tool/s is meant to be used in a high-risk project type or service:

Extra care should be taken in relation to any application or development of digital tools, solutions or services, since the potential human rights risks may be severe.

Consult with the representatives of partner countries’ institutions and with the competent environmental, social and human rights safeguards experts on how to fully identify all relevant risks to the human, social and economic rights of citizens, and how to deal with the various identified risks according to their risk category. It is recommended to factor a sufficient risk management system into the design and supervision of your project. For orientation, see the ten key criteria and guiding questions of the Danish Institute for Human Rights.

Recommendation - No

If the digital solution/s or tool/s is NOT meant to be used in a high-risk project type or service:

The project can in principle be considered low-to-medium risk, unless the specific application is highly sensitive.

However, there is often a lack of standards or policies that specifically govern identification and management of the full range of digital risks throughout the project cycle. In other cases, standards and policies do not clearly articulate the roles and responsibilities of Financing Institutions, Recipients, Borrowers, Project-Executing Agencies (PEA) and other project implementing partners, or do not provide an appropriate framework for their accountability. Guidance on how to manage “business model risks” in DFI-financed digital operations, more specifically, also seems to be lacking.

Follow the scheduled risk management of your project. Nevertheless, it is recommended to keep in mind that growing digital portfolios entail increased risks, and that new and unexpected human rights risks may arise.

Recommendation - I don't know

If you don’t know whether the digital solution/s or tool/s is meant to be used in a high-risk project type or service:

It is advisable to consult with KfW colleagues, Project-Executing Agency (PEA) staff, target group representatives, representatives of other donor countries or others involved in the use of the solution, and consider whether the sector in question is high-risk or not. In case of doubt, consider answering yes to the question, since risks may still turn out to be high.

The use of digital products and services can create new and unexpected human rights risks. These risks include violations of the right to privacy, freedom of expression, freedom of association, freedom from discrimination, and a potentially wide range of other economic, social, cultural, civil and political rights. Especially the marginalized and/or vulnerable groups of people can be exposed to risks of exclusion, of being discriminated against or of higher exposure to inherent digital risks.

For orientation, see the ten key criteria and guiding questions of the Danish Institute for Human Rights.

For more on high-risk services see:

UN OHCHR Human Rights Risks in Tech: Engaging and Assessing Human Rights Risks Arising from Technology Company Business Models

Danish Institute for Human Rights: Key principles for Human Rights Impact Assessments of Digital Business Activities (2023) – criteria and guiding questions

OECD Rights in the digital age – challenges and ways forward (2022)

EU Commission, AI regulation proposal, Annex III 

World Economic Forum Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms

World Economic Forum Global Principles on Digital Safety: Translating International Human Rights for the Digital Context

World Economic Forum Global Risks Report 2024: Misinformation/disinformation, censorship and surveillance, and cyber insecurity as important global risks.

Employment services and unemployment benefits can be considered part of a sensitive sector which warrants extra attention. For example, where AI models are used to predict the likelihood of job applicants’ success and thereby determine resource allocation to ‘the best candidates’, potentially significant risks to human rights are at stake.

The promises of e-health solutions (e.g. digital health platforms, AI systems supporting cancer diagnosis, tele-medicine apps) have over the last years drawn a lot of interest from investors and society at large. While the promises are real, the sector’s sensitivity means that many risks must be considered. Mistakes in the development of these systems can lead to severe impacts on the right to health at scale.

Compare the UN General Assembly Human Rights Council Advisory Committee report ‘Possible impacts, opportunities and challenges of new and emerging digital technologies with regard to the promotion and protection of human rights‘, which has been structured according to the following challenges:

  • Datafication resulting in a loss of privacy and the need to protect personal data: Inadequate data protection measures can result in unauthorized access to sensitive personal data, leading to identity theft, blackmail, or misuse of information, e.g. against persons who are seen as political opponents and possibly politically persecuted.
  • Cybersecurity and integrity: Digital infrastructure in fragile states may be particularly vulnerable to cyber attacks, which can disrupt essential services and harm national security.
  • Quality and authenticity of information: Digital platforms can be used to spread misinformation or propaganda, destabilizing societies and undermining democratic processes.
  • Radicalization, segregation and discrimination: AI and machine learning algorithms can perpetuate and amplify existing biases, leading to discriminatory outcomes in areas such as access to services, employment, and justice.
  • Disempowerment and inequality: Digital solutions can widen the economic gap if they predominantly benefit those who already have better access to technology and infrastructure.
  • Mass surveillance and overreaching Internet regulation: Digital platforms may be manipulated to censor dissenting voices, restrict access to information, and suppress freedom of speech. Digital tools can be used by governments or other actors to monitor citizens, leading to breaches of privacy and potential targeting of individuals for political or other reasons.
  • Cyberviolence, esp. online sexual abuse of children

Does the country where the digital solution/s or tool/s are/is developed and/or used have adequate data protection legislation?

Consider whether 1) data protection legislation exists, 2) it seems adequate from a human rights perspective, and 3) it is actually in force. In assessing the adequacy and implementation of data protection legislation, you can refer to the resources section as well as use the EU General Data Protection Regulation (GDPR) as a benchmark. Under the GDPR, the European Commission has also recognized a number of countries as providing adequate protection.

Recommendation - Yes

If your project / partner country has adequate data protection legislation adopted,

it could still be advisable to have a closer look at the data protection law in place, depending on the risk type of the digital solution that is part of your project. Beyond the existing legislation, its implementation by adequate national institutions could be another (very) relevant question.

Recommendation - No

If your project / partner country has no adequate data protection legislation adopted,

assess how the data processing activities related to the digital solution/s or tool/s can be limited, particularly in contexts where adequate data collection and processing standards and processes are not in place. Pay particular attention to consent for data collection and to data minimization, since there will likely be no efficient external oversight of the activities. Take the recommendations of the risk assessment into account.
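
As one concrete, minimal illustration of data minimization before storage (the field names, retained subset and salt handling below are assumptions for illustration only), a project could keep only purpose-bound fields and replace direct identifiers. Note that salted hashing is pseudonymization, not full anonymization:

import hashlib

# Only the fields needed for the stated purpose are retained.
RETAINED_FIELDS = {"district", "service_used", "month"}

def minimise(record: dict, salt: bytes) -> dict:
    # Replace the direct identifier with a salted hash (pseudonymization),
    # then drop every field outside the purpose-bound subset.
    pseudonym = hashlib.sha256(salt + record["national_id"].encode()).hexdigest()
    kept = {k: v for k, v in record.items() if k in RETAINED_FIELDS}
    return {"pseudonym": pseudonym, **kept}

record = {"national_id": "1234567", "name": "Jane Doe", "phone": "+000 000 000",
          "district": "North", "service_used": "cash_transfer", "month": "2024-05"}
print(minimise(record, salt=b"secret-project-salt"))  # name and phone are never stored

Keeping the salt separate from the stored data and rotating it limits re-identification risk; full anonymization would additionally require aggregation or suppression of the remaining fields.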

Recommendation - I don't know

If you don’t know whether your project / partner country has adequate data protection legislation adopted,

consult this question with experts and lawyers, presenting your project to them.

You might preliminarily check the current status of data protection legislation in the country relevant to the project or solution/s or tool/s with the help of information offered by, for example:

Please note that this preliminary information from the Internet may not provide concrete answers to the challenges of your project. In any case, take the recommendations given by the risk assessment into account.

Recommendation - No, but national data protection legislation is not relevant as the project does not process any personal data.

In case that the national data protection legislation is not relevant as the project does not process any personal data,

if in doubt, check the definition of data processing, e.g. according to Art. 4 No. 2 of the GDPR, and the relevant definition of personal data, for example under EU law.

Resources with information on data protection: 

WBG: DE4A Country Diagnostics Status (Version June 2023):

  • Digital Economy for Africa (DE4A) Country Diagnostics provide a snapshot of the state of the digital economy in a given country for each of the five pillars of the DE4A initiative (digital infrastructure, digital public platforms, digital financial services, digital businesses, and digital skills).
  • The country diagnostics also cover Policy and Institutional Context.

Practically all digital components, solutions and tools depend on the possibility to collect, store, treat, alter and share data, and many may also increase the possibilities to do just that. If an innovative project on the use of Internet of Things (IoT) devices for the public good is implemented in a country without data protection, little can be expected in terms of access to remedy if data is not handled properly. It is therefore important to understand the regulatory context and practice in which the digital component, solution or tool will be implemented or applied, in order to identify possible risks. If data protection regulation is in place and enforcement is efficient, that lowers the risk of data-related activities.

For illustration and further orientation, here are three typical examples of human rights violations by state or private institutions where digital solutions or tools are developed and used without adequate data protection legislation:

  1. Privacy Breaches in Healthcare Data:
    • A healthcare provider in a given country develops and uses a digital health record system without robust data protection measures. As a result, sensitive medical information of patients is exposed in a data breach, leading to privacy violations and potential harm to individuals’ reputations and medical confidentiality.
  2. Surveillance and Targeting of Dissenters:
    • The government of a given country deploys digital surveillance tools developed domestically to monitor activists, journalists, and political dissidents without legal safeguards or oversight. This results in arbitrary arrests, harassment, and intimidation of individuals exercising their freedom of expression and assembly, leading to violations of their human rights.
  3. Discriminatory Employment Practices:
    • A large corporation in a given country implements an AI-powered recruitment tool without adequate data protection legislation in place. The algorithm unintentionally perpetuates biases against certain demographic groups, resulting in discriminatory hiring practices that disproportionately disadvantage women, minorities, or older workers, violating their right to equal employment opportunities.

These examples illustrate how the absence of robust data protection legislation in a developing country can lead to various human rights violations when digital solutions or tools are developed and used without proper safeguards.

Are you aware of similar digital solutions or tools that have been developed or used in the same location previously?

This could, for example, be the case if similar FinTech solutions or e-health platforms have been used in the same country in the past.

Recommendation - Yes

Since you are planning a similar project,

it could be very useful to engage with the project owners of the previous project. This could help you learn what issues they came across, how they avoided identified pitfalls and how human rights risks were handled more broadly. It may also help you assess which preventive and mitigation measures should be taken in your project during its implementation.

Recommendation - No, there have not been similar projects, solutions or tools in the same location previously

If there have not been any similar projects, solutions or tools in the same location previously,

it might be useful to have a look at similar projects in other countries or regions. Possibly, you could benefit from their lessons learnt.

Recommendation - I don't know whether there have been similar projects, solutions or tools in the same location previously

If you don’t know whether there have been similar projects, solutions or tools in the same location previously,

you might wish to find out whether or not similar solutions or tools have been developed or used in the past (lessons learnt). That could help inform the identification of potential human rights impacts as well as preferred actions to avoid, prevent and address impacts.

In case a project aimed at, for example, developing language databases for local languages has been implemented in your target country or region in the past, and you are planning a similar project, it could be very useful to engage with the project owners of the previous project. This could help you learn what issues they came across, how they avoided identified pitfalls and how human rights risks were handled more broadly. 

Are you aware if the previous digital solutions or tools were involved in negative impacts on human rights?

To answer this you may need to review reports published by the relevant company or project, as well as news media or civil society reports.

Recommendation - Yes, the previous projects were involved in negative impacts on human rights.

In case that previous projects were involved in negative impacts on human rights in the past,

it is advisable to take preventive measures to ensure that the digital project or component is not having the same negative impacts again. Thoroughly assess the digital risks of your solution or project component, and especially consider the potential impacts on the right to privacy, the right to freedom from discrimination, and other relevant human rights as described for example by the OHCHR or the Danish Institute for Human Rights.

Recommendation - No, the previous projects were not involved in negative impacts on human rights.

In case that previous projects were not involved in negative impacts on human rights,

it might be a good idea to assess similar past projects, solutions and tools, possibly also lessons learnt from other countries. The aim would be to have all necessary safeguards available to avoid and prevent possible negative impacts on human rights.

Recommendation - I don't know whether the previous projects were involved in negative impacts on human rights.

If you don’t know whether similar projects were involved in negative impacts on human rights in the past,

it is advisable to reach out to those involved in developing or using the digital solution/tool or others that have knowledge of it, and see whether they are able to share more information about potential impacts (and their prevention or mitigation). Thoroughly assess the digital risks of your solution or project component, and especially consider the potential impacts on the right to privacy, the right to freedom from discrimination, and other relevant human rights as described for example by the OHCHR or the Danish Institute for Human Rights.

To illustrate the meaning of the question ‘Are you aware if the previous digital solutions or tools were involved in negative impacts on human rights?’, we present this fictitious case study on continued human rights risks in digital ID systems:

An international development cooperation program introduced a digital identity (ID) system in a developing country to enhance access to social services. This digital solution was expected to streamline public service delivery, reduce fraud, and improve governance. However, after implementation, several negative impacts on human rights emerged, such as exclusion of vulnerable groups, privacy violations, and misuse of personal data. In this illustrative case, the initial problems were:

  1. Exclusion: Marginalized groups, including indigenous people and those without formal documentation, were unable to register, leading to their exclusion from essential services.
  2. Privacy Violations: Inadequate data protection measures resulted in personal data breaches, exposing citizens to potential harm.
  3. Surveillance and Misuse: The system was exploited by government entities for surveillance, leading to the targeting of political dissidents and activists.

To identify continued risks, the following steps could be taken in this and similar cases:

1. Retrospective Impact Assessment

The project manager could initiate a comprehensive review to assess the system’s impact on human rights since its inception. This may involve:

  • Stakeholder Consultations: Engaging with affected communities, civil society organizations, and human rights groups to gather firsthand accounts of the system’s impact.
  • Data Analysis: Reviewing data logs and incident reports to identify patterns of exclusion and misuse.

2. Risk Identification Framework

A risk identification framework could be developed to systematically identify potential future risks. This framework may include:

  • Human Rights Criteria: Establishing criteria based on international human rights standards.
  • Risk Scenarios: Creating scenarios where existing vulnerabilities could lead to future risks (e.g., political changes leading to increased surveillance).

3. Expert Consultations

Consultations with digital rights experts and technologists could be conducted to:

  • Evaluate Technological Vulnerabilities: Identify weaknesses in the current system architecture that could be exploited.
  • Future-proofing Strategies: Discuss potential improvements to mitigate identified risks.

4. Continuous Monitoring Mechanism

A mechanism for continuous monitoring might be established, which could well include:

  • Regular Audits: Periodic audits of the system’s operation and its compliance with human rights standards.
  • Feedback Loop: Creating channels for users to report issues and feedback, ensuring that new risks are quickly identified and addressed.

5. Policy and Legal Reforms

One might even engage in collaboration with government authorities and policymakers to:

  • Strengthen Data Protection Laws: Implement robust data protection regulations to safeguard personal information.
  • Inclusivity Measures: Develop policies to ensure all citizens, especially marginalized groups, can access and benefit from the digital ID system.

The retrospective assessment revealed several continued risks:

  • Persistent Exclusion: Despite initial efforts, certain groups remained excluded due to systemic barriers.
  • Evolving Privacy Threats: As technology evolved, new forms of data breaches and misuse became apparent.
  • Potential for Increased Surveillance: Political changes posed a risk of the system being used for more intrusive surveillance.

Therefore, in such and similar cases, recommendations might be (depending, of course, on the details of the situation):

  1. Inclusive Design Improvements: You may redesign the system to ensure it accommodates the needs of marginalized communities.
  2. Enhanced Security Measures: You may implement advanced encryption and regular security updates to protect against data breaches.
  3. Independent Oversight: You may establish an independent body to oversee the system’s use and ensure it adheres to human rights standards.
  4. Public Awareness Campaigns: You could support the awareness raising of citizens on their rights and the proper use of the digital ID system to empower them to report abuses.
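
To make the ‘Data Analysis’ and ‘Regular Audits’ steps above more tangible, here is a minimal sketch (field names and toy data are hypothetical) of a recurring audit metric such a monitoring mechanism might track: registration completion rates per population group, flagging groups that fall well below the overall rate:

import pandas as pd

# Toy extract from the digital ID system's registration logs.
logs = pd.DataFrame({
    "group":     ["urban", "urban", "rural", "rural", "indigenous", "indigenous"],
    "completed": [1, 1, 1, 0, 0, 0],
})

completion = logs.groupby("group")["completed"].mean()
overall = logs["completed"].mean()

# Flag groups whose completion rate falls well below the overall rate,
# as a trigger for the feedback loop and stakeholder consultations above.
flagged = completion[completion < 0.8 * overall]
print(completion)
print("groups needing follow-up:", list(flagged.index))

Any such quantitative flag is only a trigger; the actual follow-up remains the consultation and remediation steps described above.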

Is/are the digital solution/s or tool/s used or applied in a way that significant human rights impacts may occur?

Significant human rights risks may, for example, exist where the users or affected individuals are marginalized groups and the solution is used in a way that can impact those individuals directly or indirectly. However, significant human rights risks may threaten even the majority of the population in a context of poor rule of law, weak law enforcement and a weak civil society. See the case study for an example.
Of particular concern are the areas outlined below, in which technologies can be, and increasingly are, used to violate and erode human rights, deepen inequalities and exacerbate existing discrimination, especially of people who are already vulnerable or left behind:
- Data protection and privacy
- Digital identity
- Surveillance technologies, including facial recognition
- Online harassment and violence and the need for content governance
- Use of artificial intelligence

Recommendation - Yes

Since you have answered that the digital solution/s or tool/s could be used or applied in a way that significant human rights impacts may occur,

it appears to be advisable to conduct an appropriate human rights impact assessment, especially as regards digital risks.

High-risk solutions and tools require allocation of adequate resources to conduct in-depth assessments of related human rights risks and potential preventive or mitigation measures. This should include stakeholder and rights holder consultation and engagement.

Discuss the potential unintended negative consequences with external stakeholders, including potentially impacted rights holders. Consider the risk of those scenarios materializing and think of preventive strategies to minimize the human rights risks. Feed the results of the risk assessment into the Target Groups and Impacted People Analysis incl. the Gender Analysis of the project appraisal report.

Recommendation - No

Since you have answered that the digital solution/s or tool/s cannot be used or applied in a way that significant human rights impacts may occur,

the project or solution can in principle be considered low-to-medium risk, unless it concerns a high-risk sector in a high-risk country context.

In those scenarios, there is still need for extra caution.

In any case, assess potential unintended negative consequences with the project team and other partners involved. This could, e.g., be in the form of a workshop where future scenarios and worst-case scenarios are discussed. Discuss your findings with external stakeholders, including potentially impacted rights holders, while also leaving space for them to raise any other concerns about potential negative consequences of the project. Take the recommendations into account for the risk assessment.

Recommendation - I don't know

If you don’t know whether or not the digital solution/s or tool/s could be used or applied in a way that significant human rights impacts may occur,

consult with KfW colleagues, Project-Executing Agency (PEA) staff, target group representatives, representatives of other donor countries or others involved in the use of the solution. In case of doubt, consider answering yes to the question, since it might still be a high-risk scenario. Please consider conducting a digital risk assessment covering the protection of sensitive personal data, the avoidance of exclusion or discrimination of vulnerable groups of people, and other negative human rights impacts as described, for example, by the OHCHR or the Danish Institute for Human Rights.

UN OHCHR Hub for Human Rights and Digital Technology

The Danish Institute for Human Rights: Key principles for human rights impact assessment of digital business activities

The Danish Institute for Human Rights: Human rights impact assessments of digital activities

Tiina Pajuste: Specific threats to human rights protection from the digital reality RECOMMENDATIONS

Council of Europe A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework (2019)

UN Report of the Secretary-General Road map for digital cooperation: implementation of the recommendations of the High-level Panel on Digital Cooperation

See this video from the Human Rights, Big Data and Technology project which illustrates how Artificial Intelligence can affect human rights. 

Employment services and unemployment benefits can be considered part of a sensitive sector which warrants extra attention. In case a project simply includes AI models that are used to improve the scheduling of meetings with those seeking unemployment benefits, this is unlikely to imply significant risks to human rights.

If, on the other hand, AI models were used to predict the likelihood of job applicants’ success and thereby determine resource allocation to ‘the best candidates’, potentially significant risks to human rights are at stake. This includes not only impacts on the right to equality and non-discrimination, but also impacts on the right to work and the right to an adequate standard of living, if some individuals are discriminated against by the system and are therefore less likely to get jobs in the future.

For an overview of other typical human rights risks, compare the UN General Assembly Human Rights Council Advisory Committee report ‘Possible impacts, opportunities and challenges of new and emerging digital technologies with regard to the promotion and protection of human rights’ as well as, for example, the corresponding publications by the OHCHR or the Danish Institute for Human Rights. The following examples underscore the need for gender-responsive and inclusive approaches in the design, implementation, and monitoring of digital solutions in international development cooperation:

Gender Bias in AI Algorithms:

An AI-powered job recruitment platform used in an international development project inadvertently perpetuates gender bias by favoring male candidates over equally qualified female candidates. This exacerbates gender inequality in employment opportunities, hindering efforts to promote gender equality and women’s empowerment.

Exclusion of Persons with Disabilities from Digital Access:

A digital literacy program implemented in an international development project fails to consider the needs of persons with disabilities, such as providing accessible formats or adaptive technologies. This exclusion further marginalizes persons with disabilities, denying them access to vital information and opportunities for social and economic participation.

Child Exploitation and Online Safety Risks:

A digital education initiative introduces internet-enabled devices and online learning platforms in schools without adequate safeguards to protect children from online predators and harmful content. This exposes children to risks of cyberbullying, exploitation, and inappropriate material, undermining their safety and well-being.

Cultural Marginalization of Indigenous Peoples:

A digital land registry project implemented in indigenous territories by international development cooperation fails to recognize and respect indigenous land tenure systems and customary rights. This marginalizes indigenous communities, leading to land disputes, displacement, and loss of cultural heritage and identity.

LGBTI+ Discrimination in Digital Spaces:

A social media platform used in an international development project becomes a breeding ground for hate speech and discrimination against LGBTI+ individuals due to inadequate moderation and enforcement of community guidelines. This perpetuates stigma, harassment, and violence against LGBTI+ people, undermining their right to freedom of expression and safety online.

Privacy Violations and Gender-Based Violence:

A digital health information system implemented in a humanitarian context inadvertently exposes sensitive health data of survivors of gender-based violence due to lax data security measures. This compromises survivors’ privacy and safety, deterring them from seeking essential healthcare and support services.

Digital Divide Reinforcing Gender Inequality:

A government-led initiative to provide free internet access in rural areas of a developing country inadvertently widens the digital gender gap by failing to address barriers such as affordability, digital literacy, and social norms that limit women’s access to and use of digital technologies. This perpetuates inequalities in education, employment, and civic participation, hindering progress towards gender equality.

General Recommendation according to the Risk Analysis Scoring of your answers
Recommendation

Based on your answers to the risk analysis questions, there do not seem to be high risks. However, you may still need to pay extra attention to the application and use of the digital component, solutions or tools. Slight changes to the initial plan or context may pose significant risks. Further sections around participation, inclusion and stakeholder engagement remain highly relevant even if potential risks seem low.
If in doubt, consult with experts or read the guidance on how to conduct a human rights impact assessment (for example, 'Guidance for HRIA of digital activities' of the Danish Institute for Human Rights, internet link: https://www.humanrights.dk/publications/human-rights-impact-assessment-digital-activities)

General Recommendation according to the Risk Analysis Scoring of your answers
Recommendation

Based on your answers to the risk analysis questions, there do not seem to be high risks overall. However, either you did not know at least some of the answers or you identified at least one risk as high (e.g. country context and/or sector), which means that you will need to pay extra attention to the application and use of the digital component, solutions or tools. Slight changes to the initial plan may pose significant risks.
If in doubt, consult with experts or read the guidance on how to conduct a human rights impact assessment (for example, 'Guidance for HRIA of digital activities' of the Danish Institute for Human Rights, internet link: https://www.humanrights.dk/publications/human-rights-impact-assessment-digital-activities)

General Recommendation according to the Risk Analysis Scoring of your answers
Recommendation

Based on your answers to the risk analysis questions, either you did not know any of the answers or you already identified at least two high project risk factors or three high context risk factors. We strongly recommend that you conduct a risk assessment that takes the recommendations and risks identified in this Digital Rights Check into consideration and that includes a privacy assessment in case data is involved that can be linked to persons. Heightened attention will need to be paid to all kinds of data processing, AI applications etc., particularly with regard to groups that are vulnerable or marginalized in the specific context. Any country context analysis performed during the project design phase should be updated with the specific digital products or services and their (potential) users in mind.
Conduct a human rights impact assessment which deals with the following aspects (see the Guidance for HRIA of the Danish Institute for Human Rights):
- Meaningful participation of actually or potentially affected rightsholders.
- Inclusive and gender-sensitive engagement and consultation processes.
- Capacity building of individuals and groups at risk of vulnerability or marginalisation.
- Transparent character of the impact assessment process to adequately engage affected or potentially affected rightsholders.
- The impact assessment team is supported by human rights expertise.
- Human rights standards constitute the benchmark for the impact assessment. Impact analysis, assessment of impact severity and design of mitigation measures are guided by international human rights standards and principles.
- The assessment identifies actual and potential human rights impacts that the technology caused or contributed to, as well as impacts that are directly linked to the technology.
- Impacts are addressed according to the severity of their human rights consequences.
- All human rights impacts seem to be addressed. Where it is necessary to prioritise actions to address impacts, severity of human rights impacts is the core criterion. Addressing identified impacts follows the mitigation hierarchy of ‘avoid-reduce-restore-remediate’.
- Impacted rightsholders have avenues whereby they can raise grievances regarding the digital project, products or services, as well as the impact assessment process and outcomes.
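
Purely for illustration, the three recommendation tiers above could be reproduced by a scoring rule along the following lines. The actual scoring of the Digital Rights Check is not spelled out here, so the categories and thresholds below are assumptions inferred from the recommendation texts:

from dataclasses import dataclass

@dataclass
class Answers:
    high_project_risks: int   # high-risk answers on project-type questions
    high_context_risks: int   # high-risk answers on country-context questions
    unknown_answers: int
    total_questions: int

def recommendation_tier(a: Answers) -> str:
    # Tier 3: all answers unknown, or >= 2 high project / >= 3 high context risk factors.
    if (a.unknown_answers == a.total_questions
            or a.high_project_risks >= 2 or a.high_context_risks >= 3):
        return "high: conduct a full human rights impact assessment"
    # Tier 2: some unknowns, or at least one risk identified as high.
    if a.unknown_answers > 0 or (a.high_project_risks + a.high_context_risks) >= 1:
        return "elevated: pay extra attention; small changes may raise risks"
    # Tier 1: no high risks and no unknowns.
    return "low: monitor; participation and stakeholder engagement remain relevant"

print(recommendation_tier(Answers(1, 0, 2, 12)))  # -> elevated tier

Whatever the exact rule, the design intent is the same: ‘I don’t know’ answers and high-risk factors should push the recommendation toward a fuller assessment.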

Were intended users and/or other rightsholders (Project-Affected People, PAP) involved in the design of the digital solution/s or tool/s?

According to the Principles for Digital Development, endorsed among others by KfW (see Resources), to design with people means to invite those who will use or be affected by a given technology, policy, solution, or system to lead or otherwise meaningfully participate in the design of those initiatives. PAP means Project-Affected People.

Comment - Yes

Great! Request their feedback also for updates, expansions, and quality checks. Establish inclusive avenues for feedback and redressal that are regularly monitored.

Potentially impacted human rights and principles - No

Right to meaningful participation

Recommendation - No

Involve them at least now or in updates of the solution/s or tool/s. Establish inclusive avenues for feedback and redressal that are regularly monitored.

Potentially impacted human rights and principles - I don't know

Right to meaningful participation

Recommendation - I don't know

Check their involvement and if the answer is NO, involve them at least now or in updates of the solution/s or tool/s. If the answer is YES, request their feedback also for updates, expansions, and quality checks. Establish inclusive avenues for feedback and redressal that are regularly monitored.

Three (fictitious, constructed) cases show that digital design without any prior involvement of the intended users may result in weak products, poor accessibility of the intended services and even, in part, human rights violations:

Case 1: Microfinance Digital Platforms Excluding Vulnerable Populations

An international financial development cooperation program introduced a digital platform for microfinance in a rural region, aimed at providing small loans to underserved populations to boost local entrepreneurship and economic development. – Consequences:

  1. Exclusion of Technologically Illiterate Users: The platform was designed without input from the local population, many of whom had limited experience with digital technology. As a result, a significant portion of the target group, especially elderly and less educated individuals, were unable to access the microloans.
  2. Data Privacy Violations: The platform collected extensive personal and financial data without implementing adequate data protection measures. Users were not informed about how their data would be used or protected, leading to privacy violations and potential misuse of their information.
  3. Unintended Discrimination: The algorithm used to assess loan eligibility was biased towards individuals with higher digital footprints, inadvertently discriminating against women and minority groups who had less online presence and digital activity.

Case 2: Mobile Payment Systems Exacerbating Gender Inequality

A digital mobile payment system was implemented to facilitate financial transactions in a developing country, with the intention of promoting financial inclusion and reducing cash dependency. – Consequences:

  1. Lack of Gender Considerations: Women, particularly in rural and conservative areas, were not consulted during the design phase. As a result, the mobile payment system did not account for the cultural and social barriers that women faced in accessing and using mobile phones, such as restrictions on phone ownership or usage.
  2. Security Concerns: The system lacked robust security features, leading to a rise in digital fraud and theft. Women, who often had less experience with digital technology, were disproportionately affected by these security breaches, losing money and trust in the system.
  3. Exacerbation of Economic Inequalities: Men, who generally had more access to mobile phones and digital literacy training, were able to adopt the mobile payment system more quickly and benefit from it. This widened the economic gap between men and women, undermining the program’s goal of financial inclusion.

Case 3: Digital Credit Scoring Undermining Trust and Transparency

A digital credit scoring tool was introduced to streamline the process of assessing creditworthiness for small business loans, aiming to improve access to credit for small and medium-sized enterprises (SMEs) in an emerging market. – Consequences:

  1. Lack of Transparency: SME owners and other stakeholders were not involved in the development of the credit scoring algorithm. Consequently, the criteria used for credit assessments were opaque, leading to a lack of trust in the system. Business owners did not understand why they were denied credit, which led to frustration and distrust in financial institutions.
  2. Bias in Algorithm: The algorithm was biased towards businesses with more digital transactions and online presence. Small, traditional businesses, often run by minorities and lower-income individuals who relied on cash transactions, were unfairly rated as less creditworthy, limiting their access to necessary funding.
  3. Exclusion of Informal Sector: The tool did not account for the realities of the informal sector, which constitutes a significant portion of the economy in many developing countries. Informal businesses, often run by the poorest and most marginalized individuals, were excluded from accessing credit, perpetuating cycles of poverty and economic marginalization.

Do the intended users and/or other rights holders (project-affected people) belong to disadvantaged people and marginalized / vulnerable groups?

Particular attention must be paid to vulnerable/marginalized individuals/groups, socially or economically disadvantaged people or people with special needs, as they are at a higher risk of being unable to anticipate, cope with, resist and recover from project-related risks and adverse impacts. These depend on the context and may include, for example, children, women, religious or ethnic minorities, persons with disabilities, indigenous people and poorer groups of people like migrants or refugees.

Potentially impacted human rights and principles - Yes

Right to non-discrimination, right to inclusion

Recommendation - Yes

Particular attention must be paid to vulnerable/marginalized individuals/groups, socially or economically disadvantaged people or people with special needs, as they are at a higher risk of being unable to anticipate, cope with, resist and recover from project-related risks and adverse impacts.

You should always make an individual assessment of vulnerabilities / marginalization based on the local realities, possibly referring to other analyses such as the target group and impacted people analysis (ZGBA), including the gender analysis.

Comment - Yes

Seeing as vulnerable or marginalized groups may be impacted by the project, extra care must be taken to ensure that the vulnerable or marginalized groups are not adversely impacted and, if necessary, empowered to participate and make use of their rights. Ensure that those groups and/or their legitimate representatives are consulted and engaged with in order to be able to assess any negative impacts particularly concerning the groups in question. 

Comment - No

In most scenarios this might imply that human rights risks are lower. However, make sure to consider potential discrimination risks related to vulnerable or marginalized groups not being part of the intended users, and screen the group of users as regards the availability, accessibility, affordability, acceptability and quality of public services (partially reflected by the Principles for Digital Development) and their ability to make use of their rights and entitlements. Make sure to specifically consider whether those purposefully not engaging in the project belong to vulnerable or marginalized groups. 

Potentially impacted human rights and principles - I don't know

Right to non-discrimination, right to inclusion

Recommendation - I don't know

Consult the further resources, and consider whether or not vulnerable or marginalized groups may be impacted by the project. Consider both direct users, but also those that might otherwise be impacted. Screen the group of users as regards the availability, accessibility, affordability, acceptability and quality of public services and their ability to make use of their rights and entitlements. 

Comment - I don't know

Consider whether vulnerable or marginalized groups may in fact be impacted. This can also mean that they are excluded from a service. 

Consider the following example: a country introduces an e-registration system for job-seekers. However, migrant workers tend not to register. While the migrant workers are not users of the service as such, they are still potentially impacted by the move to an e-registration system.

As an example, while a digitalization project to improve purely internal administrative processes within a government agency does not seem to involve marginalized groups, it might if the processes affect decision-making of importance to individuals (e.g. decisions about social security payments).

For illustration, we would like to draw your attention to some typical and generic examples of digital solutions or tools used or applied in a way that significant negative human rights impacts for disadvantaged groups of people have occurred:

Inaccessible Education Platforms for Persons with Disabilities:

A digital learning platform supported by international development cooperation is not designed to be accessible to persons with disabilities. As a result, students with visual impairments or mobility limitations are unable to fully participate in online courses, perpetuating educational inequalities.

Discriminatory Employment Screening Algorithms:

A recruitment tool funded by international development cooperation uses algorithms that unintentionally discriminate against women and ethnic minorities, favoring candidates with characteristics deemed more ‘mainstream’. This exacerbates existing disparities in employment opportunities and perpetuates systemic discrimination.

Exclusionary Digital Financial Services:

An international development project promotes the adoption of mobile banking services in a low-income community. However, the digital platform lacks features for non-literate users and those without smartphones, disproportionately excluding poor and marginalized individuals from accessing financial services.

Surveillance of LGBTI+ Activists:

A digital communication platform supported by international development cooperation for civil society engagement inadvertently compromises the safety of LGBTI+ activists. Weak encryption protocols and inadequate data protection measures expose activists to surveillance, harassment, and violence, infringing on their right to privacy and freedom of expression.

Cultural Insensitivity in Health Apps:

An international development agency develops a health app aimed at promoting maternal and child health in indigenous communities. However, the app fails to consider cultural practices and beliefs, leading to distrust and underutilization among indigenous women, exacerbating maternal and child health disparities.

Child Exploitation on Social Media Platforms:

A social media platform supported by international development cooperation becomes a breeding ground for child exploitation and trafficking due to lax moderation policies and enforcement. Children, especially those from marginalized communities, are targeted by predators, leading to severe harm and trauma.

Biased Facial Recognition Systems:

International development cooperation funds the implementation of facial recognition technology for border control in a country with a diverse population. However, the system exhibits racial bias, disproportionately misidentifying individuals from ethnic minorities, leading to wrongful detentions and discrimination.

These examples underscore the importance of ensuring that digital solutions and tools promoted by international development cooperation are designed and implemented in a manner that upholds human rights principles, fosters inclusivity, and avoids perpetuating discrimination and inequality.

Which of the following groups may be impacted by the solution/s or tool/s?

This question is a follow-up to the previous one. The list is not exhaustive and there may, depending on the context, be other vulnerable, disadvantaged or marginalized groups that have been identified as potentially impacted by the project. If so, please select the "other" option and write down the vulnerable/marginalized group you have identified.

Recommendation - Women and girls

Conduct rights holder engagement, and consider ‘women and girls’ specifically, including:

● Consult women/girls separately in a gender-sensitive and child-friendly manner

● Include female team members in the human rights impact assessment team

● Include human rights impact assessment team members with knowledge of the particular rights and experiences of women and girls, particularly in relation to digital projects, products and services

● Exclude male team members from certain interviews

● Provide a safe and comfortable space for interviews and ensure confidentiality

● Include particularly marginalized sub-groups (e.g. female human rights defenders, teenage girls, etc.)

● Consider proactive and innovative approaches to lower the barrier for women – especially mothers – to engage (e.g. providing childcare during meetings) 

Consider a potential digital divide excluding women and girls from using the digital solution/s or tool/s and how to overcome it. 

Recommendation - Children and young persons

Conduct rights holder engagement, and consider ‘children and young persons’ specifically, including:

● Conduct consultation with children of different genders in coordination with child participation experts to facilitate participation respecting ethical standards

● Design the process so it is accessible, inclusive and meaningful for children

● Ensure voluntary participation in child-friendly environment

● Ensure that engagement will not do any harm to children and young persons

● Conduct consultations both with and about children and young people of different genders

● Consider engagement with parents and caregivers, teachers, community leaders, youth organisations and others with children’s best interests in mind. Consider separate interviews with mothers and/or youth/girls’ representatives, if they cannot speak openly in mixed groups. 

Consider a potential digital divide excluding children and youth – especially girls – from using the digital solution/s or tool/s and how to overcome it. 

Recommendation - Persons with disabilities

Conduct rights holder engagement, and consider ‘persons with disabilities’ specifically, including:

● When engaging with persons with particular physical or psychological disabilities, ensure that the location for meetings and/or the way of engaging is accessible and measures are taken to make engagement meaningful (e.g. ensuring sign language interpretation, information available in braille). 

● Consider engagement with (local) representatives of people with disabilities, e.g. relevant civil society organizations. 

● Apply the rules for barrier-free digital information services and consider how they would need to be applied in the given context 

Consider a potential digital divide excluding persons with disabilities from using the digital solution/s or tool/s and how to overcome it. 

Recommendation - Indigenous peoples

Conduct rights holder engagement, and consider ‘indigenous peoples’ specifically, including:

● Include human rights impact assessment team members with knowledge of indigenous peoples’ rights and local context, including gender-specific context and digital divide;

● Respect indigenous representative institutions, be sure to understand the cultural and organisational characteristics of indigenous peoples and hierarchy of authorities in order to engage with the right people in the right order and manner

● Use appropriate language for the context

● For projects targeting or otherwise impacting indigenous peoples, ensure that para- and per-indigenous methodologies (i.e. “with” and “by” indigenous peoples) are the basis for their development, when possible. 

● There is a risk of imposing unwanted processes or structures upon indigenous recipients. By analogy, follow the principles of ‘Free, Prior and Informed Consent’ in addition to the Principles for Digital Development.

Recommendation - Minorities (national, racial, ethnic, religious or political)

Conduct rights holder engagement, and consider ‘minorities (national, racial, ethnic, religious or political)’ specifically, including:

● Minorities may speak another language than the national language; engagement with minority groups of different genders should be conducted in a language they understand and feel most comfortable communicating in

● Engagement should be culturally and gender-appropriate

● Given the different characteristics of specific minority groups, it can be useful to include an anthropologist in the team who has expertise in engaging with the minority group in question

● Ensure wide participation from within the minority community during engagement rather than only dealing with select community leaders who may not represent the community as a whole, in particular concerning gender.

Consider a potential digital divide excluding minorities from using the digital solution/s or tool/s and how to overcome it. 

Recommendation - Older persons

Conduct rights holder engagement, and consider ‘older persons’ specifically, including:

● When engaging with older persons of different genders, ensure that the location for the meetings and mode of engaging is accessible, bearing in mind the greater likelihood of particular needs (e.g. wheelchair-friendly access and simple and user-friendly digital solution/s or tool/s) 

Consider a potential digital divide excluding older people – especially older women – from using the digital solution/s or tool/s and how to overcome it. 

Recommendation - Migrants, refugees, stateless and (internally) displaced persons

Conduct rights holder engagement, and consider ‘migrants, refugees, stateless and displaced persons’ specifically, including: 

● Due to their insecure legal status, individuals belonging to this rights holder group, especially those without a residence permit, may be hesitant to speak openly, fearing that they may face repercussions; it is important to provide a safe space when engaging with migrants, refugees, stateless and/or displaced persons of different genders.

● While it is generally imperative in rights holder engagement to keep the identities of interviewees confidential, for this group confidentiality requires particular attention.

● Consider remote or virtual engagement via encrypted communication channels to protect their safety. While doing this, consider a potential digital divide within this group and how to overcome it.  

Recommendation - Workers and trade unions

Conduct rights holder engagement, and consider ‘workers and trade unions’ specifically, including:

● Make sure to meet different categories of workers and trade union leaders (e.g. by gender, position, unionised vs. non-unionised, etc.)

● Include ‘informal workers’ of different genders in human rights impact assessment

● Fix a time that suits their work schedules

● Consider interviewing workers outside of company premises and outside working hours. 

Recommendation - Lesbian, gay, bisexual, transgender, intersex and queer (LGBTIQ+) persons

Conduct rights holder engagement, and consider lesbian, gay, bisexual, trans, intersex and queer (LGBTIQ+) persons specifically, including:

● Assessors should be appropriately trained on LGBTIQ+ issues in the respective country/region before engaging with LGBTIQ+ persons

● Ensure that LGBTIQ+ persons feel comfortable to provide information by ensuring that the collected data remains confidential

● Be aware that data on various gender identities may not be sufficiently available, depending on the context and on individuals’ hesitation to seek official recognition of a gender-minority status

● Consider the possibility of anonymised forms of engagement and inclusive approaches not ignoring or leaving behind the (sometimes hidden) sexual minorities

● When designing engagement plans ensure that the communities concerned are represented in their full diversity. For this purpose, contact more than one civil society organization and/or interest group. 

Comment - None, e.g. because their data is not processed

For example, people from indigenous groups are collecting and analysing data on wildlife using sensors, cameras and AI, while their own personal data is not being collected.

Relevant data sources:

Relevant strategies, standards and guidance notes:

Relevant international conventions:

Two short case studies illustrate the value of stakeholder engagement and inclusion of vulnerable or marginalized groups of people:

Case 1: Digital Health Platform Excluding Women and Girls with Disabilities

An international development project introduced a digital health platform to provide remote healthcare services in a developing country. The platform aimed to offer telemedicine consultations, health information, and appointment scheduling, especially targeting rural areas with limited access to healthcare.

Women and girls with disabilities faced significant barriers in using the platform:

  1. Accessibility Challenges: The platform was not designed with accessibility features such as screen readers, voice commands, or simplified navigation, making it difficult for users with visual, auditory, and cognitive impairments to use the services.
  2. Lack of Inclusive Communication: Health information was presented in text-heavy formats without alternative formats such as audio or sign language videos, excluding those with learning disabilities or hearing impairments.

Measures for Improvement:

  1. Design with People (People-Centered or Human-Centered Design)
    • Involve Women and Girls with Disabilities: Engage women and girls with various disabilities in the design process through focus groups, interviews, and usability testing. This ensures that their needs and preferences are directly incorporated into the platform’s features.
    • Accessibility Experts Consultation: Work with accessibility experts to integrate universal design principles, ensuring the platform is usable by people with diverse disabilities. Features could include voice navigation, high-contrast themes, and compatibility with assistive technologies.
  2. Stakeholder Engagement
    • Collaboration with Disability Advocacy Groups: Partner with local disability advocacy organizations to understand the unique challenges faced by women and girls with disabilities. These groups can provide valuable insights and feedback on the platform’s design and functionality.
    • Inclusive Training Programs: Develop training materials in multiple formats (e.g., audio, braille, sign language) and conduct workshops to educate women and girls with disabilities on how to effectively use the digital health platform.

Case 2: Digital Education Tool Excluding Girls with Disabilities

An international development initiative launched a digital education tool to enhance learning opportunities for children in remote and underserved areas. The tool provided interactive lessons, digital textbooks, and educational games.

Girls with disabilities faced significant barriers in accessing and benefiting from the tool:

  1. Limited Physical Accessibility: The tool did not support adaptive devices, making it challenging for girls with physical disabilities to interact with the software.
  2. Content Inaccessibility: Educational content was not available in accessible formats such as braille, sign language, or simplified text, limiting its usability for girls with visual, hearing, and cognitive disabilities.

Measures for Improvement:

  1. Design with People (User-Centered Design)
    • Inclusive Co-Design Workshops: Organize co-design workshops with girls with disabilities, their caregivers, and educators to collaboratively develop features that cater to their specific needs. This could involve creating adaptable interfaces and ensuring the content is accessible to all users.
    • Prototype Testing: Develop prototypes and conduct testing sessions with girls with disabilities to gather feedback and iterate on the design. This helps identify and address usability issues early in the development process.
  2. Stakeholder Engagement
    • Partnership with Inclusive Education Experts: Collaborate with experts in inclusive education to ensure that the digital education tool aligns with best practices for accessibility and inclusivity. These experts can help tailor the educational content to be more engaging and accessible for girls with disabilities.
    • Community Engagement: Engage with local communities, including parents, teachers, and disability advocates, to raise awareness about the tool and gather ongoing feedback. Community involvement ensures the tool remains relevant and effective in meeting the needs of girls with disabilities.

Conclusion

By incorporating people-centered design principles and engaging relevant stakeholders, international development projects can create digital solutions that are more inclusive, user-friendly, and sustainable. These measures ensure that women and girls with disabilities are not left behind and can fully benefit from the services provided, leading to improved outcomes and greater social inclusion.

Are the intended users or beneficiaries of the digital solution/s or tool/s individual persons?

The digital solution/s or tool/s can be meant for individual persons (e.g. a digital education platform for individual users) or it can be targeted at e.g. an organization (e.g. an AI model helping with administrative processes with no interaction with external individuals). If the digital solution/s or tool/s is/are meant for individuals, there is an increased need to focus on their ability to access and use it/them.

Comment - Yes

If the digital solution/s or tool/s is/are meant for individual persons, it will be important to make sure it is accessible.

Comment - No

In most scenarios this might imply that human rights risks are lower. However, make sure to consider potential discrimination risks related to accessibility nonetheless since institutions also have individual users.

Comment - I don't know

If it concerns e.g. online classrooms, track-and-trace apps, chatbots in public service, tele-medicine platforms, or digital communications platforms, there are likely to be individual persons as users or beneficiaries. Since you don’t know, the Check assumes that there may be individuals involved. 

The following fictitious case study, provided to illustrate what the question of accessibility means in practice, covers some of the most relevant aspects of accessibility in administrative reforms and digitalised public services.

In recent years, governments worldwide have been investing heavily in administrative reforms and the digitalisation of public services. While these changes aim to streamline operations, increase efficiency, and improve citizen satisfaction, they also present significant challenges and opportunities concerning accessibility. Here we explore how these digital reforms impact various vulnerable groups:

People with Disabilities

Challenges might be:

  • Visual Impairments: Difficulty in accessing text-heavy digital platforms that are not compatible with screen readers.
  • Hearing Impairments: Lack of sign language options and captions in multimedia content.
  • Motor Impairments: Challenges in navigating websites that require precise mouse control or fast responses.

Best Practices worth considering:

  • Assistive Technologies: Integration of screen readers, voice recognition, and keyboard navigation to aid visually and physically impaired users.
  • Compliance with Accessibility Standards: Adhering to guidelines such as the Web Content Accessibility Guidelines (WCAG) to ensure websites and applications are accessible (see the sketch after this list).
  • Inclusive Design: Involving people with disabilities in the design and testing of digital services to address specific needs and preferences.
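
To make ‘compliance with accessibility standards’ more concrete, here is a minimal sketch that checks a single WCAG criterion: the colour-contrast ratio between text and background. The luminance and ratio formulas follow the WCAG 2.x definitions; all function and variable names are our own illustrative choices, and a real accessibility audit covers far more than contrast.

```typescript
// Minimal sketch: checking the WCAG 2.x colour-contrast criterion.
// Formulas follow the WCAG definitions of relative luminance and
// contrast ratio; names are illustrative, not a standard API.

function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(rgb: [number, number, number]): number {
  const [r, g, b] = rgb.map(channelToLinear);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// WCAG 2.1 level AA requires at least 4.5:1 for normal-size text.
const ratio = contrastRatio([85, 85, 85], [255, 255, 255]); // grey on white
console.log(`${ratio.toFixed(2)}:1 -> ${ratio >= 4.5 ? "passes" : "fails"} AA`);
```

Automated checks like this catch only a fraction of accessibility barriers; inclusive design and testing with users with disabilities, as recommended above, remain indispensable.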

Elderly Population

Challenges might be:

  • Technological Literacy: Many elderly individuals may lack familiarity with modern digital devices and online platforms.
  • Cognitive Impairments: Issues such as memory loss and reduced cognitive function can make navigating complex digital services difficult.

Best Practices worth considering:

  • User-Friendly Interfaces: Simplifying navigation with clear, large icons and straightforward instructions can help improve usability.
  • Support Programs: Digital literacy workshops and personalized assistance through helpdesks.
  • Alternative Access: Maintaining traditional service channels to cater to those unable or unwilling to use digital platforms, e.g. physical locations and telephone services where elderly individuals can receive help in person or by phone.

Low-Income Families

Challenges might be:

  • Access: Limited access to technology and internet connectivity.
  • Affordability: The cost of data plans and devices can be prohibitive.

Best Practices worth considering:

  • Public Access Points: Establishing free internet access points in community centers, libraries, and other public spaces.
  • Subsidized Technology Programs: Providing low-cost or free devices and internet packages to eligible families.
  • Offline Solutions: Offering downloadable forms and information that can be accessed without an internet connection.

Non-native Speakers and Immigrants

Challenges might be:

  • Language Barriers: Difficulty understanding and navigating services that are not available in multiple languages.
  • Cultural Differences: Differences in understanding governmental processes and expectations.

Best Practices worth considering:

  • Multilingual Support: Offering services in multiple languages and providing translation services.
  • Cultural Sensitivity Training: Training staff to understand and respect cultural differences, ensuring a more welcoming and effective service environment.
  • Simplified Language: Using plain language in all communications to ensure they are easily understood by individuals with varying levels of language proficiency.

Generally speaking, the following recommendations for accessible digital public services are worth considering:

  • Continuous Feedback Loop: Regularly gathering and acting on feedback from vulnerable users to improve services.
  • Cross-Sector Collaboration: Partnering with NGOs, tech companies, and community organizations to develop and implement accessibility solutions.
  • Sustainable Funding: Ensuring long-term financial support for accessibility initiatives.

Is/are the digital solution/s or tool/s accessible, particularly to vulnerable or marginalized groups?

Examples of factors affecting accessibility are:
- costs: is it affordable to most people (taking into account the cost of hardware as well as data tariffs)?
- language barriers
- (digital) literacy of users
- ICT infrastructure: is the necessary physical ICT infrastructure (e.g. broadband access, a stable internet connection) in place?
- discrimination: do certain societal groups, e.g. women and girls, face additional social or cultural barriers?
- awareness: is everyone in the intended user group aware of the product or service?
See further resources for additional considerations.

Comment - Yes, and it/they has/have specifically been assessed.

Consider adopting a monitoring process to ensure its/their accessibility over time.

Potentially impacted human rights and principles - No

Right to meaningful participation, equality and freedom from discrimination

Recommendation - No

Work together with vulnerable/marginalized groups and/or their representatives to ensure that the tool/s, solution/s, product/s or service/s is/are accessible, in particular to vulnerable/marginalized groups. 

Potentially impacted human rights and principles - I don't know

Right to meaningful participation, equality and freedom from discrimination

Recommendation - I don't know

Engage with potentially vulnerable/marginalized groups that might not find the digital tool/s, solution/s, product/s or service/s accessible and hear their views, to better understand the overall accessibility. 

Here we give three examples illustrating the accessibility of digital solutions, especially with regard to vulnerable people in disadvantaged situations:

Example 1: Digital Financial Services for Refugees

An international development project introduced a digital financial services platform aimed at providing refugees with access to banking, remittances, and mobile payments in a Middle Eastern country hosting a large refugee population. Issues encountered might be:

  1. Lack of Legal Documentation: Many refugees did not possess the necessary identification documents required to register for the digital financial services, excluding them from accessing the platform.
  2. Language Barriers: The platform was available only in the host country’s official language, which many refugees did not speak fluently, making it difficult for them to navigate and use the services.
  3. Limited Digital Literacy: Refugees, especially those who had been displaced for extended periods, often lacked the digital literacy skills needed to use the platform effectively.

The possible consequences might be:

  • Financial Exclusion: Refugees continued to rely on informal financial networks, which were often unreliable and exploitative.
  • Increased Vulnerability: Without access to formal financial services, refugees faced greater risks of poverty and exploitation.

Example 2: Digital Land Registration System for Indigenous People

A digital land registration system was implemented in a South American country to streamline land titling and reduce land conflicts. The project aimed to formalize land ownership and improve land management. Issues encountered might be:

  1. Cultural Incompatibility: The system did not take into account the communal and traditional land ownership practices of indigenous people, which did not align with the individual land titling model promoted by the digital system.
  2. Language and Literacy Barriers: The system was not available in indigenous languages, and many indigenous people had low literacy rates, preventing them from understanding and using the platform.
  3. Infrastructure Gaps: Indigenous communities in remote areas lacked access to the internet and digital devices, making it impossible for them to use the online registration system.

The possible consequences might be:

  • Exclusion from Land Rights: Indigenous people were unable to register their land, leading to legal uncertainties and vulnerability to land grabs.
  • Erosion of Traditional Practices: The push for individual land titling threatened to undermine communal land ownership and traditional ways of life.

Example 3: Telehealth Services for Older People in Remote Areas

An international development initiative launched a telehealth service to provide remote medical consultations and healthcare information to people living in remote areas of an African country, aiming to improve healthcare access. Issues encountered might be:

  1. Technological Barriers: Older people in remote areas often did not have access to smartphones or computers, nor did they have the skills to use such devices effectively.
  2. Internet Accessibility: Remote areas suffered from poor internet connectivity, making it difficult to access telehealth services reliably.
  3. Trust and Usability Issues: Older individuals were often skeptical of digital solutions and found the telehealth platform difficult to use due to complex navigation and lack of user-friendly interfaces tailored to their needs.

The possible consequences might be:

  • Healthcare Access Disparities: Older and socially disadvantaged individuals continued to face significant barriers in accessing healthcare, leading to worsened health outcomes.
  • Increased Isolation: The failure to provide accessible telehealth services exacerbated the isolation of older people, who were already marginalized due to their geographic and social circumstances.

Is the accessibility of the digital solution/s or tool/s monitored, or will there be a plan for monitoring?

Is there a process in place to monitor who is and who is not using the solution/s or tool/s, and whether there is a simple way to provide feedback on accessibility concerns, among other things?

Comment - Yes

Use and improve your monitoring process to improve accessibility over time. 

Potentially impacted human rights and principles - No

Right to meaningful participation, equality and freedom from discrimination

Recommendation - No

Work together with vulnerable/marginalized groups and/or their representatives to ensure their participation in the monitoring process. 

Potentially impacted human rights and principles - I don't know

Right to meaningful participation, equality and freedom from discrimination

Recommendation - I don't know

Work together with vulnerable/marginalized groups and/or their representatives to ensure their participation in the monitoring process. 

Have you or the Project Executing Agency (PEA) or the FC consultant specifically engaged stakeholders on the potential impacts of the digital solution/s or tool/s?

Stakeholders can be internal (various project staff and functions) or external (including government actors and project partners involved in the development or use of the digital solution/s or tool/s, as well as civil society organisations, academic institutions and rightsholder groups). Engagement can take many forms, such as focus groups, in-person or virtual interviews, mobile data collection or crowdsourcing, or public hearings.

Potentially impacted human rights and principles - Yes, internal stakeholders.

Right to an effective remedy and right to meaningful participation

Recommendation - Yes, internal stakeholders.

Engage with external stakeholders as soon as the initial internal analysis of potential negative human rights impacts has been made. Focus specifically on vulnerable/marginalized groups previously identified. 

Comment - Yes, internal stakeholders.

Engaging with internal stakeholders is important in order for everyone to have a common understanding of the issues, to validate internal analyses, and to understand potential issues that were not previously identified. 

Potentially impacted human rights and principles - No

Right to an effective remedy and right to meaningful participation

Recommendation - No

Engage with both internal and external stakeholders on the topic of potential negative human rights impacts. Focus specifically on vulnerable/marginalized groups previously identified. 

Comment - No

Engaging with internal stakeholders on the topic of potential unintended negative human rights impacts is important in order to build internal capacity and ownership of the management of risks. Engaging with external stakeholders is important to validate your findings and analysis with the stakeholders that might have further insights to potential impacts, including the rights holders that might be impacted. 

Potentially impacted human rights and principles - I don't know

Right to meaningful participation and right to access to information

Recommendation - I don't know

Consult with the Project Executing Agency (PEA), its software developer, the FC consultant or others involved in the development and use of the digital solution/s or tool/s and see whether there has been any engagement with internal and external stakeholders on potential negative human rights impacts. 

A comprehensive stakeholder consultation and engagement process is essential for the successful implementation of digital solutions, ensuring they meet the needs of all users, especially vulnerable and marginalized groups. It might be structured as follows:

1. Preparation and Planning

  • Define Objectives: Clearly articulate the goals of stakeholder engagement. This includes understanding the potential impacts of the digital solution and gathering input to improve design and implementation.
  • Identify Stakeholders: Create a comprehensive list of stakeholders, including intended users (e.g., refugees, indigenous people, older adults), government agencies, NGOs, community leaders, and subject matter experts.
  • Develop a Stakeholder Engagement Plan: Outline the methods of engagement, timelines, communication strategies, and resources needed. Ensure the plan includes specific actions to reach and involve marginalized groups.

2. Stakeholder Identification and Analysis

  • Mapping Stakeholders: Identify and categorize stakeholders based on their influence, interest, and impact on the project. Use tools like stakeholder matrices to prioritize engagement efforts (see the sketch after this list).
  • Understanding Stakeholders: Conduct preliminary research to understand stakeholders’ perspectives, needs, and potential concerns. This may involve surveys, background research, and initial interviews.
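
As an illustration of the stakeholder-matrix idea mentioned above, the sketch below scores stakeholders on two conventional axes, influence and interest, and sorts them into common engagement categories. All names, scales and thresholds are hypothetical and for illustration only.

```typescript
// Illustrative sketch of a simple influence/interest stakeholder matrix.
// Scores (1-5) and the threshold of 3 are arbitrary assumptions.

interface Stakeholder {
  name: string;
  influence: number; // 1 (low) .. 5 (high)
  interest: number;  // 1 (low) .. 5 (high)
}

type Quadrant = "manage closely" | "keep satisfied" | "keep informed" | "monitor";

function quadrant(s: Stakeholder): Quadrant {
  const highInfluence = s.influence >= 3;
  const highInterest = s.interest >= 3;
  if (highInfluence && highInterest) return "manage closely";
  if (highInfluence) return "keep satisfied";
  if (highInterest) return "keep informed";
  return "monitor";
}

const stakeholders: Stakeholder[] = [
  { name: "Ministry of Health", influence: 5, interest: 3 },
  { name: "Local disability advocacy group", influence: 2, interest: 5 },
  { name: "Rural women's cooperative", influence: 1, interest: 4 },
];

for (const s of stakeholders) {
  console.log(`${s.name}: ${quadrant(s)}`);
}
```

Note that low-influence, high-interest stakeholders are often precisely the vulnerable rights holders this guidance focuses on; such a matrix should be used to plan extra outreach towards them, not to deprioritize them.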

3. Engagement and Consultation

  • Inclusive Consultation Methods: Use diverse and inclusive methods to engage stakeholders, ensuring accessibility. This could include:
    • Workshops and Focus Groups: Conduct in-person or virtual workshops and focus groups tailored to different stakeholder groups, ensuring accessibility for people with disabilities.
    • Surveys and Questionnaires: Distribute surveys and questionnaires in multiple languages and accessible formats to gather broad input.
    • Public Meetings and Forums: Hold public meetings in accessible locations and times to encourage wide participation.
    • Digital Platforms: Utilize online forums, social media, and dedicated websites for broader reach, ensuring these platforms are accessible.
  • Engage Vulnerable Groups: Make special efforts to engage vulnerable and marginalized groups through targeted outreach, partnerships with local organizations, and adapted engagement methods (e.g., using local languages, providing transportation).

4. Documentation and Analysis

  • Record Feedback: Document all feedback received during the consultation process. Use tools like transcripts, meeting minutes, and digital recordings.
  • Analyze Input: Analyze the feedback to identify common themes, concerns, and suggestions. Ensure that the analysis includes input from marginalized groups to highlight specific issues they face.

5. Integration and Action

  • Incorporate Feedback: Integrate the analyzed feedback into the project design and implementation plans. Ensure that changes address the specific needs and concerns of all stakeholder groups, especially the marginalized.
  • Communicate Outcomes: Share the results of the consultation process with stakeholders, explaining how their input was used and what changes were made. This builds trust and accountability.

6. Implementation and Capacity Building

  • Training and Support: Provide training and capacity-building activities for stakeholders, especially public administration staff, to ensure they can effectively use and support the digital solution.
  • Technical Assistance: Offer ongoing technical assistance to address any issues that arise during the implementation phase.

7. Monitoring and Evaluation

  • Develop Monitoring Framework: Create a framework for ongoing monitoring and evaluation of the digital solution’s performance and impact. This should include:
    • Key Performance Indicators (KPIs): Define KPIs to measure the quality of services, user satisfaction, accessibility, and impact on vulnerable groups (see the sketch after this list).
    • Regular Reporting: Establish regular reporting mechanisms to track progress and identify issues.
  • Feedback Loops: Implement mechanisms for continuous feedback from users, including periodic surveys, user forums, and suggestion boxes.
  • Independent Audits and Reviews: Conduct independent audits (e.g. so-called ‘social audits’) and expert reviews to assess the quality and impact of the digital solution. This ensures an objective evaluation and accountability.
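
To illustrate what a disaggregated KPI could look like in practice, the sketch below computes uptake, i.e. active users as a share of the intended user group, per self-identified group. Group labels and figures are invented for illustration; in a real project, group definitions would come from the monitoring framework, and data collection would need to respect data protection rules.

```typescript
// Illustrative sketch: a disaggregated uptake KPI.
// Group labels and numbers are hypothetical.

interface UsageRow {
  group: string;    // self-identified group from monitoring data
  intended: number; // size of the intended user group
  active: number;   // users active in the reporting period
}

function uptakeByGroup(rows: UsageRow[]): Map<string, number> {
  const kpi = new Map<string, number>();
  for (const row of rows) {
    kpi.set(row.group, row.intended > 0 ? row.active / row.intended : 0);
  }
  return kpi;
}

// A large gap between groups (e.g. women vs. men, rural vs. urban) can
// flag an accessibility or digital-divide problem that warrants follow-up.
const kpi = uptakeByGroup([
  { group: "women, rural", intended: 1200, active: 240 }, // 20% uptake
  { group: "men, rural", intended: 1100, active: 660 },   // 60% uptake
]);
for (const [group, share] of kpi) {
  console.log(`${group}: ${(share * 100).toFixed(0)}% uptake`);
}
```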

8. Adaptive Management

  • Respond to Feedback: Use the data and feedback from monitoring and evaluation to make necessary adjustments and improvements to the digital solution.
  • Engage Stakeholders Continuously: Maintain ongoing engagement with stakeholders to ensure the solution remains relevant and effective. This could involve regular updates, additional consultations, and continuous collaboration.

Have rights holders, including non-users, and/or their legitimate representatives been engaged?

Rights holders include any individual whose rights might be impacted - both intended users and non-users (Project-Affected People).

Comment - Yes, intended users and other rights holders (esp. potential and actual Project-Affected People)

That means you have considered one significant aspect of stakeholder engagement, namely hearing the perspectives and thoughts directly from potentially impacted rights holders. The important point is to ensure that the engagement has been meaningful. 

Potentially impacted human rights and principles - Yes, intended users (target groups)

Right to an effective remedy and right to meaningful participation

Recommendation - Yes, intended users (target groups)

Develop a plan to engage with potentially impacted ‘non-users’ of the product/s or service/s (Project-Affected People).

Comment - Yes, intended users (target groups)

It is important that potentially impacted ‘non-users’ are also engaged during stakeholder engagement. Otherwise there is a risk that you are not aware of impacts related to e.g. those who might be impacted by the fact that they are not using the digital solution/s (e.g. those not ‘enrolled’ in a digital ID project).

Potentially impacted human rights and principles - No

Right to an effective remedy and right to meaningful participation

Recommendation - No

Develop a plan to engage with rights holders, including non-users and/or their legitimate representatives (Project-Affected People).

Comment - No

It is important that rights holders (users and others, i.e. Project-Affected People) are specifically engaged during stakeholder engagement. Otherwise, there is a risk that you miss significant impacts that other external stakeholders (who are not rights holders) are not aware of. 

Potentially impacted human rights and principles - I don't know

Right to an effective remedy and right to meaningful participation

Recommendation - I don't know

Consult with the Project Executing Agency (PEA), its software developer, the FC consultant or others involved in the development and use of the digital solution/s or tool/s and see whether engagement with rights holders (users and potentially affected non-users) has taken place. 

Fictitious Stakeholder Engagement Story:

Story: A Right to Water – Azura’s Journey to Equitable Water Distribution

In the heart of the parched desert land of Azura, water scarcity was a daily battle. With limited resources and an outdated distribution system, the right to clean and accessible water was slipping from the grasp of many. The international cooperation project, “Water for All,” aimed to reform water quality and distribution using digital technology. This is the story of how the project, guided by the principle of “Leave No One Behind,” honored the rights of Azura’s most vulnerable communities and transformed their lives.

In the remote village of Zarah, Amina, a 70-year-old elder, had spent her entire life fetching water from distant wells. The community was tired of relying on expensive private water tanks, which often sold water at exorbitant prices. For Amina and her neighbors, the right to water meant access to affordable, reliable public water services. When the ‘Water for All’ project team arrived, Amina was skeptical. She had seen many promises come and go, leaving her village still parched and struggling. Donor fatigue left many wondering about the usefulness of new projects.

The project team, led by Maya, a dedicated and empathetic development professional, knew that genuine change required listening to the voices of those like Amina. They began their journey by holding meetings with local government agencies, NGOs and local citizen initiatives to map out key stakeholders. But Maya insisted on going deeper, reaching out to the people whose lives were most affected by the water crisis.

Engaging the Community and Integrating Feedback

Workshops were organized across Azura, from bustling cities to isolated rural areas. In Zarah, the team set up a community workshop. They provided transportation for those in need and ensured the venue was accessible for the elderly and disabled. Amina was hesitant but decided to attend, driven by the hope of securing a better future for her grandchildren.

During the workshop, the project team used local languages and offered childcare services to encourage participation. Focus groups were held with women, indigenous people, and refugees, ensuring that everyone’s voice was heard. Amina, speaking on behalf of her village, expressed the community’s desire for reliable, affordable water from public services rather than expensive private tanks.

The feedback from these workshops was meticulously documented and analyzed. Maya and her team discovered that many marginalized groups faced similar challenges—lack of legal documentation, language barriers, and limited digital literacy. Importantly, they also found a unanimous demand for public water services that upheld the community’s right to water. In response, they redesigned the digital water management system to include features like multi-language support, simplified interfaces, and offline functionality for areas with poor internet access.

Pilot Projects, Monitoring and Continuous Improvement

Pilot projects were launched in various communities, including Zarah. For the first time, sensors and smart meters were installed in the village’s water distribution system. Amina and her neighbors were trained on how to use the new technology, with materials provided in their native language and accessible formats. The project team stayed in close contact, ensuring that any issues were quickly addressed.

A monitoring framework was established with Key Performance Indicators (KPIs) focused on equitable access and user satisfaction. The team set up hotlines and suggestion boxes, promoting them widely to ensure everyone knew how to provide feedback. Independent audits were conducted regularly to assess the system’s effectiveness.

In Zarah, Amina and her community began to see real changes. Water was now more reliably available, and the quality had improved. Most importantly, they were no longer forced to buy overpriced water from private tanks. The village elders, once skeptical, now praised the project’s responsiveness and dedication.

Social Audit and Corruption Challenges

To ensure transparency and accountability, a social audit was conducted to verify the quality of water and the inclusiveness of its distribution. The audit involved community members, local NGOs, and independent experts. During the audit, it was discovered that certain local officials were involved in corrupt practices, diverting water supplies to private vendors who then sold the water at inflated prices.

However, brave citizens gathered evidence and presented their findings to the national anti-corruption agency using the newly introduced complaints mechanism. It was a dangerous endeavour, but the community stood united, using a digital service which allowed for anonymous complaints. The anti-corruption agency, impressed by the evidence, launched an investigation. The corrupt officials were suspended, and new, transparent processes were put in place to ensure the fair distribution of water. The community was involved in monitoring these processes, and regular social audits were scheduled to maintain transparency.

Happy End

Maya’s team continued to engage with all stakeholders, providing regular updates and holding additional consultations. They maintained an open dialogue with communities, ensuring the digital water management system evolved based on their needs. The Water for All project succeeded not just because of its technology, but because it truly listened to and engaged with the people it aimed to help. The digital water management system had not only brought clean water to Zarah but had also empowered its people, proving that with genuine engagement and the principle of Leave No One Behind, even the most ambitious projects can create lasting, inclusive change. Amina’s village and many others in Azura could now access their right to water through reliable and transparent public services, ensuring that the basic human right to clean water was upheld for all, not just those who could afford it. Through the power of community engagement and the determination to fight corruption, Azura’s water distribution system became a model of fairness and sustainability.

Have they provided input on the potential impacts of the project or the solution/s or tool/s?

This will be the case when the engagement consists not only of information sharing, but also gathers input from external stakeholders on the digital component/solution/s/tool/s and its/their potential impacts.

Comment - Yes

This is an important aspect of ensuring meaningful stakeholder engagement with regard to respecting human rights. 

Potentially impacted human rights and principles - No

Right to an effective remedy and right to meaningful participation

Recommendation - No

Develop a plan for engaging with rights holders, focusing on obtaining their input on human rights risks and impacts. 

Comment - No

It is important that any engagement with stakeholders, in general, and rights holders, in particular, is not one-way communication. Rather, it should be possible for rights holders or their legitimate representatives to input on the internal analysis as well as add their own perspective to topics that might not have been covered in that analysis. 

Potentially impacted human rights and principles - I don't know

Right to an effective remedy and right to meaningful participation

Recommendation - I don't know

Consult with the Project Executing Agency (PEA), its software developer, the FC consultant or others involved in the development and use of the digital solution or tool and see whether rights holders, including women and other groups potentially marginalized in this context, have had a chance to provide their own input. 

Have the issues raised by rights holders been addressed?

Consider what concerns were raised and assess whether any measures have been taken to prevent or mitigate those impacts.

Comment - Yes

That means you have considered one significant aspect of stakeholder engagement, namely to ensure that the engagement is meaningful and that it impacts the project implementation process as necessary. It is also important that you report back to the engaged stakeholders on what measures have been taken. 

Recommendation - No, but the process of addressing such issues is underway.

Report back to rights holders previously engaged on the progress of addressing identified potential human rights issues, and provide a preliminary timeline. 

Comment - No, but the process of addressing such issues is underway.

It is important that stakeholders, particularly vulnerable or marginalized rights holders, receive updates on the process so that they are able to assess whether adequate adjustments have been made. If actions are delayed, you should still report back to stakeholders with preliminary timelines, even if you are not able to report back on what exact actions will be taken. 

Potentially impacted human rights and principles - No, there are no such plans.

Right to an effective remedy and right to meaningful participation

Recommendation - No, there are no such plans.

Review the consultation material to see whether there are in fact adjustments and accommodations to be made to ensure that human rights impacts are adequately prevented, mitigated or otherwise addressed. 

Comment - No, there are no such plans.

In order to ensure that stakeholder engagement, particularly with vulnerable or marginalized groups of rights holders, is meaningful, it is essential that potential and actual negative human rights impacts are addressed. 

Potentially impacted human rights and principles - I don't know

Right to an effective remedy and right to meaningful participation

Recommendation - I don't know

Consult with the Project Executing Agency (PEA), its software developer, the FC consultant or others involved in the development and use of the digital solution/s or tool/s and see whether issues raised by rights holders have been addressed. 

Have you or the Project Executing Agency (PEA) reported publicly on the potential impacts, mitigation measures, stakeholder engagement and other processes related to the questions in this Digital Rights Check?

This can include specific reports on the digital component/solution/s/tool/s in question, but also larger reports that also include information about it, the potential impacts identified and other related activities. It is important to provide information that is relevant to external stakeholders, which is why it is important to cover all of the topics mentioned in the reporting.

Comment - Yes, on all of the topics.

It remains important to continue to update public communication as the project implementation and roll-out of the digital component/solution/tool progresses. A plan for further reporting and transparency should be developed, including how the information should reach the intended audience, which should include potentially impacted rights holders. Plan communication to rights holders so that the information is accessible to the various groups impacted by the project or component/solution/tool. 

Comment - Yes, on some of the mentioned topics.

It is positive that some reporting has taken place in order to ensure greater transparency, which improves accountability in relation to the project in general, and the digital component/solution/tool in particular. However, it is important that everything from identified impacts to the effectiveness of mitigation measures is reported in order to increase accountability. 

Comment - Not yet. It is underway.

It is important that stakeholders, particularly vulnerable or marginalized rights holders, receive information about the project implementation so that they are able to assess whether they agree with the impact analysis and whether appropriate actions have been taken. 

Potentially impacted human rights and principles - No, there are no such plans.

Right to meaningful participation and right to access to information

Recommendation - No, there are no such plans.

Work with your partners and develop a transparency and communication plan around the impacts identified, engagement with stakeholders and planned preventive and/or mitigating actions. 

Comment - No, there are no such plans.

Transparency is an important aspect of a rights-based approach to human rights impact assessments. 

Potentially impacted human rights and principles - I don't know

Right to meaningful participation and right to access to information

Recommendation - I don't know

Consult with the Project Executing Agency (PEA), its software developer, the FC consultant or others involved in the development and use of the digital component, solution or tool and see whether any public reporting has occurred. 

Human rights principles should guide development cooperation and programming. Among other things, this means that it is important to consider the principle of participation and inclusion in your project or in relation to the use of a digital solution or tool. Transparency and reporting are essential in order to fulfill the principle, as they can also strengthen rights holders’ capacity to claim their rights. For more on a human rights-based approach, see: 

What has not been reported on?

Consider which of the topics listed have not been reported on in any shape or form. Reporting can also take the form of direct communication to all impacted individuals, which can be a simple effort if only a small group of people is concerned.

Recommendation - Potential impacts

Draw up a plan on how to increase the transparency efforts in relation to the identified potential impacts. 

Recommendation - Stakeholder engagement

Draw up a plan on how to increase the transparency efforts in relation to stakeholder engagement. 

Recommendation - Mitigation measures

Draw up a plan on how to increase the transparency efforts in relation to mitigation measures. If your project contains a do-no-harm (KFG) matrix, include the mitigation measures regarding the digital component, solution/s or tool/s in this matrix.  

Is there a mechanism in place to capture feedback, complaints or grievances by users and non-users of the digital solution/s or tool/s? 

It is important to have a mechanism for individuals (both users and non-users) to submit complaints or concerns about the digital component, solution/s or tool/s. This can take the form of telephone hotlines, SMS or chat services, other crowdsourcing tools, basic email accounts, or physical mailboxes, among other things. In addition, projects need to appoint a data protection officer. Such mechanisms allow potential human rights risks and impacts to be detected early and help identify those whose rights have been adversely affected so that remedy can be provided. See resources and Glossary (see footer) for further information on grievance mechanisms.
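
As a purely illustrative sketch of what a channel-agnostic intake record for such a mechanism could look like, the structure below accepts complaints from users and non-users alike, supports anonymity, and stores contact details only with consent. The schema and all field names are our assumptions, not a prescribed design.

```typescript
// Hypothetical intake record for a multi-channel grievance mechanism.
// Schema and names are illustrative assumptions only.

type Channel = "hotline" | "sms" | "chat" | "email" | "mailbox" | "web_form";

interface GrievanceRecord {
  id: string;             // random identifier, not derived from the person
  receivedAt: string;     // ISO 8601 timestamp
  channel: Channel;
  isUser: boolean | null; // null = unknown or not asked
  anonymous: boolean;     // if true, no contact details are kept
  contact?: string;       // only stored with explicit consent
  summary: string;        // free-text description of the concern
  status: "received" | "under_review" | "resolved" | "referred";
}

// Intake helper: drops contact details for anonymous submissions, so
// anonymity is enforced by construction rather than by policy alone.
function intake(
  raw: Omit<GrievanceRecord, "id" | "receivedAt" | "status">
): GrievanceRecord {
  return {
    ...raw,
    contact: raw.anonymous ? undefined : raw.contact,
    id: crypto.randomUUID(), // available in modern browsers and Node.js
    receivedAt: new Date().toISOString(),
    status: "received",
  };
}
```

Whatever the concrete design, such a mechanism should be checked against the effectiveness criteria of the UN Guiding Principles on Business and Human Rights referenced below.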

Comment - Yes, for users and non-users

It is an essential aspect of respecting human rights to have a mechanism in place that can address grievances. Review the effectiveness criteria from the UN Guiding Principles on Business and Human Rights to see whether the existing grievance mechanism can be improved (see further resources). 

Potentially impacted human rights and principles - Yes, for users only

Right to an effective remedy and right to meaningful participation

Recommendation - Yes, for users only

Review the feedback/grievance mechanism and the communication around it, ensuring that potentially impacted non-users can also access it and are made aware of it. 

Comment - Yes, for users only

It is important that non-users are also able to raise their complaints. This can for example be the case when those who have not registered in an e-registration project are not heard, while it may be precisely their concerns that matter most for ensuring that potential human rights impacts are avoided or addressed. 

Potentially impacted human rights and principles - Not yet. It is underway.

Right to an effective remedy and right to meaningful participation

Recommendation - Not yet. It is underway.

Structure a plan for when the feedback/grievance mechanism will be in place and ensure that all relevant rights holders including vulnerable/marginalized groups are able to access the mechanism and that they are made aware of its existence once it is in place. 

Potentially impacted human rights and principles - No, there are no such plans.

Right to an effective remedy and right to meaningful participation

Recommendation - No, there are no such plans.

Work with partners to develop a mechanism that is aligned with the effectiveness criteria outlined in the UN Guiding Principles on Business and Human Rights (see further resources). 

Comment - No, there are no such plans.

It is an essential aspect of respecting human rights to have a mechanism in place that can address grievances and concerns. 

Recommendation - I don't know

Consult with the Project Executing Agency (PEA), its software developer, the FC consultant or others involved in the development and use of the digital solution or tool and see whether any feedback/grievance mechanism exists, whether it is used, and by whom. 

Story: Empowering Women in Country X – A Journey of Digital Innovation and Public Accountability

Introduction

In the country of X, women’s economic empowerment was more than a goal; it was a necessity. For years, women in remote villages, ethnic minorities, and socially marginalized groups struggled to access financial services. To address this, an international financial cooperation program introduced innovative digital tools designed for mobile money and savings accounts. The program also initiated public discussions about registries for mobile collaterals and created a digital feedback mechanism allowing anonymous complaints to local authorities. Despite facing significant corruption challenges, the program ultimately succeeded through public accountability, ensuring equal access to justice and economic opportunities for all women.

In the remote village of Iluzia lived 32-year-old Kateryna, a mother of three and a member of the local ethnic minority. For Kateryna and many women like her, financial independence seemed like a distant dream. Access to banking services was nearly impossible, and the local economy was dominated by informal and often exploitative practices.

The international program, led by Sofia, a passionate advocate for women’s rights, introduced mobile money and digital savings accounts specifically designed to empower women. The program also implemented a digital feedback mechanism that allowed users to file anonymous complaints, ensuring their voices could be heard without fear of retribution.

Kateryna was among the first to attend the workshops organized by Sofia’s team. The workshops provided training on using mobile money, understanding savings accounts, and how to leverage these tools for small businesses. For the first time, Kateryna felt hopeful about her financial future.

The Corruption Challenge

Despite the program’s initial success, it soon became evident that local NGOs, in collaboration with some mayors, were corruptly diverting funds and resources meant for the women. Complaints filed through the digital feedback mechanism revealed instances of bribes and favoritism. These corrupt practices threatened to undermine the entire program.

Kateryna, who had begun to see tangible benefits from the program, was disheartened but determined. She and other women from her village documented the corruption and reached out to Sofia’s team through the digital feedback mechanism.

Sofia, aware that the courts in X were compromised and ineffective in dealing with corruption, turned to the power of public engagement and transparency. The team launched a campaign to bring the issue to the public eye, leveraging digital tools to share information widely. They organized public forums and engaged local media to highlight the corruption issues.

Civil society organizations, women’s groups, and local activists were brought into the fold. A coalition was formed to advocate for transparency and accountability. The digital tools, especially the feedback mechanism, were publicized extensively, encouraging more women to report any corruption they encountered.

The coalition used social media, community radio, and local newspapers to spread the word about the corrupt practices and the importance of reporting them. The widespread publicity made it increasingly difficult for corrupt officials to operate without scrutiny.

The Power of Public Accountability, Rebuilding Trust and Expanding the Program

As more women like Kateryna came forward with their stories, the pressure on corrupt officials mounted. The public process of stakeholder engagement and the transparency of the digital tools created a groundswell of support for the program’s goals. The corrupt practices were exposed, and those involved were shamed and isolated.

Local authorities, feeling the heat of public opinion, began to distance themselves from the corrupt practices. They started cooperating with the oversight committee established by Sofia’s team and civil society organizations. Regular public audits and community meetings were held to ensure the transparent use of funds.

With the corrupt elements being held accountable by public pressure, Sofia’s team redoubled their efforts to build trust with the community. They held public forums to discuss the changes and ensured continuous engagement with all stakeholders. The digital tools were updated to include even more robust features for transparency, such as real-time tracking of funds and projects.

Kateryna, who had been at the forefront of advocating for justice, was invited to be part of the oversight committee. She used her experience and newfound confidence to ensure the program remained fair and accessible.

The reformed program flourished. Women from all backgrounds, including those in remote areas and from ethnic minorities, were able to access financial services securely. The mobile money and savings accounts allowed them to save, invest in small businesses, and gain financial independence.

The digital feedback mechanism continued to play a crucial role, providing a platform for women to voice concerns and suggest improvements. Regular audits and transparent processes ensured that the funds were used effectively and reached those in need.

Conclusion

The story of Country X is a testament to the power of combining digital innovation with public accountability and civil society engagement. Through the persistent efforts of dedicated individuals like Kateryna and Sofia, the program overcame significant corruption challenges and succeeded in empowering women across the country. Today, women in Country X not only have access to financial services but also have a voice in their economic future, proving that with the right tools and a commitment to transparency and justice, meaningful change is possible.

Thanks for participating

This is your results page.

Below you will find an overview of your project’s digital solutions and/or tools based on the answers you provided, as well as a compilation of the recommendations given throughout the assessment. You can also download a template of a human rights action plan, populated with the same recommendations, to help you devise next steps for addressing the potential human rights impacts identified in this Check.

As you may notice, the recommendations are addressed to whoever is developing or using the digital solutions or tools. If you have used the Digital Rights Check to consider the risks of digital solutions or tools that you are using or planning to use yourself, you can act on the recommendations immediately.

If you have used the Digital Rights Check to assess the development and/or use of digital solutions or tools by a Project Partner, e.g. a Project Executing Agency, a Project Implementation Unit or a Service Provider / Consultant working on their behalf, you may use the recommendations as a guardrail when deciding whether to go ahead with the project and/or how to exercise leverage over the partner or service provider to address identified risks.

If you are in the screening and due diligence phase, you may for example use these recommendations as orientation: to decide on a project risk categorization, to scope your own due diligence, to complement the standard questions in the due diligence questionnaire for project partners, and/or to prioritize these issues in the feasibility study, in the project appraisal and appraisal report, in the project’s results matrix/logframe (and, if applicable, in its do-no-harm matrix), in the monitoring/reporting plan (if applicable, in the Remote Management, Monitoring and Verification (RMMV) concept/system) and/or in the Environmental and Social Action Plan.

If you are in the implementation and monitoring phase, you may for example draw on these recommendations to shed light on additional aspects during progress reviews (and final inspections): when assessing the quality of the implementation of the Environmental and Social Action Plan, when collecting data from project partners, stakeholders and rights holders through the monitoring questionnaire, and/or when engaging these actors during site visits and remote site visits.

Please bear in mind that the assessment of your project’s risk profile and the recommendations given are based on your answers; they may be inaccurate if your project has specific features affecting human rights risks that were not captured in your assessment. You are encouraged to treat these results as preliminary guidance and to further consider the potential human rights risks, and the appropriate action to take, in light of the specific circumstances of your project.

Disclaimer

This Digital Rights Check does not provide legal or other advice, nor is it intended to provide or replace the user’s own legal advice or further assessments. Its sole objective is to point out certain problems, challenges and human rights risks which, in our experience, frequently arise in the contexts concerned, and to call for appropriate measures in line with the human-rights-based approach. The Digital Rights Check can only offer assistance to the user in relation to this objective. Users must take the appropriate measures based on their own assessment and are solely responsible for their use and implementation.

In any case, users should seek their own (legal) advice where they consider it necessary and should not rely solely on the information from the Digital Rights Check or use the knowledge gained from it as the sole basis for decision-making. Any use of the Digital Rights Check is voluntary and solely at the user’s own risk. KfW, GIZ and DIHR accept no liability whatsoever for the Digital Rights Check, for any information or findings it provides, or for the correctness and completeness of any external websites referred to in the Digital Rights Check.

Given the general and non-comprehensive nature of the Digital Rights Check, any findings from its use must always be adapted to the specific needs and the specific design of the project in the respective context.

KfW, GIZ and DIHR cannot be held liable in connection with the use of the Digital Rights Check, the findings and information contained therein, or the references to external sources. In particular, use of the Digital Rights Check does not guarantee the success of the respective project or the fulfilment of contractual obligations.