
Breaking the bias - How to build fair and inclusive AI
Can we prevent AI from unintentionally discriminating based on human biases?
Artificial Intelligence (AI) already has a significant impact on society, and this impact will only increase in the future. AI is used extensively in fields such as chatbots, hiring processes, healthcare, personalised recommendations and facial recognition, and it has made our everyday lives more efficient and precise. However, AI has a problematic weakness: data bias. In this article, we will explore what data bias is, what consequences it can have, and what strategies can be used to mitigate it.
How does AI work, and why is it inherently biased?
AI is an extraordinary tool for scanning enormous amounts of data and drawing conclusions from it in a short time. AI systems are trained based on datasets, and the quality of the data directly determines the accuracy of the AI tool.
Because of this, AI tools are never better than the data they were trained on. Depending on who decides what data to use, how that data was collected and where it comes from, AI will have built-in biases. Here are some common types of data bias:
Historical bias
This occurs when historical inequalities and prejudices are reflected in the data. For example, basing an AI hiring tool on previous hiring data will favour candidates similar to those previously hired. This means, for example, that AI hiring tools used in male-dominated businesses will continue to favour men, ultimately working against inclusivity and diversity.
Sampling bias
This bias arises when the chosen data covers only a small and unrepresentative part of the population. Facial recognition systems are one example: initially, such systems were predominantly trained on images of light-skinned people, so they recognised darker-skinned faces far less reliably. A simple check for this kind of imbalance is sketched at the end of this section.
Label bias
Before training an AI, the dataset often needs to be labelled for the system to understand the data. Labelling data is a manual, time-consuming and costly process. Therefore, labelled datasets are often smaller and lack diversity compared to reality. Bias is built into the dataset depending on what data is chosen and who labels it.
Algorithmic bias
This type of bias stems from the design of the AI algorithm itself, causing it to favour specific outcomes. Favouring certain outcomes is not wrong in itself, but when those preferences go unexamined, the algorithm can end up presenting a distorted picture of reality.
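To make this more concrete, here is a minimal sketch of how a team might screen a training set for sampling bias before any model is trained. The dataset, the skin_tone column and the reference population shares are all hypothetical and invented purely for illustration; a real check would use the actual demographics of the population the system is meant to serve.

```python
import pandas as pd

# Hypothetical training data for a facial recognition model.
# The column name and the numbers below are illustrative only.
train = pd.DataFrame({
    "skin_tone": ["light"] * 80 + ["medium"] * 15 + ["dark"] * 5,
})

# Rough reference shares for the population the system will actually serve.
population_share = {"light": 0.45, "medium": 0.30, "dark": 0.25}

# Compare each group's share of the training data with the reference share.
train_share = train["skin_tone"].value_counts(normalize=True)
for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
    print(f"{group:>6}: train {observed:.0%} vs population {expected:.0%} -> {flag}")
```

A check like this is deliberately simplistic, but it illustrates the general idea: compare who is in the data with who the system is for, and flag the gaps early.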

What consequences can AI bias have?
AI isn't a perfectly objective source of truth; it is a reflection of human bias. AI bias not only makes AI tools less trustworthy, it can also have negative consequences for individuals, especially those from marginalised groups. In other words, AI bias is an issue of inequality, and that is why it is such an important and urgent issue to solve.
Some examples of how AI bias negatively affects marginalised groups:
- Female healthcare
- Economic inequality
- Facial recognition
- Law enforcement
- Shadowbanning in social media
AI is widely used in healthcare and is often perceived to be objective. However, the data these tools rely on comes from historical health records, treatment outcomes and patient demographics, which can carry weaknesses such as underrepresentation of certain patient groups or gender biases. This can lead to misdiagnoses and incorrect treatments, with potentially catastrophic consequences.
For example, heart attacks are frequently misdiagnosed in women. Despite this, prediction models for cardiovascular disease are often trained on predominantly male datasets. Since cardiovascular disease manifests differently in men and women, algorithms trained mainly on male data are less accurate in diagnosing women. As a consequence, women do not receive the care they need.
Source: Addressing bias in big data and AI for health care: A call for open science

When AI systems are used for financial services, their bias can widen the wealth gap. Biased algorithms trained on historical lending data and used in, for example, credit scoring and mortgage applications favour individuals from affluent neighbourhoods and disadvantage people from marginalised communities. This bias restricts access to credit and loans, making it far harder for people from marginalised groups to build wealth and break cycles of poverty.
Economic inequalities are also amplified when AI systems are used in recruitment. A University of Washington study of AI hiring systems showed that resumes with traditionally white-sounding names were favoured 85% of the time over those with Black-sounding names. The same trend was seen for gender: male names were preferred 52% of the time, while female names were favoured only 9% of the time. Bias in AI hiring systems creates unequal access to employment, ultimately deepening economic inequalities.
Source: University of Washington

AI is widely used for facial recognition. However, these systems have largely been trained on images of white men. Studies show that facial recognition systems are less accurate for female and darker-skinned faces than for white male faces, underscoring the need for caution.
AI bias in facial recognition can have serious consequences when individuals with darker skin tones are misidentified or not recognised at all; examples include missed flights and false arrests.
Source: The New York Times

Predictive policing uses AI to analyse large sets of historical crime data in order to predict criminal activity. It can make policing decisions easier and more efficient, for example when deciding which areas should receive more police resources or identifying who is considered more likely to commit crimes.
However, because predictive policing is based on historical data, there is a high risk of reproducing prejudice and over-policing marginalised areas. This disproportionately harms non-white individuals and erodes trust in law enforcement in the affected communities.
Predictive policing is already in use in the police departments of some larger US cities.
Source: Brennan Center for Justice

Shadowbanning is when social media platforms secretly limit the visibility of a post or account. Unlike a regular ban, which blocks content and informs the user, shadowbanning reduces reach without the user's knowledge.
Large social media companies utilise AI tools to scan content for violent and sexual material. However, these AI tools have been shown to label female bodies as more sexually suggestive than male bodies, leading to the blocking of content featuring female bodies.
An analysis performed by The Guardian on hundreds of photos of men and women in underwear, exercising, and undergoing medical tests, revealed that AI tags images of women in everyday situations as more sexually suggestive than those of men. Consequently, these AI algorithms suppress the reach of many images featuring women’s bodies, negatively impacting female-led businesses and exacerbating societal disparities.
Source: The Guardian

How can we make AI less biased?
Now that we recognise that AI is not objective, but rather a mirror reflecting our societal biases, it’s clear we must rethink how we manage AI data. Only then can it truly contribute to meaningful and positive societal progress. Below are some methods and strategies to tackle AI bias.
Diversify your data sources
Data used for training AI models must be diverse and represent groups from all parts of society.
Perform continuous audits, monitoring and updating of AI systems
This is essential in order to spot data bias and fix it before it degrades the quality and usefulness of the AI and harms individuals from marginalised groups.
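As a rough illustration of what such an audit could look like, the sketch below computes per-group selection rates from a hypothetical log of model decisions, a simple demographic parity check. The group labels, the decisions and the 80% rule-of-thumb threshold are assumptions made for the example, not a complete audit methodology.

```python
import pandas as pd

# Hypothetical audit log: one row per applicant, with the applicant's group
# and the model's decision (1 = shortlisted, 0 = rejected). Invented data.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0,   1,   0],
})

# Selection rate per group (demographic parity check).
rates = decisions.groupby("group")["selected"].mean()
print(rates)

# Simple disparity measure: each group's rate relative to the best-treated group.
ratio = rates / rates.max()
for group, r in ratio.items():
    status = "review" if r < 0.8 else "ok"  # 80% rule of thumb, illustrative only
    print(f"group {group}: selection-rate ratio {r:.2f} -> {status}")
```

Run regularly on fresh decisions, even a basic report like this makes drift towards unequal treatment visible long before it causes real harm.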
Diversify the development team and engage in interdisciplinary collaborations
Bringing different perspectives into the development and maintenance of AI is essential in the fight against data bias. A homogeneous team produces homogeneous thinking.
Transparency and understandability
Develop and use AI systems that are transparent and explainable by design. This allows users to understand how the system reaches its decisions, which in turn helps identify potential biases.
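One way to approach this in practice, sketched below under the assumption of a scikit-learn style model and a small synthetic dataset, is to inspect which input features actually drive the model's predictions, for example with permutation importance. The feature names and the data are invented for illustration, and this is only one of many explainability techniques.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two legitimate features and one "proxy" feature that,
# in this invented example, correlates with a protected attribute.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# How much does shuffling each feature hurt the model's accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["experience", "education", "postcode_proxy"],
                            result.importances_mean):
    print(f"{name:>15}: {importance:.3f}")
```

If a feature that acts as a proxy for a protected attribute turns out to carry a large share of the predictive weight, that is a concrete signal to investigate.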
Ethical guidelines
When developing AI models, it is essential to follow ethical guidelines, as this helps create inclusive and accurate models.
Human oversight
Never leave your AI systems unsupervised. Keeping an eye on your AI ensures that it remains aligned with ethical standards and helps detect and correct potential biases and errors.
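A lightweight way to keep a human in the loop, sketched here with an assumed probability output from a classifier and an arbitrary confidence threshold, is to route low-confidence predictions to manual review instead of acting on them automatically.

```python
import numpy as np

def route_predictions(probabilities: np.ndarray, threshold: float = 0.8):
    """Split model outputs into automatic decisions and cases for human review.

    `probabilities` is assumed to be the model's probability for the positive
    class; the 0.8 confidence threshold is an arbitrary, illustrative choice.
    """
    confidence = np.maximum(probabilities, 1 - probabilities)
    needs_review = confidence < threshold
    decisions = probabilities >= 0.5
    return decisions, needs_review

# Example: three confident predictions and one borderline case sent to a human.
probs = np.array([0.95, 0.10, 0.55, 0.88])
decisions, needs_review = route_predictions(probs)
print("automatic decisions:", decisions)
print("send to human review:", needs_review)
```

Which threshold is appropriate, and who reviews the flagged cases, are design decisions in their own right; the point is simply that the system never acts entirely on its own.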

What does AFRY do to limit AI bias in our solutions?
To build a future that’s inclusive, equal, and sustainable, we need to be aware of AI bias and tackle it head-on. At AFRY, we’re committed to educating our employees on this important issue.
Our teams help shape tomorrow’s products and services, both in AI and beyond. By embracing diversity and inclusivity in everything we do, we’re taking meaningful steps toward Making Future together.