There is one issue that has not received much attention in the CEE AI scene but that builders of AI products should be aware of: AI bias. AI bias is a phenomenon where AI systems systematically discriminate against certain individuals or groups. The topic is hotly debated in the US and the UK, so if you wish to expand into these markets, you should be keenly aware of (1) its causes and (2) how to ensure your solution meets the “Western” standards for screening against AI bias.
The stories of AI bias are usually very similar: an AI system is implemented to automate a decision traditionally made by humans. The system is touted as fair and objective because “computers cannot discriminate.” However, the system is trained on data filled with implicit (or sometimes very explicit) biases and then replicates these biases in its decision-making. In the UK, this pattern emerged when an algorithm used to determine A-level exam grades disproportionately downgraded students from disadvantaged backgrounds based on the previous results of the schools they attended, adversely affecting their chances of securing university admission.
So what is behind AI bias?
It depends. Several factors can contribute:
- Biased training data: The training data may already contain biases or reflect societal prejudices, which the model then perpetuates, as seen in the A-level exams example above. The limited diversity of the CEE market can be reflected in the datasets you use for training and amplify these biases, and data collected in a biased manner will likewise produce biased outcomes.
- Developer bias: AI systems can be built by programmers who inadvertently introduce their own biases into the system, or who use datasets that, for instance, reflect their own race or gender (see e.g. here an article on racial discrimination in face recognition technology). The programmers’ personal beliefs, values, or cultural biases can influence the design choices, feature selection, or algorithms used in the AI system, leading to biased outcomes.
- Algorithmic perpetuation: The algorithms used in AI systems can inadvertently perpetuate biases present in the data they are trained on. For example, if a hiring algorithm is trained on historical hiring data that reflects gender bias, it might learn to discriminate against certain genders when making predictions.
- Feedback loops: Biases can also be reinforced through feedback loops. If the system’s outputs are used to make decisions that in turn influence the training data or real-world outcomes, the biases present in the initial data can be amplified over time.
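To make the hiring example concrete, here is a minimal Python sketch using entirely synthetic, hypothetical data and a deliberately naive “model.” A system trained purely to imitate biased historical decisions ends up reproducing the bias, even though the candidates’ true qualifications are identically distributed across groups:

```python
import random

random.seed(0)

# Hypothetical historical hiring data: candidates from groups "A" and "B"
# are equally qualified, but past (human) decisions held group B to a
# higher bar. This is an illustration, not real data.
def make_history(n=10_000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()  # true qualification, same distribution for both groups
        # Biased historical decision: group B needed a higher skill to be hired.
        threshold = 0.5 if group == "A" else 0.7
        rows.append((group, skill, skill > threshold))
    return rows

history = make_history()

# A naive "model" that simply learns one hiring bar per group from past
# outcomes. Trained only to imitate history, it inherits the biased bars.
def learn_threshold(rows, group):
    hired_skills = [s for g, s, hired in rows if g == group and hired]
    return min(hired_skills)

model = {g: learn_threshold(history, g) for g in ("A", "B")}
print(model)  # group B's learned bar is noticeably higher than group A's
```

Nothing in the training step “knows” about the groups’ true qualifications; the disparity comes entirely from the labels, which is exactly why auditing the data, not just the code, matters.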
How can you ensure that your solution meets the standards?
- Use Diverse Datasets: Incorporate diverse and representative datasets from the markets where you plan to expand, ensuring that your AI system “understands” and accounts for the intricacies and nuances of different populations. For example, a dataset considered representative in the Czech Republic will almost certainly not pass as sufficiently racially diverse in the US.
- Identify Risks and Potential Damage: Thoroughly assess the risks associated with bias in your AI system and understand the potential harm it may cause to individuals or communities. Try to think about all “What could go wrong?” possibilities and design your quality testing around them.
- Build a Diverse Team: Foster a diverse team of experts working on your AI product to provide varied perspectives and mitigate bias during the development process. Diverse teams are much more likely to spot potential AI bias issues and might also help you discover unique business opportunities and use cases for your product.
- Be Transparent: Clearly communicate the capabilities and limitations of your AI model, including its accuracy, potential biases, and the safeguards necessary to interpret its results.
- Consider Algorithmic Audits: Conduct regular audits of your AI algorithms to ensure compliance, encourage investor confidence, and reinforce your commitment to fairness and accountability.
- Choose the Right AI Model: If you are working with third-party AI systems, select the appropriate AI model for your specific task, ensuring that it aligns with your ethical principles and desired outcomes. We are seeing the emergence of combining various third-party AI systems into a single solution to leverage the strengths of each system and work around their weaknesses.
- Synthetic Training Data: Consider using synthetic training data to augment your datasets, providing additional diversity and reducing bias.
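As a starting point for the audits mentioned above, here is a minimal, hypothetical sketch of one common fairness check: comparing selection rates across groups against the “four-fifths rule” used in US employment contexts (a group’s selection rate should be at least 80% of the highest group’s rate). The function names and the toy decision data are illustrative assumptions, not a standard API:

```python
# Hypothetical audit sketch: per-group selection rates and a
# four-fifths-rule check on a model's decisions.
def selection_rates(decisions):
    """decisions: list of (group, selected: bool) pairs."""
    totals, selected = {}, {}
    for group, sel in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(sel)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions):
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()) >= 0.8

# Toy data: group A selected 60% of the time, group B only 30%.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)

print(selection_rates(decisions))    # {'A': 0.6, 'B': 0.3}
print(passes_four_fifths(decisions)) # False: 0.3/0.6 = 0.5 < 0.8
```

A real audit would go further (statistical significance, error-rate parity, intersectional groups), but even a check this simple can flag problems before a regulator or journalist does.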
Why it matters: By meeting these standards, you build a solution that avoids significant reputational damage, including public backlash, negative media coverage, and loss of trust from users and customers over misuse of data. Compliance with ethical standards on AI bias also helps a company avoid fines, penalties, or even legal action.
How is AI bias treated in EU vs US regulation?
The European Union and the United States have taken divergent paths in their approaches to combating AI discrimination.
In the US, legislative efforts such as the Algorithmic Accountability Act and the AI in Government Act have aimed to enhance transparency and accountability in AI systems in specific industries, primarily financial services and recruiting. However, comprehensive federal laws specifically addressing AI bias are yet to be enacted (stay tuned).
Conversely, the EU has proposed the AI Act and the AI Liability Directive, which seek to regulate AI systems and establish clear rules on liability for them. Neither text has yet been adopted, and the AI Act in particular has drawn criticism from major players in the AI field (see e.g. Microsoft’s opinion).
Stay up to date by subscribing to our newsletter below. We will be covering the developments in both the US and the EU in upcoming articles.