Crafting ethical AI: Addressing bias and challenges

Did you know that 27.1% of AI practitioners and 32.5% of AI tools’ end users don’t specifically address artificial intelligence’s biases and challenges? The technology is helping to improve industries like healthcare, where rapidly evolving models are sharpening diagnoses.

However, this raises ethical concerns about the potential for AI systems to be biased, threaten human rights, contribute to climate change, and more. In our Generative AI 2024 report, we set out to understand how businesses address these ethical AI issues by surveying practitioners and end users.

With the global AI market forecast to reach US$1.8tn by 2030 and AI deeply intertwined with our lives, it’s vital to address potential issues. Ethical AI means developing and deploying systems that uphold accountability, transparency, fairness, and respect for human values.

[Bar graph: how practitioners and end users address AI bias and challenges]

Bias can occur throughout the various stages of the AI pipeline, and one of the primary sources is data collection. Outputs are more likely to be biased if the data used to train AI algorithms isn’t diverse or representative of minority groups; a quick representativeness check is sketched after the list below.

It’s also important to recognize other stages where bias can occur unconsciously, such as: 

  • Data labeling. Annotators can have different interpretations of the same labels.
  • Model training. The training data must be balanced and the model architecture capable of handling diverse inputs, or the outputs could be biased.
  • Model deployment. The AI systems must be monitored and tested for bias before deployment.
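
To make the data-collection point concrete, here is a minimal sketch of a representativeness check, assuming tabular training data loaded with pandas; the column names, 10% threshold, and sample data are all hypothetical, and a real audit would choose domain-appropriate groupings and cutoffs.

    import pandas as pd

    def representation_report(df: pd.DataFrame, column: str, min_share: float = 0.10) -> pd.DataFrame:
        """Report each group's share of the dataset and flag groups that
        fall below a chosen minimum share (10% here, an arbitrary cutoff)."""
        shares = df[column].value_counts(normalize=True).rename("share").to_frame()
        shares["under_represented"] = shares["share"] < min_share
        return shares

    # Hypothetical training set with a demographic attribute.
    train = pd.DataFrame({
        "ethnicity": ["A"] * 70 + ["B"] * 25 + ["C"] * 5,
        "label": [0, 1] * 50,
    })
    print(representation_report(train, "ethnicity"))
    # Group C makes up only 5% of rows, so it is flagged as under-represented.

Catching a gap like this before training is far cheaper than correcting a biased model after deployment.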

As we increasingly use AI in society, bias has already surfaced in real situations. In healthcare, for example, computer-aided diagnosis (CAD) systems have been shown to return lower-accuracy results for Black female patients than for white female patients.

Academic research on Midjourney found that, when prompted to depict people in specialized professions, the tool showed men as older and women as younger, reinforcing gender bias.

Some organizations in the criminal justice system use AI tools to predict areas where crime is likely to occur. Because these tools often rely solely on historical arrest data, they can reinforce existing patterns of racial profiling, leading to excessive targeting of minority communities.

We’ve seen how bias can exist in AI, but it isn’t the only challenge the technology faces. AI can potentially improve business efficiency, but several hurdles stand in the way of keeping the ethics of AI solutions a key focus.

1. Security

AI can be susceptible to hacking; the Cybersecurity and Infrastructure Security Agency (CISA) cites documented attacks that have hidden objects from security camera footage and caused autonomous vehicles to behave in unsafe ways.

2. Misinformation

Misinformation can sway public opinion and spread falsehoods as if they were true. Because it can cause severe reputational damage, it’s essential to curb the likelihood of AI tools spreading untruths by building proper safeguards into the technology’s development.

3. Job displacement

AI can automate various work activities, freeing up valuable worker time. However, this could lead to job losses, with lower-wage workers needing to upskill or change careers. Creating ethical AI also means making sure that tools complement jobs rather than replace them.

4. Intellectual property

OpenAI faced a lawsuit from multiple well-known writers who claimed its platform, ChatGPT, illegally used their copyrighted work. The suit argued that AI exploits intellectual property, which can leave authors unable to make a living from their writing.

5. Ethics and competition

Under constant pressure to innovate, companies may not take the time needed to ensure their AI systems are designed to be ethically sound. Additionally, strong security measures must be in place to protect businesses and users.

We wanted to know how practitioners and end users of AI tools address biases and challenges, since companies should understand the steps required when adopting this technology.

1. Regular audits and assessments

44.1% of practitioners and 31.1% of end users stated they address bias through regular audits and assessments. This often involves a comprehensive evaluation of an AI system’s algorithms, where the first step is understanding where bias is most likely to occur.

Following this, it’s vital to examine for unconscious bias, such as disparities in how AI systems handle age, ethnicity, gender, and other factors. Recognizing these issues allows businesses to create and implement strategies that minimize and remove biases for improved fairness; this could mean changing the training data for AI models or proposing new documentation.
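
As a rough sketch of what one step of such an audit could look like in code (the record format and the 5-percentage-point disparity threshold are assumptions for illustration, not an established standard), the snippet below compares a model’s accuracy across demographic groups and flags any group that trails the best-performing one:

    from collections import defaultdict

    def accuracy_by_group(records, max_gap=0.05):
        """Compute per-group accuracy and flag groups whose accuracy
        trails the best-performing group by more than `max_gap`."""
        correct, total = defaultdict(int), defaultdict(int)
        for group, y_true, y_pred in records:
            total[group] += 1
            correct[group] += int(y_true == y_pred)
        accuracy = {g: correct[g] / total[g] for g in total}
        best = max(accuracy.values())
        flagged = {g: (best - a) > max_gap for g, a in accuracy.items()}
        return accuracy, flagged

    # Hypothetical audit records: (demographic group, true label, model prediction).
    records = [("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
               ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1)]
    accuracy, flagged = accuracy_by_group(records)
    print(accuracy)  # {'group_a': 1.0, 'group_b': 0.666...}
    print(flagged)   # {'group_a': False, 'group_b': True}

In practice, teams would run checks like this on every model release and for every demographic attribute they track.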

According to UNESCO, ten core principles underpin a human-centered approach to ethical AI:

  1. Proportionality and do no harm. AI systems are to be used only when necessary, and risk assessments need to be done to avoid harmful outcomes from their use.
  2. Safety and security. Security and safety risks need to be avoided and addressed by AI actors.
  3. Right to privacy and data protection. Data protection frameworks need to be established alongside privacy.
  4. Multi-stakeholder and adaptive governance & collaboration. AI governance is essential; diverse stakeholders must participate, and companies must follow international law and national sovereignty regarding data use.
  5. Responsibility and accountability. Companies creating AI systems need to have mechanisms in place so these can be audited and traced.
  6. Transparency and explainability. AI systems need appropriate levels of explainability and transparency to ensure safety, security, and privacy.
  7. Human oversight and determination. AI systems can’t displace human accountability and responsibility.
  8. Sustainability. Assessments must be made to determine the impact AI systems have on sustainability. 
  9. Awareness and literacy. It’s vital to ensure an open and accessible education for the public about AI and data.
  10. Fairness and non-discrimination. To ensure AI can benefit all, fairness, social justice, and non-discrimination must be promoted.

2. Relying on AI tool providers

28.6% of end users and 22% of practitioners rely on AI tool providers to follow appropriate ethical guidelines, so it’s essential that providers embed ethical AI into all stages of the development and deployment of their technology.

3. Don’t specifically address

A substantial percentage of end users and practitioners, 32.5% and 27.1% respectively, said they don’t specifically address biases when using AI tools. With this technology now embedded across industries, leaving bias and the challenges above unaddressed risks letting those problems compound.

In addition to data bias, privacy is a top concern; smart home software, for example, must have robust privacy settings to prevent hacking or tampering. Similarly, AI systems often make decisions with profound consequences: autonomous vehicles must keep everyone on the road safe, so ensuring that AI doesn’t make mistakes is essential.

When creating AI tools, it’s important to focus on all aspects, and ethical AI is perhaps the most vital, as it affects outputs and how people of different genders and ethnicities are treated in industries like healthcare and law.

Our Generative AI 2024 report offers a comprehensive overview of how practitioners and end users work with AI tools and what sentiment is like on the ground. Trust is fundamental to AI technology, so get your copy to learn how much confidence users currently have.