
AI Bias: When Models Learn the Wrong Lessons

If your training data reflects historical inequalities, your model will too. AI bias isn't a bug; it's a feature of data drawn from an unequal world.

AI bias occurs when a model produces systematically unfair or inaccurate results for certain groups.

**Famous examples:**

- Amazon's experimental hiring tool downgraded CVs containing the word 'women's' (as in 'women's chess club captain') because it was trained on a decade of male-dominated hiring data

- Facial recognition systems with far higher error rates for darker skin tones, because they were trained mostly on images of lighter-skinned faces

- Predictive policing algorithms over-targeting minority neighbourhoods: historical arrest data concentrates there, so the model sends more patrols back, producing more arrests and a self-reinforcing feedback loop

**Sources of bias:**

- **Training data bias**: the data reflects historical inequalities, so the model learns them (see the sketch after this list)

- **Label bias**: human labellers bring their own biases

- **Measurement bias**: data is collected with different quality, coverage, or accuracy for different groups
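
To make the first source concrete, here is a minimal sketch (hypothetical numbers, not Amazon's actual system): two groups with identical skill distributions, but group B was hired less often in the past. A model trained on that history reproduces exactly that gap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Two groups with identical skill distributions.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)

# Historical labels: at the same skill level, group B was hired
# less often. The inequality lives in the labels, not the skills.
p_hired = 1 / (1 + np.exp(-(skill - 1.5 * group)))
hired = rng.random(n) < p_hired

# Train on the biased history.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Same skill, different group: the model reproduces the gap.
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1])  # roughly [0.50, 0.18]
```

The model never sees anything "wrong"; it simply learns the pattern in its labels, which is why fixing bias after training is so hard.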

**Mitigations:** diverse and representative training data, fairness metrics (sketched below), regular audits, and diverse teams building the models.
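
Fairness metrics can be as simple as comparing rates across groups. A minimal sketch of two common ones, demographic parity and equal opportunity (the function names are illustrative, not from any particular library):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Toy audit of hypothetical model outputs for two groups:
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))         # 0.5
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.67
```

A gap of 0 means the model treats the groups identically on that metric; in practice teams set a tolerance threshold and investigate anything above it.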

