AI Bias & Discrimination

Written by Mauricio Preuss (CEO & Co-Founder) & Valentina Bravo (Managing Editor)

Reviewed by Aleksander Hougen (Chief Editor)

[Featured image: Are we all equal in the eyes of AI?]

In February 2025, Trevis Williams was driving from Connecticut to Brooklyn. At the exact same time, miles away in Manhattan’s Union Square, a man flashed a woman.

Two months later, police showed up and arrested Williams for that crime.

The NYPD’s facial recognition system had flagged him as a match. The fact that Williams was eight inches taller and 70 pounds heavier than the man the victim described? Irrelevant. The cell phone data proving he was nowhere near the scene? Ignored.

The NYPD’s algorithm said it was him. That was enough.

Williams is Black. The actual suspect was Black. Both men wore locs. Those were apparently the only similarities that mattered.

He spent two nights in jail. Prosecutors eventually dismissed the case after his public defenders proved the obvious: they had the wrong guy. But Williams’ application to become a correctional officer at Rikers Island got frozen anyway [1].

“I was so angry. I was stressed out,” Williams told Eyewitness News. “I hope people don’t have to sit in jail or prison for things that they didn’t do” [1].

When I first came across this story, I’ll admit I was angry. But I was also curious. How does something like this even happen? How does someone end up in jail when every piece of actual evidence says they weren’t there?

So down the rabbit hole I went. And what I found was sobering: Williams isn’t an anomaly.

He’s one example in a pattern that’s playing out across systems that touch your life every day. Algorithms are deciding who gets arrested, who gets hired, who qualifies for a loan, who receives proper medical care, and who gets admitted to college. And they’re getting these decisions wrong in ways that consistently hurt the same groups of people.

The deeper I dug, the more I realized this isn’t about a few bad algorithms that need fixing. The problem is baked into how these systems are designed, deployed, and trusted. And most people have no idea it’s happening to them.

So today, I’m taking you through what I learned. We’re going to look at how AI systems perpetuate inequality across criminal justice, employment, and healthcare. We’ll explore why “just fix the data” doesn’t solve the problem. We’ll examine what (if anything) regulators are doing about it. And I’ll give you concrete steps to protect yourself.

Because here’s what really got me: the algorithm was never neutral to begin with. We just pretended it was.

Why Algorithmic Discrimination Is Everywhere (And Getting Worse)

Here’s a fun statistic to kick things off: according to Stanford’s 2025 AI Index Report, AI-related incidents surged 56.4% in just one year. We’re talking 233 documented cases in 2024 alone, up from roughly 149 the year before [2].
