Imagine being locked up for 10 months because an AI judged your face a 55% match to a criminal's. That is the nightmare Russian scientist Alexander Tsvetkov lived through. In February 2023, a facial recognition system wrongly matched his face to a sketch from a 2002 case, leading to his wrongful imprisonment despite alibi evidence and expert testimony.
This chilling case unfolded in Russia, where law enforcement relied on flawed AI tech for suspect identification.
"The AI system produced a false positive match with insufficient accuracy, leading to wrongful detention," reported the OECD AI Incidents database.
Public reaction has been one of outrage, with many on social media calling for stricter oversight of AI in criminal justice, fearing anyone could be next.
What if this technology keeps making mistakes? Could AI errors strip away our freedom without proper checks? This case is a wake-up call: human judgment must always have the final say over machines. Let's talk about how we can prevent such injustices in the future.
Sources: OECD AI Incidents