‘On death row’ and how technology can help, or not, to catch the “bad guys”
It is the summer of 1994: the police find three lifeless bodies in a house in a quiet Miami neighborhood. At the crime scene there is also a video camera containing a complete recording of the murder. With this footage, three weeks later they identify one of the alleged culprits: Pablo Ibar, a Spaniard who had previously been arrested for a weapons-related offense. With no evidence beyond that blurry recording, Pablo is tried and sentenced to death. Now his story comes to Movistar Series, told across the 25 years in which he has been trying to prove his innocence.
The case of Pablo Ibar received as much media attention in Florida as the Marqueses de Urquijo murders did in Spain. However, Pablo’s story did not reach our country until a few years ago, when Nacho Carretero, the journalist behind Fariña, became interested in the case and began an investigation that would take seven years.
Thanks to Nacho’s work, today we have the series ‘On death row’, four episodes complemented by five podcasts. A creation of Bambú Producciones and Movistar+, the series does not set out to prove whether or not Pablo is innocent, but to show how insufficient the evidence used to convict him was.
The only evidence is a blurry video recording
This lead brought the police to the Spaniard, who was arrested and, without further ado, sentenced to death row. At this point in the story we ask ourselves: can technology be trusted to incriminate a person? A blurry video recording was what condemned Pablo 25 years ago; today it is no longer the cameras but artificial intelligence that plays the role of policeman in catching criminals.
In recent years we have read headlines such as ‘A virtual assistant witnesses a murder’, but to what extent can this be real? The movie Minority Report warned us back in 2002: leaving everything in the hands of an algorithm is not advisable, especially when a person’s innocence is at stake. But why is that? According to consulting firms with expertise in this area, such as Oliver Wyman, using artificial intelligence to prevent financial crime is very effective. However, it is much harder to guarantee the same for violent crimes.
Other studies, such as the paper signed by several Boston University professors, ‘Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings’, show that machine learning systems carry sexist and social biases.
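The kind of bias that paper describes can be illustrated with a toy sketch. The vectors below are hypothetical and deliberately tiny (real embeddings such as word2vec are trained on large text corpora and have hundreds of dimensions); the code only shows the mechanics of the analogy test the authors used, ‘man is to programmer as woman is to X’, where a biased model answers ‘homemaker’.

```python
from math import sqrt

# Hypothetical toy word vectors. The gender bias here is planted on
# purpose, to mirror what the paper found in embeddings trained on
# real-world text.
vectors = {
    "man":        [1.0, 0.0, 0.1],
    "woman":      [0.0, 1.0, 0.1],
    "programmer": [1.0, 0.1, 0.9],
    "engineer":   [0.9, 0.2, 0.8],
    "homemaker":  [0.1, 1.0, 0.9],
    "nurse":      [0.2, 0.9, 0.8],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    # Return the nearest word, excluding the three query words themselves.
    candidates = (w for w in vectors if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("man", "programmer", "woman"))  # prints "homemaker"
```

The arithmetic is neutral; the biased answer comes entirely from the geometry of the vectors, which is the paper’s point: a model learns whatever associations its training data contains.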
Likewise, the mathematician Cathy O’Neil affirms that algorithms are “opinions embedded in mathematics”, so depending on who builds them, they will give one result or another. “We tend to think that algorithms are neutral, but they are not. The biases are structural and systemic; they have little to do with an individual decision”, comments Professor Virginia Eubanks, author of the book Automating Inequality, which examines how algorithms profile, police and punish the less wealthy social classes.
At this point, and with the series ‘On death row’ in mind, we ask ourselves: are technologies real allies in catching criminals? Or is something more than a blurry recording or an algorithm needed to condemn a person?