Opinion | Can AI predict if you are about to kill your wife?

In the world depicted in Steven Spielberg’s 2002 film Minority Report, policemen apprehend criminals before they commit crimes, based on predictions made by psychics called “precogs”.

Minority Report was set in the year 2054. But we may be getting there faster than we thought. New Scientist reveals that police in the UK want to predict serious violent crime using artificial intelligence (AI). Individuals flagged by the system will be offered interventions, such as counselling, to avert potential criminal behaviour.

The National Data Analytics Solution (NDAS) will use a combination of AI and statistics to try to assess the risk of someone committing a gun or knife crime. A prototype is expected by March 2019.

In this project, the first of its kind anywhere in the world, the team in charge gathered more than a terabyte of data from local and national police databases, including records of people being stopped and searched and logs of crimes committed. Around five million individuals, or about 7.5% of the UK’s population, were identifiable in the data.

From this data, the software identified nearly 1,400 indicators that could help predict crime, around 30 of which were particularly powerful. These included the number of crimes an individual had committed and the number of crimes committed by people in that individual’s social group.

NDAS will use these indicators to predict which individuals may be on a path towards violence resembling that seen in past offenders, but who have not yet acted. Such people will be assigned a risk score indicating the likelihood of future offences. There will be no pre-emptive arrests; instead, support from local health and social workers will be offered.
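New Scientist does not describe the internals of the NDAS model, so the sketch below is only a guess at how indicator ranking and risk scoring of this kind might work. The choice of a random-forest classifier, the synthetic data and feature names such as prior_offences are assumptions made purely for illustration; none of it is the actual system.

```python
# A minimal, hypothetical sketch of indicator ranking and risk scoring.
# The data, feature names and threshold are invented for illustration;
# they are not the actual NDAS indicators or model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical indicators per individual (the real system reportedly
# draws on roughly 1,400 candidate indicators).
features = ["prior_offences", "offences_in_social_group",
            "stop_and_search_count", "age_at_first_record"]

# Synthetic training data: past records with known outcomes.
X = rng.poisson(lam=[2, 3, 1, 25], size=(5000, len(features))).astype(float)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank indicators by how much predictive power the model assigns them.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda p: -p[1]):
    print(f"{name}: {importance:.2f}")

# Score new individuals: the estimated probability of a future offence
# becomes the "risk score"; those above a threshold would be flagged for
# a referral to health or social workers, not for arrest.
new_people = rng.poisson(lam=[2, 3, 1, 25],
                         size=(10, len(features))).astype(float)
risk_scores = model.predict_proba(new_people)[:, 1]
flagged = np.where(risk_scores > 0.7)[0]
print("Flagged for intervention:", flagged, risk_scores[flagged])
```

The one design point the sketch tries to respect is that the output is only a probability estimate, and that crossing the threshold triggers an offer of support, never an arrest.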

The effort is obviously well-intentioned, but doesn’t it raise some serious ethical issues? There is the matter of privacy: barging in on someone to counsel him because an algorithm has predicted that he could be thinking of committing a crime. After all, this is only statistics, and statistical forecasts do not straightforwardly apply at the individual level. And how can you ever check whether a prediction was accurate? Once the intervention has been made, the prediction becomes unfalsifiable: if the counselled person later commits a crime, the system was right; if he does not, the intervention is credited with having worked. Either way, the prediction can never be shown to be wrong.

The predictions may hold at a community level, but here there is a strong chance of a feedback loop operating that will keep biasing the algorithm and the system built around it.

Every state or city has areas that are supposed to be more crime-prone. These are usually poorer neighbourhoods or, in the West, localities inhabited by racial minorities. So more policemen are deployed there, and where there are more policemen, there are more arrests. These arrests are recorded in the database, whether the people arrested are guilty or innocent. Thus, the algorithm will keep predicting more crime in these same areas, which may have little to do with the actual crime rate in the region.
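The dynamic is easy to reproduce. Here is a toy simulation, with numbers invented purely for illustration and no connection to any real deployment, in which patrols are allocated in proportion to recorded arrests. Even though the two areas have identical underlying crime rates, the database keeps insisting that the more heavily policed area is twice as criminal, and the allocation never corrects itself.

```python
# Toy simulation of the predictive-policing feedback loop.
# All numbers are invented; the point is the dynamic, not the magnitudes.
import numpy as np

rng = np.random.default_rng(1)

true_crime_rate = np.array([0.10, 0.10])   # identical underlying crime rates
recorded_arrests = np.array([20.0, 10.0])  # area 0 starts with twice the recorded arrests

for year in range(10):
    # Patrols are allocated in proportion to past recorded arrests.
    patrol_share = recorded_arrests / recorded_arrests.sum()
    patrols = (1000 * patrol_share).astype(int)

    # Arrests scale with how many officers are looking, not with how much
    # crime actually occurs, so the record keeps pointing at area 0.
    new_arrests = rng.binomial(patrols, true_crime_rate)
    recorded_arrests += new_arrests

    print(f"year {year}: patrol share {patrol_share.round(2)}, "
          f"recorded arrests {recorded_arrests.astype(int)}")
```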

In fact, this is exactly what has been observed in the US, where several states, such as California and Florida, use software called PredPol to decide where to deploy police officers. More and more policemen are sent to the same well-policed neighbourhoods, while areas where actual crime rates are higher than expected are neglected.

The UK system may fall into the same trap and ultimately end up increasing social tension and deepening fissures in society, with little impact on the actual crime rate. The poor and minorities may feel more threatened. After all, policemen, now with the supposedly infallible AI guiding them, will be even more suspicious, and suspicion is different from vigilance.

As with everything to do with AI, an algorithm is only as good as the human inputs that went into building it. It can predict, but it has no sense of consequences. Human sensitivity is the most critical aspect of any AI design.

The UK police came up with the idea of NDAS because the government has cut police funding substantially over the last few years. So, the police have had to turn to computers rather than humans. Leaving people-facing functions vital to society at the mercy of algorithms does not seem to be a good idea at all in the long run.

The development was reported by livemint.com.

