Born out of a research project between the Los Angeles Police Department and UCLA (the University of California, Los Angeles), PredPol bills itself as the market leader in predictive policing. It uses a machine-learning algorithm that draws on continually updated historical datasets to “predict critical events and gain actionable insights” for police. The attractions of predictive policing, and similar tools powered by artificial intelligence (AI), to law enforcement authorities are obvious: policing resources can be deployed more efficiently, and particular types of crime and criminal hotspots can be identified and blitzed, while police chiefs are better equipped to spot longer-term trends.
There’s comfort in cold, hard data too. Yet algorithmic policing, in use across the United States since the early 2010s, has long proved highly contentious. Programmer bias and pre-existing discrimination can become self-perpetuating: intensified police patrols in neighbourhoods flagged as crime-ridden produce higher arrest rates, which in turn justify yet more patrols. Those born into such areas are likely to suffer stigmatisation, and often worse, whether or not they have ever broken the law.
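The feedback loop described above can be illustrated with a deliberately simple toy model. This is not PredPol’s actual algorithm; it is a hypothetical sketch in which patrols are allocated in proportion to historical arrest counts, and every name and parameter in it is invented for illustration:

```python
import random

def simulate(steps=20, detection_rate=0.5, total_patrols=10, seed=0):
    """Toy feedback-loop model: two neighbourhoods with *identical*
    underlying crime rates, but neighbourhood 0 starts with a few
    extra recorded arrests (biased historical data, not more crime).
    """
    rng = random.Random(seed)
    arrests = [5, 1]  # the skewed historical record the allocator sees
    for _ in range(steps):
        total = sum(arrests)
        # Patrols are assigned proportionally to past recorded arrests.
        patrols = [round(total_patrols * a / total) for a in arrests]
        for i in (0, 1):
            # More patrols mean more *observed* crime, even though the
            # true rate (detection_rate) is the same in both places.
            for _ in range(patrols[i]):
                if rng.random() < detection_rate:
                    arrests[i] += 1
    return arrests

recorded = simulate()
```

Despite identical underlying crime rates, the neighbourhood that began with more recorded arrests ends the run with far more again, because the initial skew in the data kept attracting patrols.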