Crime and Punishment: An UNTIL Interview with Irakli Beridze and Odhran McCarthy on How the UN is Preparing for the Era of Smart Policing

Increasingly, decisions relating to crime and justice are being influenced or informed by machines and algorithms capable of immense statistical accuracy.

 

While the step-by-step instructions that computers follow to evaluate micro-data and carry out crime- and justice-related analysis - including for decisions affecting people - have become more sophisticated, dangers such as “bias” (defined broadly as outcomes that are systematically less favorable to individuals within a particular group when no relevant difference between groups justifies such outcomes) are a growing risk.

Increasingly, the United Nations is being asked by Member States to assist in the development of artificial intelligence (AI) and complex decision-making processes. The Centre for Artificial Intelligence and Robotics at the UN Interregional Crime and Justice Research Institute (UNICRI) is one of the entities contributing to this work. Irakli Beridze, Head of the Centre, and Odhran McCarthy, Senior Fellow, are at its helm, charting the course for UNICRI.

 

UNTIL:  The use of technologies like AI to enable peace and prevent crime is a rather new field. What would you say have been its signal achievements so far?

 

Beridze: Yes, it’s a relatively new field, and the fact that it’s rather new, and rapidly advancing, means that everyone is constantly playing catch-up. Under the broad umbrella of ‘Peace Tech’, though, one of our main focuses right now is understanding to what extent tools such as AI and robotics can be put to use to prevent crime and strengthen criminal justice systems - specifically law enforcement. What we’ve seen in our work is that, generally speaking, law enforcement is still only beginning to venture out into this field.

 

McCarthy: That being said, a number of good practices have already emerged, such as the use of machine learning to identify legally privileged information, or crime anticipation systems to support resource optimization. There are a lot of other interesting prototypes on the horizon. For instance, the use of automated database research for faster responses to requests for international mutual legal assistance; the automatic detection of online scammers and phishing; and crypto-based packet tracing tools enabling law enforcement to tackle security without invading privacy. There is a long road ahead, but there is a lot of potential with these technologies.
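To make the idea of automated phishing detection concrete, here is a minimal sketch of a naive Bayes text classifier - a common baseline for this kind of task, not the specific system McCarthy refers to. The training messages below are entirely hypothetical; a real deployment would use far larger datasets and richer features:

```python
from collections import Counter
import math

# Entirely hypothetical training messages, for illustration only.
phishing_msgs = [
    "verify your account now click this link",
    "urgent your password expires click here to confirm",
    "claim your prize transfer fee required",
]
legit_msgs = [
    "meeting rescheduled to friday afternoon",
    "please find the quarterly report attached",
    "thanks for your help with the project",
]

def word_counts(docs):
    # Count word occurrences across all documents in a class.
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

PHISH = word_counts(phishing_msgs)
LEGIT = word_counts(legit_msgs)
VOCAB = set(PHISH) | set(LEGIT)

def log_likelihood(message, counts):
    # Naive Bayes word likelihoods with add-one (Laplace) smoothing.
    total = sum(counts.values())
    return sum(
        math.log((counts[word] + 1) / (total + len(VOCAB)))
        for word in message.lower().split()
    )

def classify(message):
    # Assumes equal prior probability for each class.
    if log_likelihood(message, PHISH) > log_likelihood(message, LEGIT):
        return "phishing"
    return "legitimate"
```

With this toy data, `classify("urgent click this link to verify your password")` returns `"phishing"`.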

 

UNTIL:  Give us a brief overview of the field. Who are its leaders? Are some countries more advanced than others?

 

Beridze: There is no clear-cut ‘leader’ right now. At present, we believe that more than 40 states have launched some form of national initiative with respect to AI and related tech. Of these, around half have published official national strategies or plans to serve as a framework for their actions. It’s very interesting to see this, as each country comes at issues such as research, development, investment and regulation from its own, often very different, perspective. At the same time, while several Member States are actively pursuing national initiatives on AI, many others lack any substantial AI activities. A leading concern for those states not engaged in AI and related technologies is that they may fall behind those that are already heavily innovating and investing, further deepening a digital divide. This is grounds for concern, both regionally and globally, as it can jeopardize social stability, bringing unmanageable migration, increased crime rates and many other negative effects.

 

UNTIL: The term ‘peace tech’ sounds innocent enough. But aren't there real concerns about the abuse of human rights that some innovations can bring?

 

Beridze: Very much so. The world is still very much trying to come to terms with surveillance, let alone the massively intrusive scale that AI-based technologies imply. It is important for us as a society to ask whether we are ready for the use of facial recognition by law enforcement, and for an extensive network of surveillance devices and sensors, to become the norm - and to what degree society is willing to permit an increased law enforcement presence in private life, even if it is in the interests of public safety and security. Going forward, it is also going to be important for us to try to strike a balance between the right to privacy and some level of surveillance to track, for example, online phishing, child pornography and other malicious practices.

 

McCarthy: Another alarming issue that arises in connection with the use of these technologies is the risk, and indeed, reality of bias. How can we make sure these tools are designed in a way that does not reproduce existing biases of gender, religion, ethnicity, origin, etc.? How do we make sure they remain transparent in their use, so that we can understand where and how bias might emerge?

 

Beridze: Correct. Both privacy and bias are issues that need to be thoroughly thought through now, before these technologies are fully integrated into crime prevention and criminal justice, because applying certain technologies is rather like a one-way street: once implemented, there is no easy way back. I’m pleased to say, though, that in our work with law enforcement we have seen strong sensitivity towards these issues, as well as an openness to tackle these concerns head-on.

 

UNTIL:  How widespread is the use of predictive policing? How worried should we be?

 

McCarthy: First of all, it is important to clarify that when we talk about predictive policing, we’re not talking about the ‘Minority Report’ kind of predictive policing, where police identify who will commit a crime before it occurs. Rather, we’re talking about a resource optimization tool – the use of data collected by police departments about the type, location, date and time of past crimes to generate a forecast of when, where and what types of crimes are most likely to occur. This kind of predictive policing has already been developed and deployed in several cities across the globe. For instance, in the United States, predictive policing systems have been used in cities such as Chicago, Los Angeles, New Orleans and New York as far back as 2012.

 

Outside the US, police departments in China, Denmark, Germany, India, the Netherlands and the United Kingdom are also reported to have tested or deployed predictive policing tools at a local level. Japan has announced its intention to put a national predictive policing system in place in the run-up to the 2020 Tokyo Olympics, and a predictive policing programme recently approved in the United Kingdom could be rolled out to all national police forces in the near future. It’s an important technology worth exploring, as it could help law enforcement combat crime more efficiently. That’s not to say it is without its challenges, though. Bias is, again, one of the big issues here. How do we ensure that communities do not suffer as a result of aggressive policing (or even under-policing) on the basis of biased decision-making?
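The resource-optimization flavour of predictive policing McCarthy describes - forecasting from the type, location, date and time of past crimes - can be sketched in a few lines. The incident records and grid size below are invented for illustration; deployed systems use far more sophisticated spatiotemporal models:

```python
from collections import Counter
from datetime import datetime

# Hypothetical historical incident records: (latitude, longitude, timestamp).
incidents = [
    (41.88, -87.63, "2019-06-07 22:15"),
    (41.88, -87.63, "2019-06-14 23:05"),
    (41.89, -87.62, "2019-06-21 22:40"),
    (41.75, -87.55, "2019-06-10 09:30"),
]

def cell(lat, lon, size=0.01):
    # Snap coordinates to a grid cell of roughly one kilometre.
    return (round(lat / size) * size, round(lon / size) * size)

def hotspots(records, weekday, hour, k=3):
    # Rank grid cells by count of past incidents at a given weekday/hour.
    counts = Counter()
    for lat, lon, ts in records:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        if t.weekday() == weekday and t.hour == hour:
            counts[cell(lat, lon)] += 1
    return counts.most_common(k)
```

Querying `hotspots(incidents, weekday=4, hour=22)` ranks the grid cells with the most past Friday 10 p.m. incidents, which is where extra patrols would be suggested. The bias concern above enters exactly here: if the historical records reflect over-policing of one neighbourhood, the forecast will keep sending patrols back to it.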

 

UNTIL: How will AI transform law enforcement or even peacekeeping? What will it look like in 2030 compared to now?

 

McCarthy: This is difficult to predict, because so much depends on whether we enter a collaborative or a competitive paradigm. A collaborative engagement means global law enforcement practices will not be based on taking extreme measures, emerging from distrust, preventive speculation and secrecy, but on the opposite: positively evolving standards to ensure compliance with human rights, based on resource-sharing, open source practices and mutual trust across all fronts. This will depend on whether and how States, institutions and corporations engage in the global conversation in the coming years.

 

Beridze: If AI is indeed increasingly integrated into law enforcement, it is also quite conceivable that, by increasingly automating certain law enforcement tasks, we could have the opportunity to consider entirely new approaches to law enforcement - for instance, refocusing the efforts of law enforcement officers on engaging with the community and on the more social and human functions that are believed to be beyond even the most advanced machines. This rise in community engagement could be one possible future of an AI-enabled police force.

 

UNTIL: How is AI being used to advance the SDGs?

 

Beridze: The potential for these technologies to contribute to the SDGs is huge. To quote the UN Secretary-General, they can “turbocharge” progress toward the SDGs. To give a few examples, there is a lot of hope in terms of using AI to identify and report child sexual abuse material, thereby contributing to Target 16.2; to identify patterns in financial data that may indicate the presence of human trafficking networks (Targets 5.2 and 8.7); or to monitor payments for irregularities that might indicate fraud or corruption (Target 16.5). The technology is there; it’s just up to us to tap into it and make the most of it. The AI for Good Platform, which is hosted by the ITU and engages nearly 40 other UN entities, is an important initiative for generating discussion and action in this regard, and it promotes the identification of AI-related projects.
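The payment-monitoring idea Beridze mentions (Target 16.5) can be illustrated with a simple statistical screen. This sketch flags amounts whose modified z-score, based on the median absolute deviation, is unusually large; the ledger values are hypothetical, and real anti-corruption systems combine many more signals than amount alone:

```python
import statistics

# Hypothetical payment amounts from a procurement ledger.
payments = [1020, 980, 1010, 995, 1005, 990, 1000, 9850]

def flag_irregular(amounts, threshold=3.5):
    # Flag amounts with a large modified z-score; the median and the
    # median absolute deviation (MAD) are robust to the outliers we seek.
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [a for a in amounts if mad and 0.6745 * abs(a - med) / mad > threshold]
```

Here `flag_irregular(payments)` returns `[9850]` - the one payment far out of line with the rest - which an auditor would then examine for a legitimate explanation.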

 

UNTIL: Has crime prevention technology kept pace with the ability of criminals to use technology for illegal activities? What more needs to be done? What role can the UN play?

 

McCarthy: Even though the means to develop such technologies exist and are, to a certain degree, open source or commercially available, AI has not yet played a significant role in crime or terrorism. While we have seen the telemarketers behind robocalls turning to AI to automate processes, and there have been a few alarming instances involving technologies, such as drones, that could leverage AI, we have not yet substantially observed the use of AI by criminal or terrorist groups. That is not to say that this will always remain the case; there is simply a lack of empirical evidence at present on the development and use of such technologies for malicious purposes. As AI becomes more integrated into the functioning of our society and as the costs and technical knowledge required decrease, we’re likely to see an increase in activity in this field. In terms of what can be done, it’s important for us to closely monitor this space and stay informed of the latest technological developments in order to stay one step ahead in preventing and reacting to any possible malicious use-cases.