Time to demystify the use of AI in government or risk losing it altogether

30 Oct 18

We risk missing out on the potential Artificial Intelligence has to improve public services if people don’t trust what governments are up to, says the Centre for Public Impact’s Danny Buerkli.


When it comes to AI, we hear and fear either the hype or the horror.

The excitement surrounding AI is officially out of control. Publications are full of wild-eyed headlines about how AI will magically solve all kinds of problems that governments deal with, from wildfires to cancer diagnosis.

This irresponsible combination of hype and scaremongering reached absurd heights with ‘Sophia the AI robot’, an animatronic figure that would be right at home on an amusement-park ride, being presented as an intelligent entity and granted Saudi citizenship. Soon publications started to worry about whether ‘Sophia’ might want to ‘destroy humans’.

The potential of AI to improve the way we deliver public services is enormous, but not in the way that breathless stories about this technology taking over the world would have us believe.

Both the hype and horror get in the way of a clear-eyed assessment of what AI can and cannot do.

To cut through the scaremongering narrative of ‘machines taking over the world’, we need to take measured steps to build legitimacy for the use of AI in government as we proceed. For AI in government to succeed, it needs to be designed and implemented in a legitimate way: one that commands trust and understanding.

I ran a roundtable debate on the potential of AI in government at the recent Tallinn Digital Summit, a meeting of some of the world’s most digitally advanced governments. The ministers, senior civil servants and technical experts all agreed that building trust and legitimacy into the use of AI from the get-go is critical.

Recent polling has shown that citizens worldwide are generally positive about government’s use of AI. The level of support, however, varies a lot by use case. As the use of AI expands into more sensitive domains, citizens are beginning to worry. For example, 51% disagree with the use of AI to determine innocence or guilt in a criminal trial.


People will only accept the use of AI in public services and policymaking when they trust it. If they don’t, we will quickly see a backlash forming and we’ll lose out on the promise and potential of this technology.

Using AI in government thoughtfully and in a way that is seen as legitimate is possible. Governments are, however, just learning how to do it.

In 2012 Durham Constabulary, the police force responsible for the area around Durham in the north-east of England, began developing an AI-based tool that supports custody officers in assessing the likelihood that an individual will re-offend.

While many open questions remain about how exactly the tool performs, its introduction has been comparatively thoughtful and deliberate.

The police force has been relatively open about the tool and has made details of the model publicly available. The introduction of the risk-assessment tool was also set up as an experiment with research partners from Cambridge University, who provide an independent review of the tool’s effectiveness.
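For readers curious about what sits behind such a tool, here is a minimal sketch of how a re-offending risk model might be built, assuming a random-forest classifier (the approach Durham’s tool is reported to use) and the scikit-learn library. Every feature name, threshold and data point below is a synthetic illustration for explanatory purposes, not the actual model or its inputs.

# Minimal sketch of a re-offending risk model. Assumes a random-forest
# classifier; all features and data here are synthetic illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical custody records: one row per individual.
n = 1000
X = np.column_stack([
    rng.integers(18, 70, n),   # age at arrest
    rng.integers(0, 15, n),    # number of prior offences
    rng.integers(0, 5, n),     # prior custodial sentences
])
# Hypothetical outcome: 1 = re-offended within two years, 0 = did not.
y = (X[:, 1] + rng.normal(0, 2, n) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The model produces a probability of re-offending. In practice this
# would be advisory input for a custody officer, not a binding decision,
# and the model itself would be subject to independent review.
risk = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, risk > 0.5))

The point for legitimacy is less the model itself than what surrounds it: published details of the features used, an advisory rather than decisive role for the scores, and independent evaluation of whether those scores are accurate and fair.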

The only way to make the promise of AI in government come true is to do this with citizens rather than to them, to develop systems with those working in public services and the public rather than for them.

This requires government to operate in ways it’s not necessarily used to. It requires, for example, empathising with the needs of citizens and civil servants and building AI systems that are resolutely open to external scrutiny. We have set out a practical plan to help governments achieve this.

Now is the right time to improve public service delivery and policymaking with the help of AI, but we need to do so without the hype or the horror.

Unless we do this with care, governments will not get the broad public support they need for these technologies to be successful in improving people’s lives and delivering better public services.

There is a lot at stake here, and neither governments nor citizens can afford to lose this opportunity.

  • Danny Buerkli

    Programme director at the Centre for Public Impact (CPI), a global not-for-profit foundation founded by The Boston Consulting Group (BCG) and committed to helping unlock the positive potential of governments to improve outcomes for citizens.

    Buerkli has been leading its research and engagements related to the use of Artificial Intelligence in government.
