UNTIL Interview: Can Artificial Intelligence Expedite the SDGs?

While computers that make data-driven decisions have become routine for much of the world, the field of international development has so far been slow to embrace machine learning and Artificial Intelligence. But enthusiasm is growing.

There is great potential for these tools to help achieve the Sustainable Development Goals by improving efficiency, detecting patterns in complex datasets, and increasing productivity. Yet the adoption of this technology poses ethical challenges too.

Kriti Sharma is a leader in the emerging field of technology for development. She is the founder of AI For Good, UK, an organization at the forefront of using AI to address pressing social challenges, including domestic violence, inequality, and sexual health. She is also a UN Young Leader, an advisor to the UK Government’s Centre for Data Ethics and Innovation, and a member of the Advisory Panel of UNTIL.

Q: How do you think AI can help advance the 2030 Agenda?
 
Sharma: We are at the point with the SDGs where we need to accelerate growth. This is where AI and other emerging technologies can help. If you look at how AI is being implemented, there’s a lot of investment and focus on making people click on more ads or on driving digital addiction. But we also have a huge opportunity to apply AI in a meaningful way to areas such as healthcare, education, sustainability and reducing inequality. That’s what’s getting me very excited. There’s also growing interest in the AI community in solving very big, audacious problems. While AI can’t be an end-to-end solution to a lot of SDG issues, it can be an enabler, helping to make things happen and providing deeper insights.
 
Q: Can you describe some examples of how AI is being used to support development in the least developed countries?
 
Sharma: Two projects I’m working on right now are good examples. One is a project we launched in the north of India in partnership with the Population Foundation of India. It uses AI to give adolescents access to sexual and reproductive health information. Historically they would get this knowledge through community health workers or frontline organizations – it’s very face-to-face. But sexual health is an issue attached to a lot of stigma. In a country like India, where a lot of young people are coming online for the first time, there is an opportunity to give them access to unbiased, non-judgmental, trusted information on these very sensitive topics. We did a pilot in the northern region and young people absolutely loved the fact that they could engage with a non-judgmental digital companion. We built this in a very fun way: the AI takes the form of a digital avatar called Dr. Sneha. Dr. Sneha talks to them about sensitive topics like menstruation, sexual and reproductive rights, domestic violence, healthy relationships, virginity and pornography. These are very important issues. In Rajasthan, where I grew up, the data tells you that nine out of ten young girls don’t know what menstruation is until they have their first period. That happened to me too. It happens because there’s a lot of social stigma; it’s not part of the curriculum and young people have not been given this information. I think we can democratize that by building digital tools with AI that give young women personalized information. Because they have a digital character giving them unbiased information – they are not talking to a human – they open up a lot more, and they can trust it as a source of accurate information instead of going to porn websites. This is an example of using AI for development in a new way.

 
Another project we launched, in South Africa, is called rAInbow, built in partnership with the Sage Foundation and the Soul City Institute for Social Justice. Almost one-third (30 percent) of all women who have been in a relationship have experienced physical and/or sexual violence by an intimate partner. Domestic violence is an issue with huge stigma associated with it, and few cases are actually reported. To counter this there are helplines, but they aren’t always open, which limits how much they can help. So we built rAInbow as a companion for women at risk of domestic violence. rAInbow has conversations with women, provides emotional support, shares information about different forms of abuse and about how to get back on their feet financially, and answers their questions. Because they are not talking to a human, users opened up a lot more about the issues they were facing. Within the first 100 days of launch, users had nearly 200,000 conversations with the AI system.


Q: Some say the field of international development has been slow to embrace the potential of technology. How receptive was the industry to AI?
 
Sharma: The trick, I’ve found, is the right partnerships. Once you have the right partners, everything becomes a lot easier. We are seeing things open up in developing markets because of user-driven need: young people prefer a more digital experience, whether it’s about sexual education or domestic violence support. The way we interact as members of society is changing, and civil society, non-profits, governments and businesses need to respond to that. I think that realization is growing, and now that we have a few good examples of how AI and other emerging technologies can be used in this area, it’s only going to get better and easier. For the project in South Africa we had two partners – the Sage Foundation and Soul City – who are very keen to look at new, digital ways of doing things. In India we worked with the Population Foundation of India, who are incredible – they totally understand the importance of digital technologies and already reach 47 million people through their programs, but they know that with digital tools they can create more personalized, 24/7 interactions, which is not always possible through broadcasts. So I would say not everyone has joined the party, but the ones who have can see the benefits, and we reinforce that by showing results. These projects show there’s a huge case for making this technology affordable and accessible to everyone.
 
Q: You write a lot about the biases of AI. Do you see different societies inserting their own cultural biases into AI?
 
Sharma: AI is not very different from how humans function. AI learns from data; humans learn from their surroundings. AI is very much dependent on the underlying data, and if that data carries years of historic, systemic bias, then that’s what the machine is going to learn. For example, there’s evidence that AI algorithms discriminate against women candidates for senior positions. It’s not because the machine is sexist; it’s because the machine is learning from the data, and since historically men have held senior positions, that’s what the algorithm will select. Another example is facial recognition algorithms, where a study from MIT found the error rate for light-skinned men is less than 1 percent, whereas for darker-skinned women it’s more than 35 percent – and that’s not because the machine is sexist, but because the facial recognition system has not been given diverse datasets.
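To make that mechanism concrete, the sketch below shows how a toy model trained on historically skewed promotion records simply reproduces that skew, even for otherwise identical candidates. The data and model here are synthetic and purely illustrative, not taken from any of the systems discussed in the interview.

```python
# Minimal illustration: a model trained on historically skewed hiring data
# reproduces that skew. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, and gender (1 = male, 0 = female).
experience = rng.normal(10, 3, n)
gender = rng.integers(0, 2, n)

# Historical "promoted to senior role" labels: experience matters, but past
# decisions also favoured men -- the systemic bias baked into the data.
logits = 0.5 * (experience - 10) + 1.5 * gender - 0.75
promoted = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train a simple classifier on the historical records.
X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, promoted)

# Two identical candidates who differ only in gender.
candidates = np.array([[12.0, 1], [12.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # the male candidate scores higher
```

The model is not "sexist" by design; it has simply learned that gender predicted promotion in the past, which is exactly the dynamic described above.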
 
Q: There are growing calls for more ethical standards in AI. What’s driving this, and who do you think should monitor these standards?
 
Sharma: Ethical standards in any emerging technology are very important. AI is a new area expanding at a very rapid pace. We have ethical standards for doctors, lawyers and accountants, but not for technologists. We need to set up standards and rules for who creates the technology, how it is used and what data it is learning from. Algorithms are choosing what news and information you are presented with, what jobs you should apply for, and whether your mortgage is approved. In fact, in some countries even the criminal justice system is affected. That’s the scale of algorithmic decision-making. It’s critical that we build fairness, transparency and accountability into these systems. Just because it’s an algorithm shouldn’t mean it’s above the law.
 
Q: Is there a movement?
Sharma: Progress is much greater now than before: the tech community is more aware, and policymakers are taking steps in the right direction. For example, the UK government has done a lot of work. It has established an organization called the Centre for Data Ethics and Innovation, which is working with policymakers, civil society and industry to define in more detail what good looks like. Within the UN, I’ve spoken at the Commission on Science and Technology for Development and at the UN ITU summit last year, and we’re starting to see a growing movement around the need to take action, and to do it now.
 
Q: What role does the UN have in this?
Sharma: More international cooperation would be very helpful. A lot of countries are trying to do their own thing and to take the lead in ethical AI. We don’t need countries competing with each other; we need them collaborating and cooperating, and I think the UN could help facilitate that conversation and knowledge sharing. Ultimately AI is a very universal technology, and something developed in one part of the world could well be used elsewhere. Therefore it’s a global issue. The European Commission has also issued its high-level guidelines on the responsible development of the technology.
 
Q: In what ways do you think our world will look different in 2030 because of AI?
Sharma: Right now there’s so much hype and excitement about AI. But ultimately it’s not about AI; it’s about what it can do for people. Today we get excited if a robot opens a door. I can’t wait to get to the stage where this is not news, when AI becomes part of our lives and helps make them more efficient, helps us reach more people who need support and helps create more equality. In an ideal world, by 2030 we shouldn’t be talking about AI. It should already be adding value and equality to society, and we will have people from diverse backgrounds collaborating on creating this technology.
 
Q: The field of development faces so many challenges. Why is AI important?
Sharma: This is such an exciting time to get involved. You don’t have to be a rocket scientist to leverage AI; it’s very much within reach, and it can be used to tackle many difficult challenges. Through Sage FutureMakers in the UK, I do a lot of work with young people who are curious about creating the future of work and the future of society, and they think about solutions where they can use AI. These kids come up with the most imaginative, most amazing problems they want to solve. For example, they want to build a companion for older people who live alone, a personal tutor for children in developing countries who don’t have access to quality education, or a tool that lets people compare their climate footprints and gives personalized recommendations on how to reduce their impact. We need to showcase what AI can actually do for society, like these young people who are using their imagination to apply AI for social good.

Secretary-General

"The advances of the Fourth Industrial Revolution, including those brought on by a combination of computing power, robotics, big data and artificial intelligence, are generating revolutions in health care, transport and manufacturing.  
I am convinced that these new capacities can help us to lift millions of people out of poverty, achieve the Sustainable Development Goals and enable developing countries to leap‑frog into a better future."

23 March 2018, New York