UNICEF's Guidelines on Artificial Intelligence - Version 2.0 : UN News / Tom McDermott



Governments in North America and Europe have devoted considerable attention lately to the impacts of Facebook, Instagram, YouTube and other internet services on children. Children are being influenced, led and molded by global forces outside family, classroom and community. Moreover, individual children are identified, listed, and categorized for future profit. Data about individual children is sold, resold, and stored for future use long before those children become adults.

Aside from government hearings and discussions around possible new regulations, it is not clear to me that much has so far been accomplished. And even where such discussions have happened, they have been confined to a few countries in the West.

So it was nice to see a new report by UNICEF on its global consultations on AI and children. You probably will not have time to read the full document, but in case you do, it is linked below. The video and the interview will give you an overview. What is useful for all of us is to stay alert to the development of artificial intelligence and its likely impacts on the world's children.

On the one hand - 

It is good to see that UNICEF is attempting to tackle issues around the application of artificial intelligence to children. It is even better to see that the Office of Global Insights and the Government of Finland have undertaken this review with the participation of 250 children in five countries. It is also good to see that the resulting document is Version 2.0 and a major step beyond the version produced in 2020.  Moreover, the authors acknowledge that this set of guidelines is part of an evolving process that is still in its early days.

On the other hand - 

I was a bit disappointed by the actual guidelines, which I find still too vague to guide practical action by governments or corporations using AI.  They also provide little guidance for parents and children themselves.   


While a good start, the guidelines are only that - a good start on a very complex set of issues.  Much more work will be needed at both international and national level to sort out policies and regulations.

In the interview below, Steven Vosloo mentions the problem of trying to balance a profit motive against ethical considerations. Unfortunately, we know too well that such a 'balance' is seldom found. Moreover, even if a particular government sets out AI policies that fit the needs of its children, the technology of AI will continue to be driven by global companies like Facebook and Google situated far beyond its national borders.

Vosloo also voices the concern that government ministers will leave these decisions to the technicians. My concern is rather that neither government ministers nor technicians will make the decisions, but that they will be made by corporate boards and CEOs whose primary concerns are stockholder returns, financial profits and fending off government regulation.

Tom McD

 


27 November 2021
UN News / UNICEF / Diefaga

Digital Child’s Play: protecting children from the impacts of AI



Children are already interacting with AI technologies in many different ways: they are embedded in toys, virtual assistants, video games, and adaptive learning software. Their impact on children's lives is profound, yet UNICEF found that, when it comes to AI policies and practices, children’s rights are an afterthought, at best.

In response, the UN children’s agency has developed draft Policy Guidance on AI for Children to promote children's rights, and raise awareness of how AI systems can uphold or undermine these rights.

Conor Lennon from UN News asked Jasmina Byrne, Policy Chief at the UNICEF Global Insights team, and Steven Vosloo, a UNICEF data, research and policy specialist, about the importance of putting children at the centre of AI-related policies.

AI technology will fundamentally change society

Steven Vosloo 

At UNICEF we saw that AI was a very hot topic, and something that would fundamentally change society and the economy, particularly for the coming generations. But when we looked at national AI strategies, and corporate policies and guidelines, we realized that not enough attention was being paid to children, and to how AI impacts them.

So, we began an extensive consultation process, speaking to experts around the world, and almost 250 children, in five countries. That process led to our draft guidance document and, after we released it, we invited governments, organizations and companies to pilot it. We’re developing case studies around the guidance, so that we can share the lessons learned.

Jasmina Byrne 

AI has been in development for many decades. It is neither harmful nor benevolent on its own. It's the application of these technologies that makes them either beneficial or harmful.

There are many positive applications of AI: it can be used in education for personalized learning, in healthcare, and in language simulation and processing, and it is being used to support children with disabilities.

And we use it at UNICEF. For example, it helps us to predict the spread of disease, and improve poverty estimations. But there are also many risks that are associated with the use of AI technologies.

Children interact with digital technologies all the time, but they're not aware, and many adults are not aware, that many of the toys or platforms they use are powered by artificial intelligence. That's why we felt that special consideration has to be given to children, because of their particular vulnerabilities.



Privacy and the profit motive

Steven Vosloo 

The AI in such a toy could be using natural language processing to understand words and instructions, and so it's collecting a lot of data from that child, including intimate conversations, and that data is being stored in the cloud, often on commercial servers. So, there are privacy concerns.

We also know of instances where these types of toys were hacked, and they were banned in Germany because they were not considered safe enough.

Around a third of all online users are children. We often find that younger children are using social media platforms or video sharing platforms that weren’t designed with them in mind.

They are often designed for maximum engagement, and are built on a certain level of profiling based on data sets that may not represent children.




Predictive analytics and profiling are particularly relevant when dealing with children: AI may profile children in a way that puts them in a certain bucket, and this may determine what kind of educational opportunities they have in the future, or what benefits parents can access for children. So, the AI is not just impacting them today, but it could set their whole life course on a different direction.

Jasmina Byrne 

Last year this was big news in the UK. The Government used an algorithm to predict the final grades of high schoolers. Because the data fed into the algorithm was skewed towards children from private schools, the results were appalling, and discriminated against many children from minority communities. So, they had to abandon that system.

That's just one example of how algorithms based on biased data can have really negative consequences for children.

‘It’s a digital life now’

Steven Vosloo 

We really hope that our recommendations will filter down to the people who are actually writing the code. The policy guidance is aimed at a broad audience: the governments and policymakers who are increasingly setting strategies and beginning to think about regulating AI, and the private sector that often develops these AI systems.

We do see competing interests: decisions around AI systems often have to balance a profit incentive against an ethical one. What we advocate for is a commitment to responsible AI that comes from the top: not just at the level of the data scientist or software developer, but from top management and senior government ministers.

Jasmina Byrne 

The data footprint that children leave by using digital technology is commercialized and used by third parties for their own gain. They're often targeted by ads that are not really appropriate for them. This is something that we've been closely following and monitoring.

However, I would say that there is now more political appetite to address these issues, and we are working to get them on the agenda of policymakers.

Governments need to put children at the centre of all their policy-making around frontier digital technologies. If we don't think about them and their needs, then we are really missing great opportunities.

Steven Vosloo 

The Scottish Government released their AI strategy in March, and officially adopted the UNICEF policy guidance on AI for children. Part of that was because the government as a whole has adopted the Convention on the Rights of the Child into law. Children's lives are not really online or offline anymore; it's a digital life now.

This conversation has been edited for length and clarity. You can listen to the interview here.




