AI - UNICEF at World Economic Forum Calls for Governments and Corporations to Exercise Caution / Cites Potential Impacts on Children : Moonshot




Steven Vosloo, UNICEF Digital Foresight & Policy

See the related UNICEF policy brief for WEF

Tech companies and politicians must act to understand the potential impact generative AI will have on children, who are growing up amid its increasing ubiquity. Generative AI offers plenty of benefits, but it also poses many unanswered questions and risks for children, Steven Vosloo, digital foresight and policy specialist at UNICEF, writes in a blog post. UNICEF has published a briefing called Generative AI: Risks and opportunities for children.

“Policymakers, tech companies and others working to protect children and future generations need to act urgently. They should support research on the impacts of generative AI and engage in foresight – including with children – for better anticipatory governance responses”, he writes, presenting the UNICEF brief at the World Economic Forum.

“There needs to be greater transparency, responsible development from generative AI providers and advocacy for children’s rights. Global-level efforts to regulate AI, as called for by UN Secretary-General António Guterres, will need the full support of all governments.”

Vosloo says that because children and young people are the largest demographic cohort spending time online, and given the pace of generative AI development and uptake, it is crucial to understand generative AI’s impacts on children.

“AI is already part of children’s lives in the form of recommendation algorithms or automated decision-making systems, and the industry’s embrace of generative AI indicates that it could quickly become a key feature of children’s digital environment. It is embedded in various ways, including via digital and personal assistants and search engine helpers.”

He writes that generative AI’s analytical and generative capabilities can be applied across sectors to improve efficiencies and develop innovative solutions that positively impact children.

“But generative AI could also be used by bad actors or inadvertently cause harm or society-wide disruptions at the cost of children’s prospects and well-being.”

“Generative AI has been shown to instantly create text-based disinformation indistinguishable from, and more persuasive than, human-generated content. These abilities could increase the scale and lower the cost of influence operations. Children are particularly vulnerable to the risks of mis/disinformation as their cognitive capacities are still developing.”

“Longer-term usage raises questions for children. For instance, given the human-like tone of chatbots, how can interaction with these systems impact children’s development? Early studies indicate children’s perceptions and attributions of intelligence, cognitive development and social behaviour may be influenced.”

“Also, given the inherent biases in many AI systems, how might these shape a child’s worldview? Experts warn that chatbots claiming to be safe for children may need more rigorous testing.”

“As children interact with generative AI systems and share their personal data in conversation and interactions, what does this mean for children’s privacy and data protection? Australia’s eSafety Commissioner believes that in this context there needs to be greater consideration of the collection, use and storage of children’s data, particularly for commercial purposes.”
