
A nightmare for regulators : Ramesh Shrestha

Where do we begin? 

In 1984 the Danish scientist Steen Malte Willadsen became the first to clone a sheep, taking genetic material from an embryo and transferring it into sheep eggs whose nuclei had been removed, thus producing genetically identical lambs. However, it was a team at the Roslin Institute of the University of Edinburgh that, in 1996, cloned the first mammal from a cell taken from an adult sheep rather than from embryonic material. Since then, at least 21 different animals have been cloned (cow, rat, camel, carp, cat, coyote, deer, dog, frog, goat, horse, mouse, mouflon, mule, two species of monkey, donkey, ibex, rabbit, buffalo and wolf). Some of this cloning was conducted for commercial purposes, some for the reproduction of endangered species. There are ethical issues on both counts. 

Bioengineering and nanotechnology have evolved exponentially over the decades, enabling scientists to transplant organs, grow stem cells to treat certain conditions, and more. The issue of bioethics has somehow been managed thus far, possibly because these technologies have not yet produced ‘designer babies’ with a predetermined genetic makeup. 

The release of ChatGPT has given a whopping jolt to politicians and regulators everywhere. Such AI software, available in any app store, can mimic a person’s syntax and style and prepare any kind of speech or document the user requests; paired with voice-cloning tools, it can mimic voices as well. This software has the potential to do significant harm to society, intentionally or unintentionally. People just have to keep Murphy’s law in mind. 

Imagine a scenario: someone releases a fake conversation, generated with ChatGPT and voice-cloning tools, in which President Putin and President Xi Jinping plot to destroy US bases in the South China Sea and agree to establish a joint Chinese-Russian military base in Mexico. With modern snooping devices, this is not impossible. 

Developers have taken this a step further with RizzGPT (still a prototype as of March 2023), which listens to a conversation through a small microphone, transcribes it to text in real time, prepares a reply and sends it to the wearer’s headphone with a delay of about five seconds. With the spread of 5G, ChatGPT, RizzGPT and AR glasses, such biohacking has become a reality. Goodbye to privacy! 
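The loop described above can be sketched in a few lines. This is a toy illustration only: the functions below are hypothetical stand-ins, where a real device would call a speech-to-text service, a language-model API and an audio output in their place.

```python
# Minimal sketch of a RizzGPT-style assistant loop.
# transcribe() and generate_reply() are hypothetical stand-ins for
# real speech-to-text and language-model calls.

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text service."""
    return audio_chunk.decode("utf-8")

def generate_reply(heard_text: str) -> str:
    """Stand-in for a language-model call that drafts a response."""
    return f"Suggested reply to: '{heard_text}'"

def assistant_loop(audio_chunks):
    """Mic -> transcript -> model reply -> wearer's earpiece."""
    for chunk in audio_chunks:        # audio captured by the small mic
        text = transcribe(chunk)      # simultaneous transcription
        reply = generate_reply(text)  # model prepares a reply
        yield reply                   # delivered to the headphone (~5 s later)

replies = list(assistant_loop([b"How was your weekend?"]))
print(replies[0])
```

The point of the sketch is the privacy problem the article raises: every utterance the mic hears passes through third-party services before the wearer ever responds.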

Yet another device, being tested by an AI company called Neurable, is a headphone with silver electrodes lining its ear pads that can monitor brain activity. Through this brain-headphone interface (a kind of neuro-hacking) it detects how well the brain is focusing on the task being performed. It monitors cognitive function and detects when cognition slows down. Once the individual loses focus, the device plays a message into the ears: ‘it is time to take a break’. 
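The logic described can be illustrated with a toy model, under the assumption (ours, not Neurable’s) that the device reduces brain activity to a rolling focus score and triggers the prompt when that score dips below a threshold:

```python
# Toy focus monitor: fires a break prompt when the rolling average of
# focus readings drops below a threshold. The scores and threshold are
# illustrative assumptions, not Neurable's actual signal processing.
from collections import deque

def monitor_focus(scores, threshold=0.5, window=3):
    """Yield a break prompt when the rolling average focus score
    over `window` readings falls below `threshold`, else None."""
    recent = deque(maxlen=window)
    for score in scores:                 # one reading per time step
        recent.append(score)
        if len(recent) == window and sum(recent) / window < threshold:
            yield "it is time to take a break"
        else:
            yield None

readings = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # focus gradually fading
prompts = list(monitor_focus(readings))
```

With these readings the first prompts appear only once the average of the last three scores crosses below 0.5, mirroring how the headphone waits for a sustained loss of focus rather than a single dip.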

AI beyond toys & games

It is becoming increasingly clear that AI technology is steadily taking over the cognitive functions of humans. It listens to conversations, analyses documents, prepares summaries for its users and so on. Human beings are supposed to use technology for development, but technology now appears to be taking over human actions. 

On 19 May Foreign Policy Journal published an AI-based analysis of Russia’s next steps in its war in Ukraine. It predicted a shift in Russia’s military strategy and an intensification of drone offensives on both sides. The AI also predicted a combination of old and new warfare tactics, including trench warfare alongside kinetic operations, and envisaged a new paramilitary group, funded by the Russian gas giant Gazprom, to maintain control over the Wagner Group. 

The use of AI is no longer about GMOs or stem cells or high-speed cars. AI is now able to invade the human mind and take over our cognitive capacity. Are we surrendering human autonomy to technology? Who is in charge? 

Is regulation the answer? 

With the release of ChatGPT there have been calls to regulate the use of such technology. There are also some fundamental questions. Who guarantees the accuracy of documents produced by ChatGPT? These documents involve not human thinking but machine learning. Could such a document be legal? If actions are taken on the basis of such an analysis and it turns out to be false, who takes responsibility? Should there be a legal issue, whom does one sue: the developer or the user? Given the positive and negative potential of this technology, even its creator, Sam Altman, and Apple co-founder Steve Wozniak have suggested that governments or a powerful agency should take on a regulatory role, with the authority to revoke licences in order to ensure compliance with safety standards. 

Unanswered question

Who has the authority and competence to regulate this technology, developed by private companies with millions of dollars and years of hard work? The UN SG intends to create a scientific advisory board and favours creating a knowledge-based AI agency, similar to the IAEA, with full regulatory power. 

Meanwhile there are some sideshows in place. UNESCO recently announced that it had hosted a ministerial-level virtual meeting, billed as the ‘UN’s first robotics conference’, at the end of May with selected participants to discuss the implications of AI. To UNESCO’s credit, at its Global Conference in November 2021 it had called for the ethical application of AI technology that protects the ‘cultural diversity’ of countries, serves the good of humanity and prevents harm to societies and the environment. It also emphasised the need for global cooperation among multiple stakeholders across various disciplines. 

Yet another agency, the International Telecommunication Union (ITU), which calls itself the ‘UN tech agency’, announced that it is convening the ‘AI for Good’ Global Summit in Geneva on 6-7 July to showcase AI and robot technology as part of a global dialogue on how artificial intelligence can serve as a force for good. The SG has an unenviable task of herding cats in the UN system. 

Jumping the gun?

It is imperative to have one coherent voice on behalf of the UN on the application of this technology, one that complements the recommendations of private-sector developers and users alike. As the largest global organisation, the UN has the experience and can assemble the expertise required for intergovernmental cooperation at the global level, engaging with the private sector and acting as an enforcing agency. 

It is beyond doubt that a framework of governance is needed to guide further research on and application of AI. A global summit of AI experts, investors, marketers and enforcement agencies, including Interpol, is essential, with an agenda that lays out an enforceable code of conduct on what is and is not allowed in the application of this technology. It should take into account privacy, accountability and the capacity to mitigate potentially harmful consequences. 

The CEO of Neurable argues that there is a lot of fear-mongering around AI and its potential harms to society. He suggests that the authorities should hold off on any regulation for now and let AI technology continue to develop, believing that there are still many innovations to be made and that ‘we are still far away from anything related to dystopia’. This view stands in sharp contrast to that of the creator of ChatGPT. 

Challenge ahead 

The UN, as the global agency with universal reach, could play the regulatory role and set standards for the application of AI technology. But AI technology is the brainchild of the private sector, built with years of hard work, millions in investment and returns to protect. The UN has been conveniently used, abused, misused and sidelined by powerful countries. All these AI developments are taking place in highly industrialised countries and blocs such as the USA, the EU and China, each with its own digital empire and its own economic and political agenda. Will any of them listen to the cries of the UN system? 

In all likelihood these private AI developers will not want the UN or any other entity to interfere in their work and profits. The UN could, however, convene a global summit to get work moving on an enforceable universal code of conduct, made mandatory for all UN member states, with a local watchdog in each of them. 

The use and misuse of AI is far more consequential than the UN Convention on the Rights of the Child (CRC) and its Optional Protocol, which many countries signed with reservations; as a result, the prevention of child labour, child marriage and the recruitment of children into military forces could never be fully implemented. A code of conduct on AI should be signed by all UN member states without any reservations. Will the UN be able to regulate AI technology through local watchdogs, no matter which institution or company develops the AI code of conduct? 
