
Regulating artificial intelligence: it is like herding cats, by Ramesh Shrestha


A gradual evolution

Artificial intelligence (AI) has been with us for decades: CCTV cameras in the streets since the 1960s (refined through the 1970s, 80s and 90s for accuracy), Google Maps since 2005, Siri since 2011, Apple's navigation maps since 2012, Alexa since 2014, Google Assistant since 2016, and so on. Self-driving cars were successfully tested in 1995, and Teslas have been on the streets since 2008. Software has long been used in health, medicine, engineering and other fields; it just did not attract people's attention.

Here is a small example of AI applied in radiology. The Clinical Cancer Imaging Centre at King's College London trained an AI-based medical imaging system, X-Raydar, to read x-rays using 2.8 million x-rays from 1.5 million patients collected over a 13-year period from three hospitals, achieving 94% accuracy across 37 conditions. The programme also uses a language model trained on the historical reports for those x-rays and can generate text reports in real time. A group of independent radiologists verified the diagnoses and text reports on 1,400 randomly selected samples with 100% agreement. The Centre plans to use ChatGPT-4 to further enhance its diagnostic system. This is just one small example of the beneficial potential of AI, which also avoids human error and human bias. The benefits of AI have no limits.

But this time it appears different

ChatGPT has been in the making for a while; the version released in November 2022 attracted 100 million subscribers within two months. The recent release of ChatGPT-4, which can simulate human-like behaviour with intelligent responses and decisions delivered at lightning speed, sent out a wave of fear and curiosity. Its ability to recognise and mimic voices, create images and make predictions from simple instructions has startled many. There were already 180 million subscribers as of March 2024. People's immediate concern is that AI's power may be misused. There are also copyright issues, as the system is able to read and quote anything and everything available in cyberspace without anyone's permission or acknowledgement. Several lawsuits have already been filed by publishers and authors against OpenAI, the developer of ChatGPT, for copyright violation.

With the release of ChatGPT-4, many industries are looking forward to applying it in retail and e-commerce, food, banking and financial services, supply chain logistics, real estate, health and medicine, and media and entertainment, in the belief that it will further enhance productivity. The ultimate aims are efficiency gains, better services and revenue growth. Will it result in loss of employment for many? That is a question with no simple answer. But the bigger question is how to regulate AI applications for the betterment of mankind while protecting mankind from the potential harm of their misuse.

Current experience

There is also a contradiction that no one is able to address. Privacy laws forbid the use of personal information without permission. Yet social media networks thrive on personal information collected from people's daily communications on smart devices. We are dealing with a complex relationship between ethics, rights and profit, built on data generated by individuals globally on a daily basis. Many of the devices we use, such as Google Home, listen to our conversations. Similarly, the searches we run, the information we share on social media, and our voice messages and written chats are monitored and stored somewhere in cyberspace. There is no question of anyone granting permission for this: the platforms automatically accumulate data from all sources to build patterns (algorithms) of consumer behaviour, which are then used to send tailored messages, visuals and advertisements back to users.

People have benefitted from social media through easy access to information, communication and business, but social media has also become disruptive. There have been complaints against Facebook, Instagram, X (Twitter), Snapchat and others for deliberately targeting children and youth with addictive content, causing anxiety and depression. Yet the authorities have not been able to come up with any solution for regulating social media. The issue is the conflict between freedom of information and the idea of controlling what should and should not be in the public domain. A few countries, such as China, Iran, Russia and North Korea, have banned Facebook. Whether that is the right solution depends on whom you ask.

Another aspect of AI is the rise in economic fraud: stealing banking information through credit and debit cards, harvesting internet banking passwords, sending fake links to hack accounts, ATM fraud and more. Between 2021 and 2022, people lost more than $30 billion to fraud, with Mexico, the US and India ranking as the top three. The power of ChatGPT-4 will likely further amplify financial crime worldwide.

Regulating AI, an impossible task?

Since the dawn of the internet, we have seen a gradual release of hardware and software that has assisted people's personal and professional development. Its adoption in government institutions has also helped improve bureaucracy as well as development work. Regulating these products was seen as little more than a copyright issue. But the sudden spike in the adoption of AI, with all its power, has hit regulatory authorities like a tornado without warning. Given AI's potency, government authorities everywhere are concerned by the lack of mechanisms to regulate its application.

The application of AI in military technology such as drones has given a new meaning to war on the battlefield. In recent weeks Israel was reported to have used AI programmes named 'Gospel' and 'Lavender' in targeting Hamas in Gaza (The Economist, 11 April 2024). One might say 'all is fair in love and war', but that dictum was coined in 1579, when people fought with spears and swords. The 21st century is a whole new world in which the dominant fight the weak: an unequal war.

With the advent of ChatGPT-4, the biggest fear is that man may lose control of the technology. The CEO of OpenAI, which developed ChatGPT-4, has already warned that the technology is dangerous and could reshape society. There are a few initiatives to develop regulatory mechanisms, as in China, the EU, Canada, the US, the OECD and Brazil. But whether there will be agreement on what to regulate, and how much, remains to be seen. CEOs of some social media firms have been questioned and fined, but for them it has been business as usual; they can afford to pay the penalties and move on. On 8 April 2024, the CEO of OpenAI, which developed ChatGPT, was listed as the latest billionaire on the Forbes list.

We always hope for the best against all odds, as we have been doing in the search for solutions to the climate crisis. But regulating AI, with so many CEOs of diverse technology firms holding different interests and visions, would be like herding cats! Mission impossible.

Contact Ramesh: ramesh.chauni@gmail.com
