International regulatory commissions are created for domains whose scope and operations affect the whole world, such as aviation routes, maritime shipping routes, nuclear power, the internet, mobile telephony, credit cards, banking, and space, in order to keep them effective and safe through systems of regulation, control, and monitoring.
Similarly, given the wide reach and growing acceptance of artificial intelligence, demands for the formation of a global regulatory commission have begun to arise. Notably, more than five billion people in the world use the internet and are often described as digital citizens. With artificial intelligence now being deployed globally, fears of threats to their financial security, privacy, and physical safety have made the need to control the development of AI strongly felt.
It has been observed that AI is good at improving logic- and intelligence-based functioning, but its indiscriminate use can prove dangerous where decision-making requires human sensibility and understanding. The lack of regulatory and policy frameworks for AI is an emerging issue across global jurisdictions. The risks of adopting fully uncontrolled AI are greater than we imagine; in its most advanced form, it could threaten the very existence of human civilization. The most powerful AI systems should be built only when we are confident that their negative aspects can be controlled so that only positive impacts reach society. AI that competes with human intelligence can pose many known and unknown threats to humanity and society.
The world is gearing up to embrace trusted AI in the face of massive digital transformation, and the AI experiments we have seen so far have won people’s trust. AI has established itself as a very powerful tool for accelerating economic activity. Given the increasing penetration and utility of AI across sectors, it is believed that by the end of 2023 total global investment in the AI sector will exceed $300 billion, and that by 2030 AI’s contribution to global GDP will be about $16 trillion. AI research laboratories today are capable of pushing the technology in almost any direction, so an international regulatory commission is essential to ensure that such a consequential technology is not misused to serve vested interests.
Developed countries on the global stage have already started thinking about this. America is relying on its existing laws and working on a plan to tighten them as needed; China is drafting AI regulation; the AI authority in the UK will draw up new guidance and examine AI’s impact on consumers and businesses; Israel has published a proposed draft AI policy and is seeking inputs from government departments; Italy plans to review AI platforms and appoint experts; and India has announced that it will formulate a policy soon. Other countries, too, are seriously considering this.
These efforts are expected to take the shape of a global regulator by the end of 2023, one that will study potential risks, propose detailed ethical principles for responsible artificial intelligence, and provide an evolving approach to effective oversight. UK Prime Minister Rishi Sunak has proposed setting up a headquarters in London to oversee AI-related activities. Despite all these efforts, the looming concern is that a one-size-fits-all approach to regulating a technology as broad and pervasive as AI will not be effective.
Regarding AI regulation, India believes that the suggested regulatory interventions should respond to the scale, nature, and potential risks associated with the design, development, and deployment of different AI systems for different uses. The guiding principle of this approach is: ‘the greater the potential for harm, the stricter the regulatory requirements and the more far-reaching the limits of regulatory intervention.’ Self-regulation would apply where the risk of harm is low, while legal intervention has been proposed where the risk of harm is high. Other jurisdictions, such as the European Union, have adopted a similar risk-based approach and created a multi-level risk scale: unacceptable risk, high risk, limited risk, and minimal risk. A careful harmonization of self-regulation and government-led regulation offers the AI industry a great opportunity to lead the way and avoid overly heavy-handed regulation by developing principles and practices suited to contextual needs.