Technology has passed through many waves of change over time, and today AI enjoys enormous popularity because of its potential to resemble human intelligence. At this very point, opinions diverge on the deployment of AI: a large section of people believe that, in the coming years, Artificial Intelligence will render the human workforce obsolete. A report by Pew reveals that almost 70 percent of the U.S. population is skeptical about the inclusion of AI in the workplace. Interestingly, those who oppose this stand believe that, as a tool, AI poses no harm to the human workforce; instead, it is essential for improving the workforce's efficiency.
Naturally, the advent of AI has paved the way for regulations designed to draw the best out of this cutting-edge technology. The new Executive Order from the U.S. encourages leveraging technological advancements across various sectors, including the Federal Government, industry, and academia, to accelerate scientific discovery and to train workers in the skills they will need in the coming years. Under this Executive Order, the National Institute of Standards and Technology (NIST) will play a substantial role in drawing up a roadmap for a support system that leverages AI.
The implementation of Artificial Intelligence encourages advanced data analytics, which often brings the security of personal data to the forefront. The General Data Protection Regulation (GDPR) has compelled companies to take proper measures to secure their customers' personal data; otherwise, in the event of a data breach, a company may face a penalty of up to 4 percent of its annual global turnover or 20 million euros, whichever is greater. So, during the deployment of AI, the risk of data threats looms large, and proper implementation of regulatory acts is necessary to address it.
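As a rough illustration, the GDPR's maximum-fine rule described above (the greater of 4 percent of annual global turnover or 20 million euros) can be sketched as a simple calculation; the function name and figures here are illustrative, not legal advice:

```python
def gdpr_max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Illustrative upper bound on a GDPR administrative fine:
    the greater of 4% of annual global turnover or 20 million EUR.
    (Actual fines are set case by case by supervisory authorities.)"""
    return max(0.04 * annual_global_turnover_eur, 20_000_000.0)

# A company with 1 billion EUR turnover: 4% is 40 million EUR,
# which exceeds the 20 million EUR floor.
print(gdpr_max_fine_eur(1_000_000_000))
# A smaller company with 100 million EUR turnover: 4% is only
# 4 million EUR, so the 20 million EUR figure applies instead.
print(gdpr_max_fine_eur(100_000_000))
```

This makes clear why the rule weighs especially heavily on large enterprises: above 500 million euros of turnover, the percentage-based figure dominates the fixed floor.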
From automated weapons to AI-powered drones, the defense sector has also witnessed several changes, but if an AI-driven autonomous weapon malfunctions, or if such weapons become an integral part of warfare, the result could be mass destruction. Human Rights Watch (HRW), part of a forum of 89 non-governmental organizations from different countries, has launched a campaign to stop killer robots. Some regulations must be implemented, not only to control the use of AI but also to make the future more secure.