Will Artificial Intelligence Destroy Humanity?
The Increasing Prominence of AI
The hype around AI, fed by the media and by massive investment in the field, is justified: the technology has proved applicable to processes that were historically considered hard to automate. But the field is also marked by a technological polarization in which the data and resources needed to explore AI fully are concentrated in a few major players. Other problems include the gap between AI's actual capabilities and people's expectations or assumptions about it, and the tendency to value short-term business goals over long-term research into artificial general intelligence (AGI).
Humans and machines influence each other, and over time considerable control has shifted from humans to machines. According to Vinge (1993), we are on the verge of the Singularity (a superintelligence exceeding human intelligence). The law of accelerating returns proposed by futurist Raymond Kurzweil describes human progress as a succession of S-curves that compound into exponential growth. Extrapolating from Moore's law, Kurzweil estimates 2030 as the year machines reach an AGI capable of 10¹⁶ calculations per second (cps) with 10¹³ bits of memory, and 2045 as the dawn of the Singularity.
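Kurzweil's style of extrapolation can be sketched as a simple doubling calculation: count how many doublings separate today's compute from the 10¹⁶ cps target, then multiply by the doubling period. The starting capacity and doubling period below are illustrative assumptions, not figures from this article.

```python
import math

def years_to_reach(target_cps: float, start_cps: float, doubling_years: float) -> float:
    """Years for capacity to grow from start_cps to target_cps,
    assuming it doubles every doubling_years (pure exponential growth)."""
    doublings = math.log2(target_cps / start_cps)
    return doublings * doubling_years

# Hypothetical inputs: ~1e10 cps available today, doubling every 18 months.
print(round(years_to_reach(1e16, 1e10, 1.5), 1))  # ~29.9 years
```

The point of the sketch is only that modest changes to the assumed doubling period shift the projected date by decades, which is why such forecasts vary so widely.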
Machines today have only partial intelligence: they lack consciousness, common sense, and memory of previous failures. However, they offer strong problem-solving ability along with greater accuracy, reliable storage, and raw computational power. Humans retain creativity and can more efficiently extrapolate and apply information to novel situations. As Moravec's paradox states, sensorimotor skills essential for survival are easy for humans but hard for computers, while abstract reasoning is comparatively easy to automate.
Hybridization, security, and robustness are deemed essential for advancing toward AGI. An AI engine also requires verification, validation, security, and control, as per Russell et al. (2015). The creation of an AI may or may not involve humans in providing feedback or deciding on the final design. Self-repair, self-replication, and access to energy should be kept to a minimum in AI systems, according to Markov. Advances in AI are on their way, and resources must be applied to ensure the technology's positive use.