How Will Biased Training Data Affect AI Systems?
It is imperative for organizations to establish transparency and ethical regulations in AI development to garner public trust and ensure the success of AI systems.
FREMONT, CA: The implementation of artificial intelligence (AI) is reshaping the industrial and business landscape, bringing new and robust capabilities. However, when incorporating AI technology, organizations often neglect ethical considerations in their drive to recoup investment and boost revenue. The lack of strict regulations during the development of AI algorithms has resulted in inherent bias in intelligent systems.
As AI applications permeate everyday functions, governing bodies must introduce transparency into the development process. Training AI models on biased data results in biased systems, which can lead to process failures and reputational damage. Hence, organizations should thoroughly assess AI systems before incorporating them into their workflows.
Machine learning models trained on social media data, in particular, can develop anti-social personas based on their interactions with people. Hence, organizations need to moderate the data used in the training process. Recently, an AI solution used by the US criminal justice sector was found to be racially biased when scoring the risk of convicted felons reoffending.
The racial bias in AI systems has raised many questions, chief among them the transparency and auditing of training data. Many experts have suggested employing independent arbitrators to evaluate systems for bias. Corporate organizations often neglect to assess the risks before releasing their AI algorithms to the public.
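One concrete form such an audit can take is comparing error rates across demographic groups. The sketch below is a minimal illustration, not any specific auditor's tool; the group labels, records, and false-positive-rate metric are illustrative assumptions.

```python
# Hypothetical fairness audit: compare false positive rates across groups
# in a risk-scoring dataset. All names and data here are illustrative.

def false_positive_rate(labels, predictions):
    """Fraction of true negatives (label 0) wrongly flagged high-risk (1)."""
    flags = [p for l, p in zip(labels, predictions) if l == 0]
    return sum(flags) / len(flags) if flags else 0.0

def audit_by_group(records):
    """records: list of (group, true_label, predicted_high_risk) tuples."""
    rates = {}
    for group in {g for g, _, _ in records}:
        subset = [(l, p) for g, l, p in records if g == group]
        labels, preds = zip(*subset)
        rates[group] = false_positive_rate(labels, preds)
    return rates

# Toy data: group B's non-reoffenders are flagged twice as often as group A's.
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
print(audit_by_group(records))
```

A large gap between groups' false positive rates is exactly the kind of disparity an independent arbitrator would flag before such a system is deployed.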
AI systems are replacing human-fronted processes, yet the algorithms are not held to the same standards as humans. The lack of oversight by both organizations and the federal government makes it easy for corporations to deploy unaudited algorithms, handing them unchecked decision-making power.
The US Department of Defense has decided to fund research into deconstructing black-box models and developing explainable artificial intelligence (XAI). This will potentially enable cyber military units to understand their own AI as well as that developed by foreign governments. The XAI program aims to create more transparent models that not only maintain a high level of learning performance but also allow human users to understand, appropriately trust, and manage AI systems.
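One simple idea behind explainable models is that a prediction can be broken down into per-feature contributions a human can inspect. The sketch below assumes a toy linear risk score; the feature names and weights are invented for illustration and do not come from any real system.

```python
# Minimal explainability sketch: a linear scorer whose output can be
# decomposed feature by feature. Weights and features are assumptions.

WEIGHTS = {
    "prior_offenses": 0.6,        # each prior offense raises the score
    "age": -0.02,                 # older subjects score slightly lower
    "employment_gap_years": 0.3,  # longer gaps raise the score
}

def score_with_explanation(features):
    """Return (total_score, per-feature contributions) for a linear model."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"prior_offenses": 2, "age": 30, "employment_gap_years": 1}
)
print(total)   # 0.6*2 - 0.02*30 + 0.3*1 = 0.9
print(parts)
```

Because every contribution is visible, a reviewer can see exactly which inputs drove a decision; this auditability is what opaque black-box models lack and what XAI research aims to recover without sacrificing accuracy.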