
Adversa AI Red Team Introduces Technology for Ethical Hacking of Facial Recognition Systems

The Adversa AI Red Team has performed a proof-of-concept attack on PimEyes, the most popular and advanced public face image search engine.
FREMONT, CA: Adversa AI, a trusted AI research leader, has presented a novel attack method against AI facial recognition applications. It causes an AI-driven facial recognition algorithm to misidentify people by introducing subtle alterations into images of human faces. Compared with previous approaches, the method is portable across AI models while also being far more precise, stealthy, and robust.
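To illustrate the general idea of a subtle, misidentification-inducing alteration, here is a minimal sketch of a targeted adversarial perturbation in the FGSM style (Goodfellow et al.). Adversa has not published the details of its method, so the tiny model, the random input tensor, and the parameters below are illustrative stand-ins, not the actual attack:

```python
# Minimal sketch of a targeted adversarial perturbation (FGSM-style).
# NOT Adversa's unpublished method: the model and "face" are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in face-recognition classifier: 3x64x64 image -> 10 identity logits.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

face = torch.rand(1, 3, 64, 64, requires_grad=True)  # placeholder image
target_identity = torch.tensor([7])  # identity the attacker wants to match

# Targeted FGSM step: move the image DOWN the loss gradient w.r.t. the
# target identity, so the model's prediction drifts toward that identity.
loss = nn.functional.cross_entropy(model(face), target_identity)
loss.backward()
epsilon = 0.03  # perturbation budget; small enough to be near-invisible
adv_face = (face - epsilon * face.grad.sign()).clamp(0, 1).detach()

print("clean prediction:", model(face).argmax(1).item())
print("adversarial prediction:", model(adv_face).argmax(1).item())
```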
As a proof of concept, the Adversa AI Red Team attacked PimEyes, which is similar to Clearview, a commercial facial recognition database sold to law enforcement and governments. PimEyes was duped: it mistook the CEO of Adversa for Elon Musk in the photo.
The attack is distinctive because it is a black-box attack, created without detailed knowledge of the search engine's algorithms. As a result, the vulnerability is likely exploitable against a variety of facial recognition engines. Because the attack allows malefactors to disguise themselves in numerous ways, we've named it Adversarial Octopus, a reference to the animal's stealth, precision, and adaptability.
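The transfer property behind such black-box attacks can be sketched in a few lines: the perturbation is crafted against a local surrogate model and then simply replayed against a target whose gradients and internals the attacker never sees. Both models below are hypothetical stand-ins, not PimEyes or Clearview internals:

```python
# Sketch of transferability, the property that makes black-box attacks
# possible: craft on a surrogate you control, replay on a query-only target.
import torch
import torch.nn as nn

def make_model(seed: int) -> nn.Module:
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    ).eval()

surrogate = make_model(seed=0)  # local model the attacker fully controls
black_box = make_model(seed=1)  # target: query-only, gradients unavailable

face = torch.rand(1, 3, 64, 64, requires_grad=True)
target_identity = torch.tensor([7])

# The perturbation is crafted on the surrogate only; the black box is
# never differentiated, matching the black-box threat model.
loss = nn.functional.cross_entropy(surrogate(face), target_identity)
loss.backward()
adv_face = (face - 0.03 * face.grad.sign()).clamp(0, 1).detach()

# Replay the adversarial image against the black box and check whether
# the misidentification carries over.
print("black-box prediction:", black_box(adv_face).argmax(1).item())
```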
The existence of such flaws in AI systems, particularly facial recognition engines, could have disastrous consequences; they could be exploited in poisoning and evasion scenarios such as the following:
- Hacktivists could wreak havoc on AI-powered internet platforms that use facial attributes as input for decisions or further training. By changing their profile images, attackers could poison or bypass the algorithms of large Internet corporations.
- Cybercriminals could steal identities and evade AI-driven biometric authentication or identity verification systems in banks, trading platforms, and other services that offer verified remote assistance. Wherever classic deepfakes are useful, this attack can be even subtler.
- Dissidents could use it to conceal their online activity on social media from law enforcement. In the virtual world we now inhabit, it serves as a mask or a fake ID.
Adversa AI recently produced the world's first analytical report on a decade of growing activity in the field of Secure and Trusted AI. We built our technique for testing facial recognition systems against such attacks in response to demand for practical ways of securing AI systems against advanced adversarial assaults. We are looking for early adopters and forward-thinking technology businesses to collaborate with us on integrating adversarial testing into their SDLC and MLLC processes, improving trust in their AI applications and delivering best-of-breed solutions to their customers.