Preventing The Hacked AI Apocalypse

Adversarial attacks are an increasingly worrisome threat to the performance of artificial intelligence applications.

If an attacker can introduce nearly invisible alterations to image, video, speech, and other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.

Imagine how such attacks could undermine AI-powered autonomous vehicles’ ability to recognise obstacles, content filters’ effectiveness in blocking disturbing images, or access systems’ ability to deter unauthorised entry.

Some people argue that adversarial threats stem from deep flaws in the neural net technology that powers today’s AI. After all, it’s well-understood that many machine learning algorithms are vulnerable to adversarial attacks.

However, you could just as easily argue that this problem calls attention to weaknesses in enterprise processes for building, training, deploying, and evaluating AI models.

None of these issues are news to AI experts. There is even a Kaggle competition running right now that focuses on fending off adversarial AI.

It’s true that the AI community lacks any clear consensus on best practices for building anti-adversarial defenses into deep neural networks. But from what I see in the research literature and industry discussions, the core approaches from which such a framework will emerge are already crystallising.

Going forward, AI developers will need to follow these guidelines to build anti-adversarial protections into their applications:

Assume the possibility of adversarial attacks on all in-production AI assets

As AI is deployed everywhere, developers need to assume that their applications will be high-profile sitting ducks for adversarial manipulation.

AI exists to automate cognition, perception, and other behaviors that, if they produce desirable results, might merit the praise one normally associates with “intelligence.”

However, AI’s adversarial vulnerabilities might result in cognition, perception, and other behaviors far worse than any normal human being would have exhibited under the circumstances.

Perform an adversarial risk assessment prior to initiating AI development

Upfront and throughout the life cycle of their AI apps, developers should frankly assess their projects’ vulnerability to adversarial attacks.

As noted in a 2015 research paper published by the IEEE, developers should weigh the possibility of unauthorised parties gaining direct access to key elements of the AI project, including the neural net architecture, training data, hyper-parameters, learning methodology, and loss function being used.

Alternatively, the paper shows, an attacker might be able to collect a surrogate dataset from the same source or distribution as the training data used to optimize an AI neural net model. This could provide the adversary with insights into what type of ersatz input data might fool a classifier model that was built with the targeted deep neural net.

In another attack approach described by the paper, even when the adversary lacks direct visibility into the targeted neural net and associated training data, attackers could exploit tactics that let them observe “the relationship between changes in inputs and outputs … to adaptively craft adversarial samples.”
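
To make these tactics concrete, the following sketch in PyTorch walks through the surrogate-model flow: label a stand-in dataset by querying the target as a black box, fit a local substitute, then craft gradient-sign perturbations on the substitute in the hope that they transfer. The target_predict oracle, the surrogate model, and all parameter values here are illustrative assumptions, not details drawn from the paper.

import torch
import torch.nn.functional as F

def label_with_oracle(target_predict, inputs):
    # Observe the target's input-output behaviour; keep only its hard decisions.
    with torch.no_grad():
        return target_predict(inputs).argmax(dim=1)

def train_surrogate(surrogate, inputs, oracle_labels, epochs=10, lr=1e-3):
    # Fit a local substitute model that mimics the black-box target.
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(surrogate(inputs), oracle_labels)
        loss.backward()
        opt.step()
    return surrogate

def craft_transfer_examples(surrogate, x, y, eps=0.03):
    # Gradient-sign perturbations built on the surrogate often transfer to the target.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()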

Generate adversarial examples as a standard activity in the AI training pipeline

AI developers should immerse themselves in the growing body of research on the many ways in which subtle adversarial alterations may be introduced.

Data scientists should avail themselves of the growing range of open source tools for generating adversarial examples to test the vulnerability of convolutional neural networks (CNNs) and other AI models. More broadly, developers should consider the growing body of basic research on adversarial AI, including studies that aren’t directly focused on fending off cybersecurity attacks.
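
As a concrete illustration of what such tools automate (CleverHans and Foolbox are two of the better-known open source options), here is a minimal fast-gradient-sign sketch in PyTorch for stress-testing a classifier in the training pipeline. The model, the data batch, and the epsilon value are assumptions for illustration only.

import torch
import torch.nn.functional as F

def fgsm_examples(model, images, labels, eps=8 / 255):
    # One signed-gradient step per pixel: a nearly invisible change overall.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0.0, 1.0).detach()

def fooling_rate(model, images, labels, eps=8 / 255):
    # Fraction of correctly classified inputs whose prediction flips under attack.
    adv = fgsm_examples(model, images, labels, eps)
    with torch.no_grad():
        clean_ok = model(images).argmax(dim=1) == labels
        adv_ok = model(adv).argmax(dim=1) == labels
    flipped = (clean_ok & ~adv_ok).float().sum().item()
    return flipped / max(clean_ok.float().sum().item(), 1.0)

Tracking a metric like this fooling rate on every training run turns adversarial robustness into a routine regression test rather than an afterthought.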

Recognise the need to rely on both human curators and algorithmic discriminators of adversarial examples

The effectiveness of an adversarial attack depends on its ability to fool your AI apps’ last line of defense.

Adversarial manipulation of an image might be obvious to the naked eye but still somehow fool a CNN into misclassifying it. Conversely, a different manipulation might be too subtle for a human curator to detect, yet a well-trained discriminator algorithm in a generative adversarial network (GAN) may be able to pick it out without difficulty.
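
One way to combine the two lines of defense is to let an algorithmic discriminator score incoming inputs and route only the borderline cases to a human curator. The sketch below is a hedged illustration of that division of labour in PyTorch; the tiny architecture, the thresholds, and the assumption that you already have labeled clean and adversarial samples to train on are all illustrative choices, not prescriptions from the literature.

import torch
import torch.nn as nn

class AdversarialDiscriminator(nn.Module):
    # Tiny CNN that scores an input image as clean (low) or adversarial (high).
    def __init__(self, channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # raw logit per image

def triage(disc, batch, block_at=0.9, review_at=0.5):
    # Block high-confidence detections; send borderline scores to a human curator.
    probs = torch.sigmoid(disc(batch)).squeeze(1)
    return probs > block_at, (probs > review_at) & (probs <= block_at)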

Build ensemble models that use a range of AI algorithms for detecting adversarial examples

Some algorithms may be more sensitive than others to the presence of adversary-tampered images and other data objects. For example, researchers have identified scenarios in which a shallow classifier algorithm detects adversarial images better than a deeper-layered CNN. They have also found that some algorithms are best suited for detecting manipulations across an entire image, while others may be better at finding subtle fabrications in one small section of an image.
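
A hedged sketch of how such an ensemble might be wired together, assuming each member exposes the same scoring interface as the discriminator above and votes independently; the voting threshold is an illustrative choice:

import torch

def ensemble_flags(detectors, batch, min_votes=2):
    # Flag an input as adversarial when at least min_votes member models agree.
    votes = torch.zeros(batch.size(0))
    with torch.no_grad():
        for det in detectors:
            probs = torch.sigmoid(det(batch)).squeeze(1)
            votes += (probs > 0.5).float()
    return votes >= min_votes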

One approach for immunising CNNs against these attacks might be to add what researcher Arild Nøkland calls an “adversarial gradient” to the back-propagation of weights during an AI model’s training process. It would be prudent for data science teams to test the relative adversary-detection advantages of different algorithms using ongoing A/B testing in both development and production environments.
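
The following PyTorch sketch shows the generic version of that first idea: take the sign of the input gradient on each batch, perturb the batch along it, and update the weights against the perturbed batch. This is a standard adversarial-training step offered as an approximation, not Nøkland’s exact recipe.

import torch
import torch.nn.functional as F

def adversarial_training_step(model, opt, images, labels, eps=0.01):
    # First pass: compute the input gradient on the clean batch.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = (images + eps * images.grad.sign()).clamp(0.0, 1.0).detach()
    # Second pass: update the weights against the perturbed batch.
    opt.zero_grad()
    adv_loss = F.cross_entropy(model(adv), labels)
    adv_loss.backward()
    opt.step()
    return adv_loss.item()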

Reuse adversarial-defense knowledge to improve AI resilience against bogus input examples

As noted in a 2016 research paper published by the IEEE, data scientists can use transfer-learning techniques to reduce the sensitivity of a CNN or other model to adversarial alterations in input images.

Whereas traditional transfer learning involves applying statistical knowledge from an existing model to a different one, the paper discusses how a model’s existing knowledge, gained through training on a valid data set, might be “distilled” to spot adversarial alterations.

According to the authors, “we use defensive distillation to smooth the model learned by a DNN architecture during training by helping the model generalize better to samples outside of its training dataset.”

The result is that a model should be better able to recognise the difference between non-adversarial examples (those that resemble examples in its training set) and adversarial examples (those that may deviate significantly from those in its training set).
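
A minimal sketch of the distillation step the paper describes, assuming a teacher network already trained in PyTorch and a student of the same architecture; the temperature value and training details are illustrative assumptions:

import torch
import torch.nn.functional as F

def distillation_step(student, teacher, opt, images, T=20.0):
    # Soften the trained teacher's outputs with temperature T.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(images) / T, dim=1)
    opt.zero_grad()
    log_probs = F.log_softmax(student(images) / T, dim=1)
    # Cross-entropy against soft targets; T*T rescales the gradient magnitude.
    loss = -(soft_targets * log_probs).sum(dim=1).mean() * (T * T)
    loss.backward()
    opt.step()
    return loss.item()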

Without these practices as a standard part of their methodology, data scientists might inadvertently bake automated algorithmic gullibility into their neural networks.

As our lives increasingly rely on AI to do the smart thing in all circumstances, these adversarial vulnerabilities might prove catastrophic. That’s why it’s essential that data scientists and AI developers put in place suitable safeguards to govern how AI apps are developed, trained, and managed.

Infoworld
