What The US’s Foggy AI Regulations Mean For Today’s Cyber Compliance

Brought to you by Renelis Mulyandari    

Tech leaders are uneasy that the US government is failing to take the initiative in the artificial intelligence arms race, thanks to its laissez-faire approach to regulation.

Whereas jurisdictions like the EU and China are busy introducing more robust rules for AI development and stiff penalties for those who breach them, the US seems more inclined to let AI developers do their own thing, and it’s a growing concern for the vast majority of businesses.

Evidence of this comes from a recent Harris Poll survey in collaboration with Collibra, which shows an alarming lack of trust in the US government’s attitude towards AI regulation. A staggering 99% of the data management, privacy and AI specialists surveyed in the poll said they’re concerned about potential threats arising from AI that necessitate regulation.

“Without regulations, the US will lose the AI race long term,” Collibra’s co-founder and Chief Executive Felix Van de Maele said. “While AI innovation continues to advance rapidly, the lack of a regulatory framework puts content owners at risk, and ultimately will hinder the adoption of AI.”

According to the study, 84% of respondents would like to see the US government update its copyright laws to protect content creators from having their work stolen by AI, while 81% want to see laws in place that force AI companies to compensate individuals for using their data to train their AI algorithms.

But it’s not just data privacy and copyright protection at issue here, with 64% of survey respondents also citing the need for AI regulation to prevent security risks and increase safety. For instance, AI can be used to create and manage massive botnets to carry out automated distributed denial of service attacks, or sophisticated fraud campaigns. AI can potentially usher in a new breed of malware and ransomware that’s able to evolve on the fly to evade detection and mitigation. There have also been reported incidents of AI chatbots leaking sensitive data.

Too Much Emphasis On Innovation

To date, the US government’s response to the demand for AI regulation has been less than reassuring, reflecting the country’s long-held emphasis on innovation, which traditionally comes at the expense of rigid rules and frameworks.

For one thing, the regulatory requirements today are confusing, with various competing initiatives announced, including President Joe Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights,” and the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework.” While these initiatives present different types of guidelines, one common theme is that they’re all focused on the responsible development of AI, emphasizing self-regulation and voluntary compliance.

Arik Solomon, co-founder and CEO of the cyber risk and compliance automation company Cypago, argues that the US needs to strike a balance that gives companies enough room to innovate, while putting concrete rules in place so that everyone is protected and knows what controls are needed to remain compliant over time.

“Regulating AI is both necessary and inevitable to ensure ethical and responsible use,” Solomon told MinuteHack. “While this may introduce complexities, it need not hinder innovation. By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”

But the US has so far struggled to strike this tricky balance. In practice, it has devolved the issue to individual states, which only adds further confusion, as evidenced by California’s hotly debated AI bill. The bill initially proposed a heavier-handed approach to AI regulation, but it faced intense opposition from big companies like Meta Platforms and Google, which denounced it for “stifling innovation,” and it was eventually vetoed by Governor Gavin Newsom in September.

The US approach is in stark contrast to the path carved out by the governments of the EU and China, which have laid out clear-cut, binding rules in their own policies, complete with stiff penalties to enforce them. The EU’s AI Act is more focused on ensuring transparency and protections for users and content creators, while China has announced several policies geared towards ensuring the state has robust control over both the data and the AI models that arise from it.

For instance, the AI Act clearly outlines four major risk categories for AI systems – namely, minimal risk, limited risk, high risk and unacceptable risk. It assigns AI applications to these categories, with progressively stricter obligations at each level. A generative AI chatbot like ChatGPT falls under limited risk, subject mainly to transparency obligations, while a system designed for subliminal manipulation to try and sway elections is deemed unacceptable and banned outright.
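The Act’s tiered structure lends itself to a simple lookup. The sketch below shows how a compliance team might encode the four tiers in Python; the use-case names and their classifications are illustrative assumptions for this sketch, not an official EU taxonomy or legal advice.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The four risk tiers defined by the EU AI Act, ordered by severity."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical mapping of internal use cases to tiers, for illustration only.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,              # transparency obligations
    "cv_screening": RiskTier.HIGH,                     # conformity assessment
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,  # banned outright
}

def is_deployable(use_case: str) -> bool:
    """A system may be deployed only if it is below the unacceptable tier."""
    return USE_CASE_TIERS[use_case] < RiskTier.UNACCEPTABLE
```

Encoding the tiers as an ordered enum lets the gating check be a single comparison, and adding a new use case forces an explicit classification decision.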

Going It Alone

In light of AI’s staggering pace of development and the lack of any real regulation in the industry, US companies have little option but to define their own regulatory standards. As a starting point, businesses need to think about compliance. They can look at existing frameworks that regulate and govern AI development, and use these as the basis of their own AI governance, ensuring that they’ll be more or less in line with global standards.

An example of this kind of framework might be the November 2023 Bletchley Declaration, which was agreed upon by 29 countries during the first-ever global summit on AI safety. Signatories included the US, China, Australia, Germany and the UK.

In a nutshell, the Bletchley Declaration aims to balance the need for innovation with the implementation of guardrails to mitigate the risks posed by AI, and it provides a solid roadmap for US businesses to follow.

Striking A Balance

To strike a balance between AI innovation and safety, compliance provides a good starting point. Compliance is key to cybersecurity, and it can form a strong barrier against AI-based threats. Traditional governance frameworks provide structured guidelines that allow companies to align security practices with their business objectives. As such, they can form a custom roadmap for AI regulation, enabling companies to identify threats and create strategies to mitigate them.
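As a concrete illustration of turning a governance framework into such a roadmap, a team might keep a lightweight risk register that maps each identified AI threat to the mitigations the organization commits to. The entries below are hypothetical examples, not a prescribed catalogue.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One entry in an AI risk register: a threat, its assessed severity,
    and the mitigation strategies assigned to it."""
    threat: str
    severity: str                       # e.g. "low", "medium", "high"
    mitigations: list[str] = field(default_factory=list)

    @property
    def mitigated(self) -> bool:
        # A risk counts as addressed once at least one control is assigned.
        return len(self.mitigations) > 0

# Hypothetical register entries; threats and controls are illustrative only.
register = [
    AIRisk("chatbot leaks sensitive data", "high",
           ["output filtering", "regular red-team audits"]),
    AIRisk("training data violates copyright", "medium",
           ["data provenance tracking"]),
    AIRisk("model drift breaks compliance", "medium", []),
]

# The open items are the roadmap: threats still awaiting a mitigation plan.
open_items = [r.threat for r in register if not r.mitigated]
```

Reviewing `open_items` on a fixed cadence is one simple way to keep the threat-identification and mitigation cycle the frameworks call for from going stale.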

“AI can't function as a black box when compliance is involved,” noted Kannan Venkatraman, GenAI Services Exec and CTO at Capgemini. “At several organizations, I developed governance frameworks that foster communication across departments, regularly auditing AI’s outputs to ensure alignment with privacy and compliance policies. Finance and HR teams now co-design AI systems with legal and compliance experts, ensuring transparency and traceability.”

Compliance can be combined with a basic set of principles, prioritizing fairness, accountability, privacy and transparency, which are used to guide all decisions regarding AI development.

At the same time, companies need to focus on implementing processes and tools to detect and mitigate AI bias, plus regular audits to ensure their systems are not discriminating against certain groups, vulnerable to misuse, or leaking sensitive data.
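One common audit check for the kind of discrimination described above is the “four-fifths” disparate-impact heuristic: compare the rate of favorable model decisions across groups and flag any ratio below 0.8. A minimal sketch, using hypothetical audit data:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favorable (1) outcomes in a group's audit sample."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 fail the common 'four-fifths' heuristic."""
    lo, hi = sorted((selection_rate(group_a), selection_rate(group_b)))
    return lo / hi

# Hypothetical audit data: 1 = favorable model decision, 0 = unfavorable.
group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 80% favorable
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # 40% favorable

ratio = disparate_impact(group_a, group_b)
passes_four_fifths = ratio >= 0.8
```

A failing ratio doesn’t prove discrimination on its own, but it is a cheap, repeatable signal that can trigger the deeper review these audits are meant to provide.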

In addition, organizations must emphasize a user-centric approach to AI design that respects users’ privacy and personal preferences, while communicating to individuals what data will be collected and how that information will be used. AI teams should also adopt a flexible, adaptive approach that allows them to adjust to evolving ethical standards and technological advances.

Finally, businesses must collaborate with regulatory bodies and other organizations to ensure they remain up to date with evolving AI regulations. By doing this, they’ll have the opportunity to actively participate in the conversation and play a role in the creation of regulations governing AI development.

Responsible AI Wins the Race

By taking the initiative on responsible AI development and adopting a flexible, transparent and user-centric approach, companies will be able to benefit from the incredible pace of innovation while staying on the right side of any regulatory requirements.

They’ll be able to minimize AI security risks, protect users and content creators, and encourage the development of responsible and trusted AI systems without sacrificing their ability to innovate.

Image: Greggory DiSalvo
