What The US’s Foggy AI Regulations Mean For Today’s Cyber Compliance

Brought to you by Renelis Mulyandari    

Tech leaders are uneasy that the US government is failing to take the initiative in the artificial intelligence arms race, owing to its laissez-faire approach to regulation.

Whereas jurisdictions like the EU and China are busy introducing more robust rules for AI development and stiff penalties for those who breach them, the US seems more inclined to let AI developers do their own thing, and it’s a growing concern for the vast majority of businesses.

Evidence of this comes from a recent Harris Poll survey in collaboration with Collibra, which shows an alarming lack of trust in the US government’s attitude towards AI regulation. A staggering 99% of the data management, privacy and AI specialists surveyed in the poll said they’re concerned about potential threats arising from AI that necessitate regulation.

“Without regulations, the US will lose the AI race long term,” Collibra’s co-founder and Chief Executive Felix Van de Maele said. “While AI innovation continues to advance rapidly, the lack of a regulatory framework puts content owners at risk, and ultimately will hinder the adoption of AI.”

According to the study, 84% of respondents would like to see the US government update its copyright laws to protect content creators from having their work stolen by AI, while 81% want to see laws in place that force AI companies to compensate individuals for using their data to train their AI algorithms.

But it’s not just data privacy and copyright protection at issue here, with 64% of survey respondents also citing the need for AI regulation to prevent security risks and increase safety. For instance, AI can be used to create and manage massive botnets that carry out automated distributed denial-of-service attacks or sophisticated fraud campaigns. AI could also usher in a new breed of malware and ransomware that’s able to evolve on the fly to evade detection and mitigation. There have also been reported incidents of AI chatbots leaking sensitive data.

Too Much Emphasis On Innovation

To date, the US government’s response to the demand for AI regulation has been less than reassuring, reflecting the country’s long-held emphasis on innovation, which traditionally comes at the expense of rigid rules and frameworks.

For one thing, the regulatory requirements today are confusing, with various competing initiatives announced, including President Joe Biden's Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights,” and the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework.” While these initiatives present different types of guidelines, one common theme is that they’re all focused on the responsible development of AI, emphasizing self-regulation and voluntary compliance.

Arik Solomon, co-founder and CEO of the cyber risk and compliance automation company Cypago, argues that the US needs to strike a balance that gives companies enough room to innovate, while putting concrete rules in place so that everyone is protected and knows what controls are needed to remain compliant over time.

“Regulating AI is both necessary and inevitable to ensure ethical and responsible use,” Solomon told MinuteHack. “While this may introduce complexities, it need not hinder innovation. By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”

But the US has so far struggled to strike this tricky balance. In practice, it is devolving the issue to individual states, which only adds further confusion, as evidenced by California’s own AI bill, which has been subject to intense debate. The bill initially proposed a heavier-handed approach to AI regulation, but it faced intense opposition from big companies like Meta Platforms and Google, which denounced it for “stifling innovation,” and it was eventually vetoed by Governor Gavin Newsom in September.

The US approach is in stark contrast to the path carved out by the governments of the EU and China, which have laid out clear-cut, binding rules in their own policies, complete with stiff penalties to enforce them. The EU’s AI Act is more focused on ensuring transparency and protections for users and content creators, while China has announced several policies geared towards ensuring the state has robust control over both the data and the AI models that arise from it.

For instance, the AI Act outlines four risk categories for AI – namely, minimal risk, limited risk, high risk and unacceptable risk – and sorts AI applications into them, with obligations that escalate at each level. A generative AI chatbot like ChatGPT falls into the limited-risk category, subject mainly to transparency obligations, while a system designed for subliminal manipulation to sway elections is deemed unacceptable and banned outright.
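To see how a compliance team might put this tiered taxonomy to work internally, here is a minimal sketch that encodes the four tiers and triages hypothetical use cases. The use-case names and tier assignments are illustrative assumptions, not legal determinations drawn from the Act itself.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    MINIMAL = 1       # e.g. spam filters: essentially unregulated
    LIMITED = 2       # e.g. chatbots: transparency obligations
    HIGH = 3          # e.g. hiring or credit scoring: strict controls
    UNACCEPTABLE = 4  # e.g. subliminal manipulation: banned outright

# Hypothetical mapping of internal AI use cases to risk tiers.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,
    "subliminal_ads": RiskTier.UNACCEPTABLE,
}

def is_deployable(use_case: str) -> bool:
    """Anything below the unacceptable tier may be deployed,
    subject to the obligations attached to its tier."""
    return USE_CASE_TIERS[use_case] < RiskTier.UNACCEPTABLE
```

Even a toy register like this forces the question the Act asks: which tier does each system fall into, and what obligations follow from that?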

Going It Alone

In light of AI’s staggering pace of development and the lack of any real regulation in the industry, US companies have little option but to define their own regulatory standards. As a starting point, businesses need to think about compliance. They can look at existing frameworks that regulate and govern AI development, and use these as the basis of their own AI governance, ensuring that they’ll be more or less in line with global standards.

An example of this kind of framework might be the November 2023 Bletchley Declaration, which was agreed upon by 29 countries during the first ever global summit on AI safety. Signatories included the US, China, Australia, Germany and the UK.

In a nutshell, the Bletchley Declaration aims to balance the need for innovation with the implementation of guardrails to mitigate the risks posed by AI, and it provides a solid roadmap for US businesses to follow.

Striking A Balance

To strike a balance between AI innovation and safety, compliance provides a good starting point. Compliance is key to cybersecurity, and it can form a strong barrier against AI-based threats. Traditional governance frameworks provide structured guidelines that allow companies to align security practices with their business objectives. As such, they can form a custom roadmap for AI regulation, enabling companies to identify threats and create strategies to mitigate them.

“AI can't function as a black box when compliance is involved,” noted Kannan Venkatraman, GenAI Services Exec and CTO at Capgemini. “At several organizations, I developed governance frameworks that foster communication across departments, regularly auditing AI’s outputs to ensure alignment with privacy and compliance policies. Finance and HR teams now co-design AI systems with legal and compliance experts, ensuring transparency and traceability.”
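The output auditing Venkatraman describes can be as simple as screening generated text before it leaves the system. The sketch below is a hypothetical, minimal version of such a gate: a regex scan for obvious PII patterns. The pattern set and blocking policy are assumptions for illustration; real deployments would rely on a dedicated PII-detection service with far broader coverage.

```python
import re

# Illustrative patterns for common PII categories (assumed, not exhaustive).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def audit_output(text: str) -> list[str]:
    """Return the list of PII categories detected in a model response."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def release(text: str) -> str:
    """Block responses that fail the audit; release the rest."""
    findings = audit_output(text)
    if findings:
        raise ValueError(f"Blocked: possible PII ({', '.join(findings)})")
    return text
```

The point is less the regexes than the control flow: every AI output passes through an auditable checkpoint that compliance teams can inspect and tune.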

Compliance can be combined with a basic set of principles to follow, with a priority on fairness, accountability, privacy and transparency, which are used to guide all decisions regarding AI development.

At the same time, companies need to focus on implementing processes and tools to detect and mitigate AI bias, plus regular audits to ensure their systems are not discriminating against certain groups, vulnerable to misuse, or leaking sensitive data.
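One common bias-detection technique such audits can use is a demographic-parity check: comparing favourable-outcome rates across groups and flagging the model when the gap grows too wide. This is a simplified sketch under assumed names and an arbitrary tolerance, not a complete fairness methodology.

```python
from collections import defaultdict

def demographic_parity_gap(records) -> float:
    """Largest difference in favourable-outcome rates between any
    two groups. records: iterable of (group, outcome) pairs, where
    outcome is True for a favourable decision."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

def audit_passes(records, tolerance: float = 0.1) -> bool:
    """Flag the system for human review if the gap exceeds tolerance."""
    return demographic_parity_gap(records) <= tolerance
```

Run on a regular schedule against production decisions, a check like this turns the abstract principle of fairness into a measurable, auditable signal.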

In addition, organizations must emphasize a user-centric approach to AI design that respects users’ privacy and personal preferences, while communicating to individuals what data will be collected and how that information will be used. AI teams should also adopt a flexible, adaptive approach that allows them to adjust to evolving ethical standards and technological advances.

Finally, businesses must collaborate with regulatory bodies and other organizations to ensure they remain up to date with evolving AI regulations. By doing this, they’ll have the opportunity to actively participate in the conversation and play a role in the creation of regulations governing AI development.

Responsible AI Wins the Race

By taking the initiative on responsible AI development and adopting a flexible, transparent and user-centric approach, companies will be able to benefit from the incredible pace of innovation while staying on the right side of any regulatory requirements.

They’ll be able to minimize AI security risks, protect users and content creators, and encourage the development of responsible and trusted AI systems without sacrificing their ability to innovate.

Image: Greggory DiSalvo

