What The US’s Foggy AI Regulations Mean For Today’s Cyber Compliance

Brought to you by Renelis Mulyandari    

Tech leaders are uneasy that the US government is failing to take the initiative in the artificial intelligence arms race, owing to its laissez-faire approach to regulation.

Whereas jurisdictions like the EU and China are busy introducing more robust rules for AI development and stiff penalties for those who breach them, the US seems more inclined to let AI developers do their own thing, and it’s a growing concern for the vast majority of businesses.

Evidence of this comes from a recent Harris Poll survey in collaboration with Collibra, which shows an alarming lack of trust in the US government’s attitude towards AI regulation. A staggering 99% of the data management, privacy and AI specialists surveyed in the poll said they’re concerned about potential threats arising from AI that necessitate regulation.

“Without regulations, the US will lose the AI race long term,” Collibra’s co-founder and Chief Executive Felix Van de Maele said. “While AI innovation continues to advance rapidly, the lack of a regulatory framework puts content owners at risk, and ultimately will hinder the adoption of AI.”

According to the study, 84% of respondents would like to see the US government update its copyright laws to protect content creators from having their work stolen by AI, while 81% want to see laws in place that force AI companies to compensate individuals for using their data to train their AI algorithms.

But it’s not just data privacy and copyright protection at issue here, with 64% of survey respondents also citing the need for AI regulation to prevent security risks and increase safety. For instance, AI can be used to create and manage massive botnets that carry out automated distributed denial-of-service attacks or sophisticated fraud campaigns. AI could also usher in a new breed of malware and ransomware that evolves on the fly to evade detection and mitigation. There have also been reported incidents of AI chatbots leaking sensitive data.

Too Much Emphasis On Innovation

To date, the US government’s response to the demand for AI regulation has been less than reassuring, reflecting the country’s long-held emphasis on innovation, which traditionally comes at the expense of rigid rules and frameworks.

For one thing, today’s regulatory requirements are confusing, with various competing initiatives announced, including President Joe Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights,” and the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework.” While these initiatives offer different kinds of guidelines, the common theme is that they all focus on the responsible development of AI, emphasizing self-regulation and voluntary compliance.

Arik Solomon, co-founder and CEO of the cyber risk and compliance automation company Cypago, argues that the US needs to strike a balance that gives companies enough room to innovate, while putting concrete rules in place so that everyone is protected and knows what controls are needed to remain compliant over time.

“Regulating AI is both necessary and inevitable to ensure ethical and responsible use,” Solomon told MinuteHack. “While this may introduce complexities, it need not hinder innovation. By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”

But the US has so far struggled to strike this tricky balance. In practice, it has devolved the issue to individual states, which only adds to the confusion. California’s own AI bill, the subject of intense debate, illustrates the point: it initially proposed a heavier-handed approach to regulation, then faced fierce opposition from big companies like Meta Platforms and Google, which denounced it for “stifling innovation,” and was ultimately vetoed by Governor Gavin Newsom in September.

The US approach is in stark contrast to the path carved out by the governments of the EU and China, which have laid out clear-cut, binding rules in their own policies, complete with stiff penalties to enforce them. The EU’s AI Act is more focused on ensuring transparency and protections for users and content creators, while China has announced several policies geared towards ensuring the state has robust control over both the data and the AI models that arise from it.

For instance, the AI Act clearly outlines four major risk categories in AI – namely, minimal risk, limited risk, high risk and unacceptable risk. It assigns various AI applications to each category, with increasing obligations and prohibitions at each level. A generative AI chatbot like ChatGPT falls into the limited-risk category, subject mainly to transparency obligations, while a system designed for subliminal manipulation – to sway elections, for example – is deemed unacceptable and banned outright.
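The tiered structure described above can be sketched as a simple lookup. Note that the use-case assignments and obligation summaries below are illustrative assumptions for the sake of the example, not legal guidance drawn from the Act itself:

```python
# Toy sketch of the EU AI Act's four risk tiers. The example use cases and
# the obligation summaries are simplified illustrations, not legal advice.
USE_CASE_TIERS = {
    "spam_filter": "minimal",                    # hypothetical example
    "customer_chatbot": "limited",               # transparency duties apply
    "cv_screening": "high",                      # strict conformity rules
    "subliminal_manipulation": "unacceptable",   # banned outright
}

OBLIGATIONS = {
    "minimal": "no specific obligations beyond existing law",
    "limited": "transparency duties, e.g. disclose that users face an AI",
    "high": "risk management, logging, human oversight, conformity assessment",
    "unacceptable": "prohibited: do not deploy",
}

def obligations(use_case: str) -> str:
    """Return a rough obligation summary for a use case's risk tier."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return "unknown: assess the use case before deployment"
    return OBLIGATIONS[tier]
```

A company borrowing the Act as an internal yardstick could start by triaging its own AI use cases through a table like this, escalating anything that lands in the high or unacceptable tiers for legal review.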

Going It Alone

In light of AI’s staggering pace of development and the lack of any real regulation in the industry, US companies have little option but to define their own regulatory standards. As a starting point, businesses need to think about compliance. They can look at existing frameworks that regulate and govern AI development, and use these as the basis of their own AI governance, ensuring that they’ll be more or less in line with global standards.

An example of this kind of framework might be the November 2023 Bletchley Declaration, which was agreed upon by 29 countries during the first ever global summit on AI safety. Signatories included the US, China, Australia, Germany and the UK.

In a nutshell, the Bletchley Declaration aims to balance the need for innovation with the implementation of guardrails to mitigate the risks posed by AI, and it provides a solid roadmap for US businesses to follow.

Striking A Balance

To strike a balance between AI innovation and safety, compliance provides a good starting point. Compliance is key to cybersecurity, and it can form a strong barrier against AI-based threats. Traditional governance frameworks provide structured guidelines that allow companies to align security practices with their business objectives. As such, they can form a custom roadmap for AI regulation, enabling companies to identify threats and create strategies to mitigate them.

“AI can't function as a black box when compliance is involved,” noted Kannan Venkatraman, GenAI Services Exec and CTO at Capgemini. “At several organizations, I developed governance frameworks that foster communication across departments, regularly auditing AI’s outputs to ensure alignment with privacy and compliance policies. Finance and HR teams now co-design AI systems with legal and compliance experts, ensuring transparency and traceability.”

Compliance can be combined with a basic set of principles to follow, with a priority on fairness, accountability, privacy and transparency, which are used to guide all decisions regarding AI development.

At the same time, companies need to focus on implementing processes and tools to detect and mitigate AI bias, plus regular audits to ensure their systems are not discriminating against certain groups, vulnerable to misuse, or leaking sensitive data.
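One concrete audit check of the kind described above is demographic parity: comparing a system's approval rates across groups and flagging gaps above a tolerance. This is a minimal sketch; the function names and the 0.1 threshold are my own illustrative assumptions, not a standard from the article:

```python
# Minimal bias-audit sketch: demographic parity difference.
# The 0.1 threshold below is an illustrative assumption, not a legal standard.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved: bool) pairs.
    Returns the largest difference in approval rates between any two groups."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def passes_audit(decisions, threshold=0.1):
    """Flag the system for review if approval rates diverge past the threshold."""
    return demographic_parity_gap(decisions) <= threshold
```

Run periodically against production decisions, a check like this turns the vague mandate of "regular audits" into a measurable gate, with failures routed to the compliance team for investigation.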

In addition, organizations must emphasize a user-centric approach to AI design that respects users’ privacy and personal preferences, while communicating to individuals what data they will collect and how that information will be used. AI teams should also adopt a flexible, adaptive approach that allows them to adjust to evolving ethical standards and technological advances.

Finally, businesses must collaborate with regulatory bodies and other organizations to ensure they remain up to date with evolving AI regulations. By doing this, they’ll have the opportunity to actively participate in the conversation and play a role in the creation of regulations governing AI development.

Responsible AI Wins the Race

By taking the initiative on responsible AI development and adopting a flexible, transparent and user-centric approach, companies will be able to benefit from the incredible pace of innovation while staying on the right side of any regulatory requirements.

They’ll be able to minimize AI security risks, protect users and content creators, and encourage the development of responsible and trusted AI systems without sacrificing their ability to innovate.

Image: Greggory DiSalvo
