Don't Leave AI Governance To The Machines

Many companies are entrusting their top business-critical operations and decisions to artificial intelligence.

Rather than relying on traditional, rule-based programming, users can now feed a machine data, define the desired outcomes, and let it build its own algorithms and provide recommendations to the business. For instance, an auto insurance company can feed a machine a library of photos of previously totaled cars, along with data on their make, model and payout.

The system can then be “trained” to review future incidents, determine whether a car is totaled, and recommend a payout amount. This streamlines the review process, which benefits both the company and the customer.
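The learn-from-past-claims idea can be sketched with a toy nearest-neighbour model; the feature vectors, past claims, and payout figures below are all invented for illustration, not a real insurer's data or model:

```python
# Hypothetical sketch: instead of hand-written rules, the system copies the
# outcome of the most similar past claim. All numbers are invented.

def distance(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# (damage_score 0-1, repair_estimate / vehicle_value) -> (totaled?, payout)
past_claims = [
    ((0.9, 1.2), (True, 8000)),
    ((0.8, 0.95), (True, 6500)),
    ((0.2, 0.3), (False, 1200)),
    ((0.1, 0.15), (False, 400)),
]

def recommend(features):
    # Find the most similar past claim and copy its outcome.
    _, (totaled, payout) = min(past_claims, key=lambda c: distance(c[0], features))
    return totaled, payout

print(recommend((0.85, 1.1)))  # -> (True, 8000): resembles the totaled examples
```

The point is that no one wrote a "totaled" rule; the recommendation emerges from the labeled examples, which is exactly why the governance questions below matter.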

With the ability for AI to arrive at its own conclusions, governance over the machines is critical for the sake of business executives and customers alike. 

Was the machine accurate in its review of the accident photos? Was the customer paid the right amount? 
By taking the proper measures, organisations can gain clarity and ensure they are using these tools responsibly and to everyone’s benefit. Here are three areas to keep in mind.

Traceability sheds light on machine reasoning and logic 
In a recent Genpact study of C-suite and other senior executives, 63 percent of respondents said that they find it important to be able to trace an AI-enabled machine’s reasoning path. After all, traceability helps with articulating decisions to customers, such as in a loan approval.

Traceability is also critical for compliance and meeting regulatory requirements, especially with the implementation of the General Data Protection Regulation (GDPR) in Europe, which has affected practically every global company today. 
One critical GDPR requirement is that any organisation using automation in decision-making must disclose the logic involved in the processing to the data subject. Without traceability, companies can struggle to communicate the machine’s logic and face penalties from regulatory bodies.
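One way to satisfy such a disclosure requirement is to record the reasoning path alongside each decision. A minimal sketch, with invented thresholds and a hypothetical `decide_loan` function standing in for a real underwriting model:

```python
# Hypothetical sketch: a loan decision that records its reasoning path,
# so the logic can be disclosed to the data subject on request.
# Thresholds (0.40 ratio, 5x income) are invented for illustration.

def decide_loan(income, debt, amount):
    trace = []
    ratio = debt / income
    trace.append(f"debt-to-income ratio = {ratio:.2f}")
    if ratio > 0.4:
        trace.append("ratio above 0.40 threshold: declined")
        return False, trace
    trace.append("ratio within threshold")
    if amount > income * 5:
        trace.append("amount exceeds 5x income: declined")
        return False, trace
    trace.append("all checks passed: approved")
    return True, trace

approved, trace = decide_loan(income=50000, debt=10000, amount=200000)
# the stored trace is what a company could disclose under GDPR-style rules
```

A learned model's reasoning path is far harder to extract than this rule-based trace, which is precisely why traceability has to be designed in rather than bolted on.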

The right controls and human intervention remain paramount 
By design, AI enables enterprises to review large datasets and deliver intelligence that facilitates decisions at far greater scale and speed than humanly possible. However, organisations cannot leave these systems to run on autopilot. There needs to be command and control by humans.

For example, a social media platform can use natural language processing to review users’ posts for warning signs of gun violence or suicidal thoughts. The system can comb through billions of posts and connect the dots, which would be impossible for even the largest team of staff, and alert customer agents. Not every flagged post will be a legitimate concern, so it is up to humans to verify what the machine picked up.
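The flag-then-verify pattern can be sketched as follows; the crude keyword match stands in for a real natural language processing model, and the warning terms are invented:

```python
# Hypothetical sketch of machine flagging plus human review: the machine
# narrows a huge stream of posts to a short list, and humans make the
# final call on each flagged item rather than the system acting directly.

WARNING_TERMS = {"hurt", "gun", "end it"}

def machine_flag(post):
    # Stand-in for a real NLP model: flag posts containing warning terms.
    text = post.lower()
    return any(term in text for term in WARNING_TERMS)

def triage(posts):
    # Machine-side filtering produces a human review queue.
    return [p for p in posts if machine_flag(p)]

posts = ["great game last night", "I want to end it all", "new gun range opened"]
review_queue = triage(posts)
# humans now verify each item in review_queue before anyone is contacted
```

Note that the second flagged post is a false positive, which is exactly why a human stays in the loop.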

This case highlights why people remain critical in an AI-driven future: only humans possess the domain knowledge and the business, industry, and customer intelligence, acquired through experience, needed to validate the machine’s reasoning.

Command and control is also necessary to ensure algorithms are not being fooled or malfunctioning. For example, machines trained to identify certain types of images, such as determining whether a car is totaled for insurance purposes, can be fooled by completely different images that share the same underlying pixel patterns. Why? Because the machine analyzes the photos based on patterns, rather than viewing them in the context that human beings do.
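A toy illustration of the pixel-pattern point, with invented features and thresholds: a classifier keying on crude pixel statistics labels a meaningless stripe pattern the same way as a crash photo, because the two share the statistics it relies on:

```python
# Hypothetical sketch: a classifier that keys on crude pixel statistics
# can be fooled by an unrelated image engineered to share those statistics.

def features(pixels):
    mean = sum(pixels) / len(pixels)
    # "roughness": average jump between adjacent pixel values
    rough = sum(abs(a - b) for a, b in zip(pixels, pixels[1:])) / (len(pixels) - 1)
    return mean, rough

def looks_totaled(pixels):
    # Invented rule: heavy damage shows up as high pixel-to-pixel variation.
    _, rough = features(pixels)
    return rough > 50

totaled_car = [10, 200, 30, 180, 20, 190, 15, 210]   # stand-in for a crash photo
noise_image = [0, 255, 0, 255, 0, 255, 0, 255]       # meaningless stripe pattern

print(looks_totaled(totaled_car))   # True
print(looks_totaled(noise_image))   # True: same statistics, no car at all
```

A human glancing at the second "image" would never call it a car, let alone a totaled one, which is the gap command and control exists to catch.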

Beware of unintentional human biases within data 
Since AI-enabled machines constantly absorb data and information, biases or unwanted outcomes can easily emerge, such as a chatbot that picks up inappropriate or violent language from interactions over time. Put simply, if there is bias in the data going in, there will be bias in what the system puts out.

Before deployment, users with domain knowledge have to review the data that goes into these machines to prevent possible biases, and then maintain governance to make sure none emerge over time.
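One such review can be sketched as a simple audit of outcome rates across a sensitive attribute before training; the groups and records here are hypothetical:

```python
# Hypothetical sketch: audit historical outcome rates per group before
# training, so skew in the data is caught rather than learned by the model.

def approval_rates(records):
    # records: (group, approved) pairs; compute the approval rate per group
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

historical = [("A", True), ("A", True), ("A", False),
              ("B", False), ("B", False), ("B", True)]

rates = approval_rates(historical)
print(rates)  # a large gap between groups is a warning sign before training
```

A model trained on this data would likely reproduce the two-to-one gap between groups A and B, so the audit belongs before training, not after complaints arrive.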

With more visibility, understanding of their data and governance over AI, companies can proactively assess the machine’s business rules or acquired patterns before they are adopted and rolled out across the enterprise and to customers. At its root, responsible use of AI is all about trust. Companies, customers, and regulatory agencies want to trust that these intelligent systems are processing information and feeding back recommendations in the right fashion. They want to be clear that the business outcomes created by these machines are in everyone’s best interest. 

By applying the techniques discussed above, organisations can strengthen this trust: a better understanding of the AI’s reasoning path, clearer communication of decisions to customers, regulatory compliance, and the command and control needed to ensure clarity and consistently sound decisions.

Information Week
