AI Governance – Secure the Future by Embracing Responsible AI Practices

AI in Simple Terms 

At its core, AI is simply software that can ‘think’, ‘learn’ and ‘make decisions’ – somewhat like we humans do. AI systems aren’t programmed in the traditional way – instead, and to an extent, they program themselves.

Generative AI is a specific type of AI that can generate content that didn’t exist before – in the same way that a human can come up with a melody that has never been heard. Generative AI can pull something out of nothing, just like us.

Large Language Models (LLMs), meanwhile, are a specific type of generative AI focused on understanding and generating text. Interacting with one is a bit like having a chat with the world’s most extensive library.

Both types of AI are increasingly deployed today, across all sectors. Unfortunately, this impressive technology is not without risk.

The Risks Associated with AI 

Without stringent regulatory frameworks, these powerful technologies can be exploited for nefarious purposes – ranging from the perpetration of sophisticated cyber-attacks to the manipulation of public opinion and behaviour. The potential for AI and LLMs to autonomously generate disinformation at scale presents a particularly insidious threat. The worst-case scenario is … not good: the foundations of democratic discourse undermined, and societal divisions deepened and exploited. Moreover, without ethical guidelines, these technologies may perpetuate and amplify biases present in their training data, leading to discriminatory outcomes that entrench existing social inequalities.

Furthermore, the uncontrolled application of AI and LLMs poses significant privacy and security risks. The capability of AI to analyse and predict human behaviour with increasing accuracy raises the spectre of a surveillance state in which individual freedoms are profoundly compromised. Additionally, the lack of ethical constraints could lead to the development of autonomous weapons systems, escalating the risk of unaccountable and potentially catastrophic military engagements.

How to Use AI Securely 

Deploying an AI management system, such as one based on ISO 42001, is an effective way to manage the risks attendant on AI systems. While it’s impossible to remove all risk, it is a way of knowing what issues you may face and staying in control of a technology that is, by its very nature, amorphous.

Understanding AI Governance

The Landscape of AI Governance 

Artificial Intelligence (AI) is increasingly ubiquitous across all sectors. Despite the soaring use of this technology, there are very few guidelines on how to use it securely.  

AI can bring many benefits to your organisation’s productivity. However, deploying AI raises specific considerations, namely:

  • The use of AI for automatic decision-making, sometimes in a non-transparent, non-explainable way, might require specific management BEYOND the management of classical IT systems. In other words: “This isn’t Kansas anymore.” 
  • The use of data analysis, insight and machine learning (rather than good ol’ fashioned human-coded logic) to design systems expands the range of interesting applications for AI – but importantly, it also changes the way that such systems are developed, justified and deployed.
  • AI systems that perform continuous learning change their behaviour during use. Imagine an axe. Over time, you replace the handle. Then the blade. …Is it still the same axe if none of the original form remains? AI systems can be like that, and so require special consideration to ensure their use remains responsible as their behaviour changes (see the monitoring sketch below).
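
To make that last point concrete, here is a minimal Python sketch of one way to watch for behaviour drift in a deployed model: compare a frozen baseline window of its outputs against a recent window. The scores, names and threshold below are illustrative assumptions, not part of any standard.

from statistics import mean, pstdev

def drift_score(baseline: list[float], recent: list[float]) -> float:
    """Shift in mean output, scaled by the baseline's spread."""
    spread = pstdev(baseline) or 1.0  # guard against zero spread
    return abs(mean(recent) - mean(baseline)) / spread

def drift_detected(baseline: list[float], recent: list[float],
                   threshold: float = 0.5) -> bool:
    """True if behaviour has shifted enough to warrant a review."""
    return drift_score(baseline, recent) >= threshold

# Hypothetical confidence scores: at deployment vs. this week.
baseline_scores = [0.82, 0.79, 0.85, 0.80, 0.81, 0.84]
recent_scores = [0.61, 0.58, 0.65, 0.60, 0.63, 0.59]

if drift_detected(baseline_scores, recent_scores):
    print("Behaviour drift detected - trigger a governance review.")

In practice you would feed this from production logs on a schedule; the point is simply that a continuously learning system needs continuous measurement.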

Key Principles of AI Governance

  1. Accountability  
  2. AI Expertise 
  3. Availability and Quality of Training and Test Data
  4. Environmental Impact 
  5. Fairness 
  6. Maintainability  
  7. Privacy 
  8. Robustness 
  9. Safety 
  10. Security 
  11. Transparency and Explainability 

Ethical AI Challenges and Solutions  

Ethical Considerations

  • Bias and Fairness 

One sad fact about AI systems is that, just like much of the human data they are fuelled by, they can carry unfortunate biases. We must work hard to ensure that AI systems do not emphasise, amplify or deepen existing social fissures.
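
As a concrete illustration, here is a minimal Python sketch of one common fairness check – the demographic parity gap, i.e. the difference in favourable-outcome rates between two groups. The data and the 10% tolerance are illustrative assumptions; real fairness assessment involves far more than a single metric.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of favourable decisions (1 = approved)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in favourable-outcome rates between groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 37.5% approved

gap = demographic_parity_gap(group_a, group_b)
if gap > 0.10:  # the tolerance is a policy choice, not a universal constant
    print(f"Parity gap of {gap:.0%} - investigate for bias before deployment.")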

  • Privacy Concerns 

Privacy is sacrosanct, and we must ensure it is protected when using AI systems. The new ISO 42001 standard puts the protection of data centre stage – and for good reason. The misuse or disclosure of personal and sensitive data (e.g. health records) can have a harmful effect on data subjects, not to mention the legal and reputational effects on any organisation caught mishandling such information.
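
One practical control is to redact obvious personal identifiers before text ever reaches an AI system. The Python sketch below is a minimal illustration only – the two patterns are assumptions and nowhere near exhaustive, so real deployments need dedicated tooling and human review.

import re

# Illustrative patterns only: real personal data takes many more forms.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jo at jo.bloggs@example.com or 07700 900123."))
# -> Contact Jo at [EMAIL] or [UK_PHONE].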

Artificial Intelligence Regulations  

  • Regulatory Landscape – Upcoming Regulation – EU AI Act 

The EU AI Act is not yet law, but it is coming soon and takes a fairly hands-off approach. Expect it to be joined by growing international collaboration and by national regulations in other jurisdictions.

Responsible AI Practices 

Let’s finish up with eleven objectives your organisation should consider as you adopt AI with security, safety and impacts in mind.

  1. Accountability

The use of AI can change existing accountability frameworks. Where previously a person would be held accountable for their actions, those actions may now be supported by – or based on – the use of an AI system.

  2. AI Expertise

A selection of dedicated specialists with interdisciplinary skill sets and expertise in assessing, developing and deploying AI systems is needed.  

  3. Availability and Quality of Training and Test Data

AI systems based on ML need training, validation and test data to train the system and to verify that it behaves as intended.

  4. Environmental Impact

The use of AI can have both positive and negative impacts on the environment.

  5. Fairness

The inappropriate application of AI systems for automated decision-making can be unfair to specific persons or groups of people.  

  6. Maintainability

Maintainability is related to the ability of the organisation to handle modifications of the AI system to correct defects or adjust to new requirements.  

  7. Privacy

The misuse or disclosure of personal and sensitive data (e.g. health records) can have a harmful effect on data subjects.  

  8. Robustness

In AI, robustness means the ability of the system to maintain comparable performance on new data to that achieved on the data on which it was trained (or the data of typical operations).
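
A crude but useful way to quantify this is the generalisation gap: how far performance falls when the system leaves its training distribution. Here is a minimal Python sketch; the predictions and labels are illustrative assumptions.

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Hypothetical results on training-like data...
train_preds, train_labels = [1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1]
# ...versus genuinely new data from live operations.
new_preds, new_labels = [1, 0, 0, 1, 1, 1], [1, 0, 1, 1, 0, 0]

gap = accuracy(train_preds, train_labels) - accuracy(new_preds, new_labels)
print(f"Generalisation gap: {gap:.0%}")  # a large gap signals poor robustness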

  9. Safety

Safety relates to the expectation that a system does not, under defined conditions, lead to a state in which human life, health, property or the environment is endangered.  

  10. Security

In the context of AI, and particularly for AI systems based on ML approaches, new security issues should be considered beyond classical information and system security concerns.

  11. Transparency and Explainability

Transparency relates both to the characteristics of an organisation operating AI systems and to those systems themselves. Explainability relates to explaining, in a way understandable to humans, the important factors influencing the AI system’s results to interested parties.
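
One widely used explainability technique is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Below is a minimal from-scratch Python sketch; the toy model and data are illustrative assumptions, not a recommended production method.

import random

def toy_model(row: list[float]) -> int:
    """Stand-in model: predicts 1 when the first feature dominates."""
    return 1 if row[0] > row[1] else 0

def accuracy(rows: list[list[float]], labels: list[int]) -> float:
    return sum(toy_model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature: int, seed: int = 0) -> float:
    """Accuracy lost when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    permuted_rows = [r[:feature] + [v] + r[feature + 1:]
                     for r, v in zip(rows, shuffled)]
    return accuracy(rows, labels) - accuracy(permuted_rows, labels)

rows = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
labels = [1, 0, 1, 0]
for f in range(2):
    print(f"Feature {f}: importance {permutation_importance(rows, labels, f):+.2f}")

The larger the accuracy drop, the more the model relies on that feature – a human-readable clue to what drives its decisions.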

The Future of AI Governance

AI Audits

At planned intervals, organisations should conduct internal audits. These will give you invaluable information on whether your AI Management System is functioning effectively and being properly maintained.

Next, plan, establish, implement and maintain an audit programme – including the frequency, methods, responsibilities, planning requirements and reporting (a minimal sketch of recording such a programme follows the checklist below).

When you create your internal audit programme, consider the importance of the processes concerned and the results of previous audits.

  • Define your audit objectives, criteria and scope.  
  • When selecting auditors, ensure they will be objective and impartial.  
  • Ensure the audit results are reported to relevant managers. 
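
As promised, here is a minimal Python sketch of one way to record such a programme as plain data, so that objectives, criteria, scope, auditors and reporting lines are captured consistently. The field names and example values are illustrative assumptions, not ISO 42001 terminology.

from dataclasses import dataclass, field

@dataclass
class Audit:
    objective: str
    criteria: str            # e.g. an internal AI policy or standard clause
    scope: str
    auditor: str             # must be objective and impartial
    frequency_months: int
    reported_to: str         # the relevant manager for the results
    findings: list[str] = field(default_factory=list)

programme = [
    Audit(
        objective="Verify AI risk assessments are current",
        criteria="Internal AI policy v2",
        scope="All production LLM integrations",
        auditor="Independent internal auditor",
        frequency_months=6,
        reported_to="Head of Risk",
    ),
]

for audit in programme:
    print(f"{audit.objective} - every {audit.frequency_months} months, "
          f"reported to {audit.reported_to}")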

Conclusion 

The essence of an AI Management System is continual improvement. It’s key that your organisation continually improves the suitability, adequacy and effectiveness of its AI Management System.

Nonconformity & Corrective Action: When a nonconformity occurs, your organisation should:

  • React to the nonconformity and, as applicable:
      • Take action to control and correct it.
      • Deal with the consequences.
  • Evaluate the need for action to eliminate the cause(s) of the nonconformity, so that it does not recur (or occur elsewhere), by:
      • Reviewing the nonconformity.
      • Determining the causes of the nonconformity.
      • Determining if similar nonconformities exist or could potentially occur.
  • Implement any action needed.
  • Review the effectiveness of any corrective action taken.
  • Make changes to the AI Management System, if necessary.

Ultimately, AI should be managed like everything else in your organisation. We are not suggesting you avoid AI – quite the opposite. Rather, you should approach it from a risk-based perspective, and with a management system around it. 

Speak to one of our experts to better understand the governance around AI and the process of ethically adopting AI.  

Risk Crew