
Securing Our Digital World with Responsible AI

The rise of Generative AI (GenAI) has brought forth a new era of innovation, offering businesses unprecedented opportunities to streamline operations, enhance customer experiences, and gain a competitive edge. However, this cutting-edge technology also presents cybersecurity risks that cannot be ignored: the same capabilities could be misused to create deceptive misinformation, impersonate individuals, and launch sophisticated social engineering attacks. Currently, 47% of organisations using AI lack specific cybersecurity practices to mitigate these risks. Yet there is an opportunity to responsibly harness GenAI's power to revolutionise cyber defences and threat monitoring. For business owners, it is crucial to understand both the benefits and threats of GenAI in order to navigate this landscape responsibly and securely. In this article, we'll delve into the cybersecurity risks of GenAI, explore ethical AI governance, and outline the business support available across Greater Manchester.

Generative AI’s Transformative Potential 
GenAI presents a wealth of opportunities for business. From automating content creation and marketing efforts to streamlining customer service and optimising supply chains, GenAI can drive efficiency, deliver cost savings, and free up valuable time and resources to focus on core business objectives. For example, conversational AI assistants (e.g., IBM Watson, Google's Dialogflow, Amazon Lex) can handle routine customer inquiries, provide personalised recommendations, and offer 24/7 support, improving customer satisfaction and reducing the workload on human support teams. GenAI-powered analytics tools (e.g., Tellius, Alteryx) can analyse vast amounts of data, including customer feedback, sales data, and market trends, to uncover valuable insights and inform data-driven decision-making. More recently, a remarkable achievement in the field is Luma AI's text-to-video platform Dream Machine, which pushes the boundaries of what is possible in GenAI video generation.

In the cybersecurity industry itself, AI's capabilities are significant. AI can automate and streamline tasks such as vulnerability scanning, patch management, and incident response. By offloading repetitive and time-consuming work to AI systems, human security analysts can focus on more complex and strategic challenges, optimising resource allocation and improving overall efficiency. AI-powered threat detection and monitoring can analyse vast amounts of data and identify complex patterns, helping businesses fortify their cyber resilience and stay ahead of evolving threats. This is crucial in today's digital landscape, where cyber-attacks can cripple operations and erode customer confidence.
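To make threat monitoring concrete, here is a minimal, illustrative sketch using the IsolationForest anomaly detector from scikit-learn. The feature names and figures are invented for the example; a real deployment would draw on far richer telemetry and route alerts into an analyst's workflow.

```python
# A minimal sketch of AI-assisted threat monitoring, assuming scikit-learn
# is available and that login events have already been reduced to simple
# numeric features (all feature names here are illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event:
# [hour_of_day, failed_attempts, bytes_transferred_mb]
normal_activity = np.array([
    [9, 0, 1.2], [10, 1, 0.8], [14, 0, 2.1], [11, 0, 1.5],
    [16, 1, 0.9], [13, 0, 1.1], [15, 0, 1.8], [10, 0, 1.3],
])

# Train on routine activity so the model learns what "normal" looks like
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_activity)

# A 3 a.m. login with many failed attempts and a large data transfer
suspicious = np.array([[3, 12, 250.0]])
if detector.predict(suspicious)[0] == -1:
    print("Anomaly flagged: escalate to a human analyst")
```

The value of this pattern is that the model learns normal behaviour from the business's own activity, so it can flag novel attacks that signature-based tools might miss.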

The Cybersecurity Risks of Generative AI 
While GenAI has revolutionised industries globally, it also introduces novel threats that malicious actors could exploit. The UK Government's report comprehensively maps both theoretical and real-world cyber-attacks across the AI lifecycle: vulnerabilities exist in the design phase (data poisoning), the development phase (model stealing), the deployment phase (adversarial examples), and the maintenance phase (system hijacking). Data poisoning and model extraction techniques could compromise the integrity of AI systems, while deepfakes powered by text-to-video platforms can impersonate individuals with alarming realism, enabling sophisticated social engineering attacks.
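Of these, data poisoning is perhaps the easiest to picture in code. The hedged sketch below, built on synthetic data with scikit-learn, shows how an attacker with access to a training pipeline can quietly degrade a model by relabelling part of one class; the data, flip rate, and accuracies are illustrative, not results from a real attack.

```python
# An illustrative sketch of training-data poisoning on synthetic data.
# It shows the mechanism only; no real system or attack is depicted.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker with write access to the training pipeline relabels 40% of
# one class, quietly biasing the model against it.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
class_one = np.flatnonzero(poisoned == 1)
flip = rng.choice(class_one, size=int(0.4 * len(class_one)), replace=False)
poisoned[flip] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print(f"clean model accuracy:    {clean_model.score(X_test, y_test):.2f}")
print(f"poisoned model accuracy: {poisoned_model.score(X_test, y_test):.2f}")
```

The unsettling part is that nothing in the deployed model reveals the tampering; only provenance checks on the training data would catch it.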

Many businesses are either unaware of their AI exposure or lack the expertise to assess and manage AI cybersecurity threats effectively. This gap leaves organisations vulnerable to attacks that could compromise data integrity, model performance, and overall system security. Addressing the knowledge deficit through education, guidance, and capacity building is crucial if businesses are to navigate the GenAI landscape safely. They must approach adoption with caution and conduct thorough due diligence to address the unique security challenges and ethical considerations associated with this technology.

Ethical AI Governance: A Principled Approach 
AI is identified in the UK's National Cyber Strategy as a technology vital to cyber power and is listed as a critical technology in the UK's Science and Technology Framework. To harness AI's full capabilities while safeguarding businesses, ethical AI governance is essential. The Alan Turing Institute's "AI Ethics and Governance in Practice" framework provides comprehensive guidance for integrating ethical considerations, such as fairness, accountability, and data stewardship, throughout the AI lifecycle. By embracing ethical AI principles and human-centric values, and by investing in upskilling and capacity building, organisations can leverage the incredible potential of GenAI while safeguarding against unintended consequences and upholding fundamental human rights and societal values. This proactive approach mitigates cybersecurity risks while building customer trust and brand reputation.

The UK government is taking steps to establish baseline AI security requirements and foster a secure-by-design approach from development to deployment. The UK's proposed AI Cyber Security Code of Practice further reinforces secure-by-design principles across the AI supply chain. The code provides guidance on implementing secure coding practices, robust testing methodologies, and security controls during the design and development phases of AI systems. Recommendations cover continuous monitoring of AI system behaviour, anomaly detection, and maintaining incident response plans to address potential security breaches or adversarial attacks.
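As a rough illustration of what "continuous monitoring of AI system behaviour" can look like in practice, the sketch below logs each prediction's confidence and raises an alert when the rolling average drifts from a baseline measured during testing. The class, thresholds, and incident-response hook are all hypothetical.

```python
# A minimal sketch of runtime model monitoring: record each prediction's
# confidence and flag drift from a pre-measured baseline. All names and
# thresholds here are illustrative assumptions, not a standard API.
from collections import deque
import statistics

class ModelMonitor:
    """Tracks recent prediction confidences and flags drift."""

    def __init__(self, baseline_mean: float, window: int = 100,
                 tolerance: float = 0.15):
        self.baseline_mean = baseline_mean   # measured during testing
        self.recent = deque(maxlen=window)   # rolling confidence window
        self.tolerance = tolerance           # allowed drift before alerting

    def record(self, confidence: float) -> bool:
        """Log one prediction; return True if an alert should fire."""
        self.recent.append(confidence)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        drift = abs(statistics.mean(self.recent) - self.baseline_mean)
        return drift > self.tolerance

monitor = ModelMonitor(baseline_mean=0.92)
# In production, feed each model output's confidence score:
# if monitor.record(confidence):
#     trigger_incident_response()  # hypothetical hook into your runbook
```

Sudden drift in a model's confidence profile can be an early sign of data poisoning, adversarial probing, or simply a changing environment, which is exactly why the code of practice pairs monitoring with incident response plans.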

However, this is a global challenge requiring international collaboration. The UK actively engages partners worldwide, through the Global Partnership on AI and the OECD, to align AI security standards with ethical principles and human rights. The goal is a shared global framework that promotes the responsible development and deployment of secure AI systems while protecting individual rights. This collaborative approach aims to ensure that as AI capabilities advance, robust security measures and ethical safeguards are in place to mitigate risks and foster trust in these transformative technologies.

Accessing Support and Resources 
The GMCA is on a mission to establish Greater Manchester as the UK and Europe's centre for cyber and digital ethics, trust, and security. As a result, businesses in the region have access to invaluable resources and support for navigating AI and cybersecurity complexities. GM Business Growth Hub, the Digital Innovation Security Hub (DiSH), HOST Cyber Innovation Lab, and the North West Cyber Resilience Centre (NWCRC) offer expert guidance, training, and practical tools to implement robust cybersecurity measures and ethical AI practices. Cyber Essentials training equips employees with practical knowledge and best practices for implementing essential security measures, such as secure configuration, access control, malware protection, and patch management, laying a solid foundation for safeguarding systems and data from cyber-attacks. By leveraging these resources and fostering awareness, organisations can embrace the transformative power of AI while ensuring their digital ecosystems remain secure, ethical, and aligned with human values.
  
While the cybersecurity risks of GenAI are significant, the transformative potential of this technology cannot be ignored. By fostering awareness, following ethical guidance, and drawing on expert support, businesses in Greater Manchester can leverage GenAI responsibly, fortify their cyber resilience, drive innovation, and gain a competitive edge in an increasingly digital landscape. In that way, we can create a future where cutting-edge AI technologies enhance digital capabilities without compromising fundamental values or security.


Dr Natasha Moorhouse, Digital Innovation Specialist 

Tasha specialises in supporting businesses across all areas of digital innovation, including artificial intelligence, cyber security, and marketing – helping them to transform the ways in which they work.

It’s fair to say, Tasha has innovation running through her veins. As well as being Research Associate on the Greater Manchester AI Foundry, she has also held roles as XR Researcher at the Creative AR/VR Hub and Registration Chair of the International XR Conference.

She also holds a PhD in Immersive Technologies (AR and VR) from Manchester Metropolitan University, and has published research on the application of XR technologies in health and medical care and the creative industries.
