Combatting the risks associated with AI to help more organisations take advantage of it
Chief Information Security Officer
Kyocera Document Solutions UK
The Gen AI bubble might be growing more slowly than it was in 2023, but as adoption continues apace, organisations across the globe are still being caught out by outdated security protocols.
How and where data is used by generative AI models is not common knowledge. End users are often unaware of the sensitivity of the data they upload, focusing instead on the outcomes the technology can generate. Business leaders should not restrict AI use outright, which only drives shadow use; instead, they should educate users on how to use AI safely and provide AI models that are approved for use in the business domain.
In my experience, the challenge colleagues face here is a lack of reference material and best practice to build from. Instead, the best guide is established best practice in data use, safety, and privacy, applied to AI. That way, the core question of how data is used and generated is protected by, and considered within, the foundation of well-established data and privacy policies.
Data privacy settings are challenging in this space, with new web-based AI toolsets launching daily. Our approach is to apply broader data privacy controls, and to define data boundaries and sources, so that data extraction is understood and controlled before anything is uploaded to an insecure destination. As more private AI tools and models are released, IT can control the toolsets' use cases and capabilities and expand the technology's outcomes and outputs. This is where we believe mainstream adoption may be achieved.
Companies must have strong IT policies that guide and control how users interact with systems, particularly the rules they must comply with. Modern IT platforms and data loss prevention (DLP) policies and controls give IT greater influence over user behaviour, but end-user education remains essential to ensure the best possible protection for corporate IT systems.
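As a rough illustration of the kind of DLP control described above, the sketch below screens outbound prompt text for sensitive-looking data before it would reach an external generative AI tool. This is a minimal, hypothetical example: the pattern names, the patterns themselves, and the blocking policy are illustrative assumptions, not a description of any specific product.

```python
import re

# Illustrative patterns only -- a real DLP policy would use a vendor's
# classifier library, not three hand-written regexes.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt a user might paste into a web-based AI tool.
prompt = ("Summarise this complaint from jane.doe@example.com "
          "about card 4111 1111 1111 1111")
violations = screen_prompt(prompt)
if violations:
    print("Blocked: prompt contains", ", ".join(violations))
```

In practice a check like this would sit in a proxy or browser extension in front of the AI service, so the block (or a warning prompting the user to redact) happens before the data leaves the corporate boundary, reinforcing the education-first approach rather than replacing it.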
The critical element in auditing AI use, and any subsequent data breaches, is strong guidance around permitted use cases, supported by working groups that understand how users want to develop business operations with AI. Depending on the use case, and particularly with new private AI models, IT can have much greater control and insight. It is essential to pair IT controls with industry-leading cyber toolsets that monitor for and spot potential data leaks or breaches.