How to keep your data access secure with generative AI

ChatGPT and other public generative AI tools are being widely adopted within organisations, but are your teams aware of the security issues that can arise when creating documents with the technology?

The hidden security dangers of using AI when sharing files

Say, for example, a team member asks ChatGPT to help with a report for the sales team. To create the perfect prompt, they copy and paste information from various sensitive internal documents and sources, hoping the tool will accelerate their work.

But ChatGPT is a publicly available service, and by default it saves each user's prompt history. The real danger comes if cybercriminals compromise that user's ChatGPT account: there they will find a full prompt history and gain access to all of the sensitive data used to create the document.

How Torsion helps teams stay secure 

Torsion can apply specific security rules to the sharing of data within an organisation. Once a document is classified, Torsion will flag any sharing of sensitive documents or data with the wrong person or team. So even if a document is created with the aid of AI, Torsion automatically ensures it is only shared with the people who should have access to that data.
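
For illustration only, here is a minimal sketch of how a classification-based sharing check of this kind might work in principle. The classification labels, team names, and rules below are hypothetical assumptions, not Torsion's actual API or implementation:

```python
# Illustrative sketch only -- not Torsion's actual API or implementation.
# It models the general idea: documents carry a classification label, and
# a sharing attempt is flagged when the recipient isn't authorised for it.

from dataclasses import dataclass

# Hypothetical access map: which teams may receive each classification.
ALLOWED_TEAMS = {
    "public": {"sales", "marketing", "finance", "engineering"},
    "internal": {"sales", "finance", "engineering"},
    "sensitive": {"finance"},
}

@dataclass
class Document:
    name: str
    classification: str  # e.g. "public", "internal", "sensitive"

def check_share(doc: Document, recipient_team: str) -> bool:
    """Return True if sharing is allowed; flag the attempt otherwise."""
    if recipient_team in ALLOWED_TEAMS.get(doc.classification, set()):
        return True
    print(f"FLAGGED: '{doc.name}' ({doc.classification}) "
          f"should not be shared with the {recipient_team} team.")
    return False

# Example: an AI-assisted report built from sensitive data is still
# caught before it reaches the wrong audience.
report = Document("Q3 sales report (AI-assisted)", "sensitive")
check_share(report, "marketing")   # flagged
check_share(report, "finance")     # allowed
```

The point of the sketch is simply that the check happens at the moment of sharing, regardless of how the document was created, which is why AI-assisted documents are covered automatically.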

And because Torsion sits within existing Microsoft 365 tenants as an additional tab called 'Sharing & Security', we are able to place clear warnings and reminders about the dangers of sharing data created using AI. Data classified as 'sensitive' will also trigger warnings reminding users not to use or post it on public generative AI platforms.

Watch our short on-demand demo here.