
Think Before You Chat: How AI Tools Could Be Leaking Your Company Data

[Image: AI tools being used on a smartphone]

AI tools are becoming a normal part of everyday work, whether that’s using ChatGPT to draft an email, asking Claude to summarise a document, or testing an AI assistant to help with admin.


They can be incredibly useful. But they can also be a hidden risk, especially if you or your team are using them without the right safeguards in place.


Recent research found that sensitive company data, including financial information, internal reports, and client details, is turning up in public search results, all because it’s been shared with AI tools that weren’t meant to handle that kind of information.


At the South West Cyber Resilience Centre, we want to help you understand what’s happening, and how to keep your business or charity’s data safe.


The Problem: AI Tools Remember Everything


When you share something with an AI tool, it doesn’t just disappear. Many AI platforms “learn” from what users type in, and that data can sometimes be stored or used to improve their systems.


Researchers recently showed that you could find private business documents online just by searching certain phrases. These included things like internal salary reports, legal filings, and confidential memos. All of this data was uploaded, often unintentionally, by people using AI chatbots for work.


The shocking part? Over 40% of people surveyed admitted to sharing sensitive information with AI tools, and most said their employer didn’t even know about it.


Why It’s Happening


A lot of people are turning to AI because it’s quick and easy. But that convenience can create a blind spot when it comes to privacy. Here’s what’s going wrong:


  • Unapproved tools: Employees use public AI sites to get work done faster, without checking if it’s allowed or safe.

  • No oversight: Organisations often haven’t set clear rules or policies about which tools can be used.

  • Hidden vulnerabilities: Many popular AI apps have already had security flaws that let hackers extract information.

  • Public sharing: Some AI tools publish “shared” chats to the web by default, meaning private work conversations could end up being searchable online.

Even though most people know AI tools carry risks, the temptation to use them can outweigh caution, especially when deadlines are tight.


The Risks for South West Organisations


For small and medium organisations (SMOs) across the South West, including charities and not-for-profits, these risks aren’t just theoretical.


If someone in your team uploads a report, client list, or internal plan to an AI chatbot, that information could:


  • End up publicly searchable online.

  • Be used to train future AI systems, meaning it may be retained indefinitely.

  • Cause legal or compliance issues if it includes personal or financial data.

  • Lead to data breaches that damage trust with clients, customers, or funders.

Beyond privacy concerns, AI tools can also make costly mistakes, creating fake data or “hallucinating” facts that could harm your reputation if you rely on them blindly.


How to Protect Your Organisation


Here are some simple, practical steps you can take to stay safe while still getting the benefits of AI:


1. Create an AI Policy

Decide which tools are approved and make sure everyone knows the rules. If a tool hasn’t been tested or approved, staff shouldn’t use it for work tasks.

2. Set Clear Boundaries

Never share sensitive or confidential information, such as client data, financial details, or anything marked “internal”, with AI systems connected to the public web.

3. Train Your Team

Make sure everyone understands the risks and knows what’s safe to share. Even basic awareness training can prevent accidental data leaks.

4. Choose Trusted Tools

If you want to use AI within your organisation, look for options that meet strong data protection standards, and consider using tools built into existing secure platforms, such as Microsoft Copilot in Microsoft 365.

5. Review Regularly

AI technology is evolving fast. Review your policies every few months to make sure they still make sense, and stay informed about new risks.


Final Thoughts


AI can be a brilliant way to save time and work smarter, but only when used safely.

Before you paste that next paragraph into a chatbot, ask yourself:


“Would I be comfortable if this ended up on Google?”

If the answer is no, keep it private.


The good news is that staying safe doesn’t have to be complicated. A few small changes, such as a policy, some training, and the right tools, can go a long way towards protecting your organisation’s data and reputation.


Need help getting started?


The South West Cyber Resilience Centre offers free guidance and affordable support for businesses and charities of all sizes.




 
 