AI & privacy: How to use AI safely and smartly


AI is hot, but when it comes to privacy, half of the business world gets stressed out. “What happens to my data when I enter something in ChatGPT? Is my entire strategy going to end up in a training set?”

Relax. Yes, there are risks. But no, it’s not a data breach with every prompt. In this blog, we lay out the facts, the myths, and especially practical solutions for you. So you can use AI smartly, safely, and with confidence. With control over your data, not with your head in the sand.

What really happens to your data?

The biggest concern with using tools like ChatGPT, Gemini, or Claude is that everything you input goes into the big AI archive. And yes, with free or Plus accounts from ChatGPT, your input is used by default to make the model smarter. But that doesn’t mean OpenAI is sifting through your entire CRM system. Your input is processed in chunks, anonymized, and only included in the training set if there’s enough volume.


Using a business account like ChatGPT Team or Enterprise, or working through the API? Then your data remains yours. This has become the norm: tools like Google Gemini and Anthropic’s Claude offer business versions with clear privacy terms as well.

Want to be sure? Always check the settings. Or choose AI solutions that you can host completely yourself.

Tip!
Even within the free and Plus subscriptions of ChatGPT, you can choose whether your input is used for training purposes. Go to Settings > Data Controls > Improve the model for everyone > turn it off.

Working with AI without personal data? It’s possible!

Want to use AI within your organization? It starts with thinking smartly about which data you really need. Often, companies think that sensitive information is indispensable, but for most applications, you can work just fine with generic or anonymized data.

Think about categorizing support questions, or an internal chatbot that searches your knowledge base. No personal details, no email addresses, no hassle.

The less data, the less risk. By choosing anonymous or summarized information from the start, you keep control over your process and your project remains agile.
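To make this concrete: a minimal sketch of what "anonymize first" can look like in practice. The helper below strips obvious personal data from a prompt before it leaves your organization. The regex patterns are illustrative examples, not an exhaustive PII filter.

```python
import re

# Illustrative patterns only -- a real setup would cover more cases
# (names, addresses, customer IDs) or use a dedicated PII library.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s\-]{7,}\d")

def anonymize(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Customer jan@example.com (06-12345678) asks about invoice #1042."
print(anonymize(prompt))
# → Customer [EMAIL] ([PHONE]) asks about invoice #1042.
```

The AI tool still gets everything it needs to categorize or answer the question; the personal details never leave your systems.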

What data are you actually using?

Everything starts with the question: what do you want to achieve, and what data is really necessary for that?

Many companies reflexively collect more data than needed, while you can often work excellently with anonymous or aggregated data. Do you still need to process personal data? Then you can usually anonymize it without compromising functionality.

Smart choices at the front end will save you a mountain of work and worries later on.

The role of legislation: GDPR and the AI Act

AI works with data. And once you are working with data, rules apply. These rules aren’t new: the GDPR has determined for years that you can only collect and process personal data if there’s a good reason for it. The AI Act builds on this and adds extra rules for how AI systems handle data and make decisions.

Good to know: not all data falls under the GDPR. Only information that can be linked, directly or indirectly, to a person, such as an email address or phone number, falls under it. Are you working with business information or properly anonymized data? Then you’re usually in good shape.

Do you actually need personal data, for example for personalized communication? Then you need to ensure:

  • Data minimization

  • Transparency towards users

  • Secure storage and processing

  • Human oversight in important AI decisions

The AI Act categorizes AI systems into risk categories. Generative AI like ChatGPT falls under limited risk. This means you must be transparent: users need to know they are talking to AI. Are you building AI that selects applicants or makes medical diagnoses? Then stricter requirements apply, such as risk analyses, documentation, and mandatory human oversight.

Risk | Example | Action
Low risk | Spam filters, search algorithms | No additional rules
Limited risk | Chatbots and generative AI like ChatGPT | Transparency required (users must know they are talking to AI)
High risk | AI for medical diagnoses or selecting applicants | Strict documentation and control requirements
Prohibited applications | Social scoring (as done in China) | Completely prohibited

In short: if you work smart and consciously, most business AI applications will simply fall under the "limited risk" category.

ChatGPT and other tools: what's the deal with your data?

We’ve touched on it briefly, but let’s recap clearly and concretely:

  • With free and Plus accounts of ChatGPT, your input is used by default to improve the model, unless you manually turn this off.

  • With business subscriptions like Team and Enterprise, your data remains private and is not used for training.

  • Gemini, Claude, and other major players also offer business packages with strict privacy terms.

Unsure? Always check the settings and the fine print before entering sensitive information.

Maintain control: host your own data or models

Want maximum control over your data and AI? Then you can choose to host your own vector database, where you only store the data that your AI really needs. We do this in our Sterc AI applications as well.

You can even run your own large language model (LLM), separate from public AI tools. It doesn’t even need to be in the cloud: it can run on your own server or within your company network. This applies not only to text models but also to speech applications like Whisper, or to building AI flows with multiple assistants. Everything managed in-house, without your data leaving your organization.
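As a sketch of what talking to a self-hosted model looks like: the snippet below builds a request to a local endpoint. The URL and model name are assumptions (an Ollama-style server on localhost); adjust them to whatever you actually run.

```python
import json
import urllib.request

# Assumed local endpoint (Ollama-style); nothing here leaves your network.
LOCAL_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build an HTTP request for a locally hosted language model."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        LOCAL_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Summarize our return policy in two sentences.")
    # urllib.request.urlopen(req)  # only works with the local server running
    print(req.full_url)
```

The point is the address: the prompt goes to your own server, not to a public AI service.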

Are you working with sensitive business information or want to fully customize your AI? Then this is not only safer but also a smart and future-proof choice.

Tip! Have you built your own AI database? Then link it via an API to your custom GPTs. This way, you don’t have to rebuild your knowledge base each time and can use the same foundational data across multiple applications.
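The retrieval step behind such an AI database can be sketched in a few lines: store embeddings alongside their source text, find the closest match for a query embedding, and hand only that snippet to the model. The toy 3-dimensional vectors below stand in for real embeddings.

```python
import math

# Toy vector store: (embedding, source text). Real embeddings would have
# hundreds of dimensions and come from an embedding model.
store = [
    ([0.9, 0.1, 0.0], "Opening hours: Mon-Fri 9:00-17:00."),
    ([0.0, 0.8, 0.2], "Returns are accepted within 30 days."),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec):
    """Return the stored text whose embedding is closest to the query."""
    return max(store, key=lambda item: cosine(item[0], query_vec))[1]

print(retrieve([0.1, 0.9, 0.1]))
# → Returns are accepted within 30 days.
```

Because the store is yours, you decide exactly which (anonymized) snippets the model ever sees, and multiple applications can query the same foundation.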

Here's how to use AI safely

It comes down to conscious choices. You don’t have to stop using AI, but you do need to know what you’re doing. Are you not using personal data? Then you don’t need to worry as much. Are you working with sensitive information? Then it’s time to be sharp on tool selection, settings, and hosting options.

By making smart choices, you can deploy AI safely and effectively within your organization. We can also help you implement savvy AI solutions, fully aligned with the law and your strategy. Want to know more? Get in touch with Bas!

Using AI safely?
Let's have a coffee!