Disclaimer: This article does not constitute legal advice. For your specific situation, we recommend consulting a lawyer specializing in data protection law.
ChatGPT has become mainstream in many organizations. Writing texts, summarizing content, developing ideas, structuring meeting notes – the use cases are countless. And that is precisely the problem: the deeper AI tools like ChatGPT become embedded in work processes, the more personal data flows into them. Often without anyone stopping to ask: where does my data go, what is it used for, and is this GDPR-compliant?
The short answer: it depends. On the version, the configuration, the intended use – and on exactly what you or your employees are entering into the tool. The long answer is what this article is for.
tl;dr: The Most Important Points at a Glance
- ChatGPT in the free version is generally not GDPR-compliant for business use involving personal data.
- ChatGPT Enterprise and the API offer better data protection guarantees – but are not a free pass.
- OpenAI is a US company and is subject to the CLOUD Act – with all the consequences that entails for European organizations.
- No DPA, no use: Without a Data Processing Agreement, business use involving personal data is simply not permissible.
- Meeting content requires particular caution: Anyone entering transcripts or conversation content into ChatGPT is often processing highly sensitive data – without an adequate legal basis.
Why GDPR Compliance Matters So Much
ChatGPT has one characteristic that makes it particularly risky from a data protection perspective: it is so easy to use that deployment often begins without any review at all. An employee pastes in meeting minutes and asks ChatGPT to summarize them. Another copies a customer inquiry to help draft a response. A third asks it to structure notes from an HR meeting.
In all three cases, personal data is being transmitted to a US-based provider – often without the consent of the individuals concerned, without a Data Processing Agreement, and without any assessment of whether a legal basis exists.
This is not a theoretical risk. The Italian data protection authority, the Garante, temporarily banned ChatGPT as early as 2023. The French CNIL launched investigations. And the Hamburg Commissioner for Data Protection publicly addressed the issue. The topic is firmly on the radar of supervisory authorities across Europe.
What ChatGPT Actually Is – and Which Versions Exist
To answer the GDPR question meaningfully, you need to distinguish between different modes of use:
ChatGPT Free and Plus
The free and Plus versions of ChatGPT are consumer products. OpenAI uses conversation data here by default to improve its models – meaning for training. For business use involving personal data, these versions are generally not suitable: there is no DPA, no GDPR-compliant data processing, and no guaranteed data separation.
ChatGPT Enterprise
The Enterprise version is aimed at businesses and offers, among other things: no use of customer data for model training, a Data Processing Agreement (DPA), SSO integration, and enhanced admin controls. This is significantly better – but not without limitations, which we will come to shortly.
OpenAI API
Using the OpenAI API also provides better data protection guarantees: no training on API data by default, a DPA available, and greater control over data processing. For businesses integrating ChatGPT into their own applications, the API is the more robust choice from a data protection perspective.
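If you integrate via the API, data minimization is one obligation you can partially enforce in code. The sketch below is our own illustration, assuming a Python integration: obvious identifiers (email addresses, phone numbers) are replaced with placeholders before a prompt ever leaves your infrastructure. The pattern set and the `redact` helper are not part of any OpenAI SDK, and regexes alone are not sufficient for real pseudonymization – names, addresses, and internal IDs typically require NER-based tooling on top.

```python
import re

# Illustrative patterns only – a real pseudonymization layer needs more
# than regexes (e.g. named-entity recognition for names and addresses).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d /()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders
    before the text is sent to an external API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize: Anna Meier (anna.meier@example.com, +49 171 2345678) requested a quote."
print(redact(prompt))
# → Summarize: Anna Meier ([EMAIL], [PHONE]) requested a quote.
```

In a real integration you would run every user-supplied string through such a step before passing it to the chat completion endpoint; the redaction layer can then be documented as one of the technical and organizational measures in your DPIA.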
Microsoft Copilot (Azure OpenAI)
Organizations using OpenAI models via Microsoft's Azure infrastructure benefit from Microsoft's enterprise data protection framework, including an EU Data Boundary option. This is a different legal configuration from direct OpenAI access – with its own advantages and limitations.
The GDPR Assessment: Where the Problems Lie
Legal basis
Every processing of personal data requires a legal basis under Art. 6 GDPR. When you enter employee data, customer data, or other personal information into ChatGPT, you must be able to justify why this is legally permissible.
Legitimate interest (Art. 6 para. 1 lit. f GDPR) is conceivable – but only where the processing is genuinely necessary, the individuals concerned have been informed, and their interests do not override yours. In practice, this is difficult to demonstrate for many ChatGPT use cases.
Data processing and DPA
When you use ChatGPT in a business context and process personal data in doing so, OpenAI acts as a data processor – and you need a DPA under Art. 28 GDPR. No such agreement exists for ChatGPT Free or Plus. For Enterprise and the API, one is available – but you need to actively conclude it and carefully review what it actually covers.
Without a DPA, use involving personal data is simply not permissible. This applies to every SaaS application – ChatGPT is no exception.
Data transfer to the US (CLOUD Act and more)
OpenAI is a US company. All data you send to ChatGPT remains subject to US law – even where OpenAI uses European data centers for certain products, because legal control remains with a US entity.
This means: the CLOUD Act, FISA Section 702, and all other US access rights potentially apply to your ChatGPT data too. The EU-US Data Privacy Framework (DPF) mitigates this risk – but as explained in Article 3 of this series, it is not a stable foundation. An appeal against the DPF is still pending, and Schrems III cannot be ruled out.
OpenAI relies on Standard Contractual Clauses (SCCs) for EU transfers – with the well-known limitations against US government access.
Here you can read more about German vs. US Servers.
Transparency towards data subjects
When you enter personal data belonging to third parties – customers, partners, employees – into ChatGPT, those individuals must be informed. Arts. 13 and 14 GDPR require that data subjects know who is processing their data, for what purpose, and for how long.
In practice, this information is almost always absent. Anyone asking ChatGPT to summarize a customer conversation without having informed that customer is in breach of GDPR transparency obligations.
Purpose limitation and data minimization
ChatGPT in the free version trains models based on user input. Even if you disable this in the settings, the question remains: for what purpose was the personal data originally collected – and is passing it to ChatGPT compatible with that purpose?
An example: a customer shares their contact details to request a quote. They did not consent to their data being used to improve a US AI model. Passing it to ChatGPT would be incompatible with the original purpose.
What Supervisory Authorities Are Saying
European data protection authorities have increasingly engaged with ChatGPT and generative AI tools over recent years:
Italy (Garante)
In March 2023, the Garante temporarily banned ChatGPT and compelled OpenAI to make improvements – including better transparency, consent mechanisms, and age verification. This was the first formal intervention by an EU authority against OpenAI and sent a signal felt across the continent.
France (CNIL)
The CNIL launched multiple investigations into ChatGPT and has publicly addressed the requirements that apply to AI systems under the GDPR. France is among the most active data protection authorities in Europe when it comes to generative AI.
Germany (DSK and state authorities)
The German Data Protection Conference (DSK) has set out clear requirements for the use of AI tools in businesses in its AI guidance: data protection impact assessments, internal policies, works council involvement, and clear purpose limitation. These requirements apply to ChatGPT just as they do to any other AI tool.
EDPB (European Data Protection Board)
The EDPB established a ChatGPT task force and developed guidelines to enable coordinated supervision of OpenAI across Europe. The result: a Europe-wide coordinated approach that keeps OpenAI under sustained regulatory scrutiny.
The EU AI Act: A New Dimension
The EU AI Act has been in force since August 2024 – with staggered application dates through to 2027. For organizations using AI tools like ChatGPT, this creates new obligations:
Generative AI models like GPT-4 are classified as General Purpose AI (GPAI) and are subject to specific transparency and documentation requirements. Providers must supply technical documentation, demonstrate copyright compliance, and publish summaries of training data.
For organizations as users, this means: internal use of GPAI systems should be documented – particularly in high-risk contexts such as HR, recruiting, or the processing of sensitive data. In this context, the DSK explicitly recommends a data protection impact assessment before introducing such systems.
The EU AI Act and the GDPR do not operate in isolation but complement each other: organizations that take GDPR requirements seriously are well prepared for the AI Act – and vice versa.
Special GDPR Caution: ChatGPT and Meeting Content
One use case deserves particular attention: using ChatGPT to process meeting content. It sounds practical – paste in a meeting transcript, generate a summary. But this is precisely where data protection risks accumulate:
Meeting transcripts almost always contain personal data. Names, roles, opinions, assessments, sometimes salary information, health details, or sensitive strategic information. All of this goes to OpenAI – without the meeting participants knowing or having consented.
There is no DPA for ad hoc use. Anyone using ChatGPT Free or Plus and entering meeting content is doing so without the Data Processing Agreement that Art. 28 GDPR requires for this processing relationship.
Purpose limitation is violated. Meeting participants did not consent to their spoken words being used to improve a US AI model.
There is no deletion guarantee. When and whether OpenAI deletes entered data is – depending on the product version and configuration – not transparent enough to support robust GDPR compliance.
This is one of the key reasons why specialized AI meeting assistants like Sally.io exist: they solve the same problem – structuring, summarizing, and making meeting content useful – but with a data protection architecture built for exactly this purpose from the ground up.
What You Can Do: Graduated Recommendations
6 steps to use ChatGPT in your organization
Step 1: Create a usage policy
Define internally which data may be entered into ChatGPT – and which may not. Personal data belonging to customers, partners, or employees should not be entered without a clear legal basis.
Step 2: Choose the right version
For business use involving personal data, only ChatGPT Enterprise or the API are appropriate – with a concluded DPA and training on customer data disabled.
Step 3: Conclude a DPA
Ensure a valid Data Processing Agreement is in place before any personal data is processed.
Step 4: Assess transfer risk
Document the legal basis on which transfer to the US takes place – DPF, SCCs, or other mechanisms – and which supplementary measures you have implemented.
Step 5: Involve the works council
If ChatGPT is integrated into work processes that touch on employee data or could influence behavior and performance, the works council has a co-determination right (§ 87 para. 1 no. 6 BetrVG).
Step 6: Review DPIA requirement
For systematic use of ChatGPT – particularly in HR, legal, or other sensitive areas – a Data Protection Impact Assessment under Art. 35 GDPR is typically required.
If you want to process meeting content with AI
Here the recommendation is clear: don't use a general-purpose tool like ChatGPT for this. Use Sally AI, a specialized AI meeting assistant with a GDPR-compliant architecture, European data storage, and a DPA that specifically covers this processing purpose.
FAQ – Frequently Asked Questions
Are we allowed to use ChatGPT at work at all?
Yes – but with clear limits. For use without personal data (e.g. general text drafting or brainstorming without any customer reference), ChatGPT is unproblematic in many cases. As soon as personal data is involved, you need a legal basis, a DPA, and a transparent usage policy.
Is ChatGPT Enterprise GDPR-compliant?
It is the most robust version of ChatGPT from a data protection perspective – but it is not a complete solution. You have a DPA, no training on your data, and more control. But OpenAI remains a US company with CLOUD Act exposure, and the transfer to the US remains a risk that needs to be documented.
What happens if employees use ChatGPT without the company's knowledge – so-called shadow IT?
This is a real and common problem. If employees enter personal data into ChatGPT without oversight, your organization remains liable as the data controller. A clear usage policy, technical access restrictions, and training are the most important countermeasures.
Does OpenAI now have its headquarters in the EU?
No. OpenAI has European offices – including in Dublin – but the company's registered office and legal control remain in the United States. CLOUD Act exposure exists regardless of where OpenAI maintains offices.
Is it enough to disable training in the ChatGPT settings?
It is an important step – but not a sufficient one. Disabling training addresses only one of several data protection aspects. Legal basis, DPA, transparency towards data subjects, and transfer risk are all unaffected by this setting.
Do we need a DPIA for using ChatGPT?
Very likely yes, as soon as you systematically integrate ChatGPT into work processes that involve personal data. Particularly in HR, legal, customer service, or similar areas, the conditions for a mandatory DPIA under Art. 35 GDPR are typically met.
Conclusion: Not Forbidden – But Not Simple
Using ChatGPT in a GDPR-compliant way is possible – but it requires effort, the right product version, clear internal rules, and an honest engagement with the transfer risks involved. Anyone who skips this work and simply starts using it is building on a legally unstable foundation.
For general AI assistance tasks without any personal data involved, ChatGPT is a powerful tool. For processing meeting content, employee data, or customer information – precisely the contexts where AI promises the greatest value in everyday business – specialized, GDPR-compliant alternatives are the safer choice, and in practice often the more practical one too.
PS: Sally AI - GDPR-Compliant Meeting Automation
ChatGPT can do many things. But it was not built for the GDPR-compliant processing of meeting content in European organizations. Sally AI was.
- No US corporate background – no CLOUD Act exposure
- Servers in Germany – all data stays in the EU
- DPA included, full sub-processor transparency
- No training on customer data – contractually guaranteed
- Automatic consent notification for meeting participants
- Works council support and works agreement assistance
- SOC 2 in preparation | ISO 27001 in preparation | BSI alignment
You want the benefits of AI-powered meeting documentation – without the data protection risks of a US general-purpose tool?


Try meeting transcription now!
Experience how effortless meeting notes can be – try Sally free for 4 weeks. No credit card required.


