ChatGPT & Data Protection: What Are The Consequences Of Using The Chatbot?

ChatGPT is igniting curiosity about artificial intelligence worldwide. Around 500 million people interact with the chatbot every week. But along with the hype come concerns among data protection advocates. How secure is ChatGPT in terms of data protection, and should companies forgo the AI tool?

Supervisory authorities around the world are currently investigating data protection issues related to ChatGPT. The investigations were triggered by a decision of the Italian data protection authority (GPDP), which found that there was no legal basis for collecting user data from “conversations” with the chatbot.

Furthermore, users are not adequately informed about the processing of their data, and adequate protection is not ensured when processing data from minors. In addition, a data breach allegedly occurred involving data from “conversations” and payment information. The authority subsequently banned the use of the chatbot in Italy.

We summarize the current global status and the most important facts about ChatGPT and data protection.

What is ChatGPT, and who owns it?

ChatGPT stands for Chat Generative Pre-trained Transformer and is a language-based application. This means users can communicate with ChatGPT via text input and have it generate answers to their questions.

How ChatGPT works

The chatbot is based on deep learning, a branch of machine learning. Put simply, ChatGPT is an artificial neural network, loosely modeled on the human brain, that can learn to understand text and make certain decisions.

With each conversation, the chatbot learns and is thus able to give human-like answers.
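The generation principle behind such a model can be illustrated with a deliberately tiny sketch: pick the most likely next token given the text so far, append it, and repeat. The bigram table below is a toy stand-in; the real system uses a transformer over a context of thousands of tokens.

```python
# Toy sketch of autoregressive text generation. The BIGRAMS table is an
# illustrative assumption, not real model data: it maps each token to the
# probabilities of possible next tokens.
BIGRAMS = {
    "how": {"are": 0.6, "is": 0.4},
    "are": {"you": 0.9, "we": 0.1},
    "you": {"today": 0.7, "doing": 0.3},
}

def generate(start: str, max_tokens: int = 3) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1])
        if not choices:
            break  # no known continuation for this token
        # Greedy decoding: always take the highest-probability next token.
        tokens.append(max(choices, key=choices.get))
    return tokens

print(generate("how"))  # ['how', 'are', 'you', 'today']
```

Real chatbots sample from the probability distribution instead of always taking the maximum, which is one reason the same question can yield different answers.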

ChatGPT is the chatbot of OpenAI LP, a former research company that is now a for-profit entity controlled by the non-profit organization OpenAI Inc.

The company was funded by Microsoft and Elon Musk, among others. Musk has since distanced himself from OpenAI. Supporters include Amazon Web Services, Infosys Technologies, and the co-founders of LinkedIn and PayPal.

ChatGPT & Privacy: What’s the problem?

To continually improve, ChatGPT accesses millions of texts and information freely available on the internet. The AI also uses user input from both private individuals and companies for training.

It cannot be ruled out that personal and sensitive information may also be processed, as was the case in Italy, where a data protection breach in 2023 led to the disclosure of personal information in other people’s chats.

Flashback: Why did Italy ban ChatGPT?

After the data breach in Italy was uncovered, the country’s data protection authority accused the provider OpenAI, among other things, of violating the data protection principles of purpose limitation and data minimization, processing data without a legal basis, and failing to adequately safeguard the rights of data subjects.

In order to lift the ban on data processing via ChatGPT in Italy, the supervisory authority required OpenAI to take a number of measures. The provider had to:

  • publish a privacy notice that describes, among other things, the logic of the data processing required to operate ChatGPT
  • introduce an age-verification mechanism
  • establish a clear legal basis, whereby, in the view of the Italian authority, only user consent or a legitimate interest comes into question
  • take precautions to enable the exercise of data subjects’ rights, such as the right to erasure or access, even for non-users
  • inform Italian citizens about data processing for AI training purposes through an information campaign via radio, television, newspapers, and the internet

On April 28, 2023, ChatGPT was allowed to be used again in Italy. According to the Italian regulator, OpenAI restored the service with improved transparency and expanded rights for data subjects, in accordance with data protection requirements.

How do data protection authorities assess ChatGPT?

Data protection advocates around the world shared the concerns of the Italian supervisory authority at the time and called for similar measures. The responsible state data protection authorities demanded extensive information for their review.

However, due to its complexity, assessing artificial intelligence from a data protection perspective is challenging. For a data protection assessment of ChatGPT, the authorities demanded, among other things:

  • information about the data sources,
  • information about the algorithms behind the automated data processing, and
  • clarity about whether data is shared with third parties with commercial interests.

In 2023, German data protection authorities examined ChatGPT and data protection in detail. In its 2025 activity report, the Lower Saxony State Commissioner for Data Protection provided a data protection assessment of ChatGPT and concluded, among other things, that the rights of data subjects can only be implemented to a limited extent.

The most important facts about ChatGPT and data protection

The General Data Protection Regulation (GDPR) served as the basis for the data protection review. When personal data is processed in ChatGPT, various data protection challenges arise. The following overview summarizes the most important ones.

Consent to data processing

The GDPR permits the processing of personal data only if there is a legal basis pursuant to Art. 6 (1) GDPR. For example, the processing must be necessary to fulfill contractual obligations, or it must be based on a legitimate interest of the operator that is not overridden by the interests of the data subjects.

Without another legal basis under Art. 6 (1) GDPR, processing may only take place with the consent of the data subject. This requires transparent information to the data subject about the data processing and its consequences.

Because an AI like ChatGPT is considered a black box in terms of data protection, companies generally cannot provide detailed information about data processing. Effective consent is therefore virtually impossible.

Transparency about data processing

The GDPR stipulates that a data subject’s data must be processed in a manner that is transparent to them. The lack of transparency about how the AI model behind ChatGPT works conflicts with this requirement.

Nevertheless, companies must comply with their information obligations under Articles 13 and 14 GDPR. According to Article 13 GDPR, they must inform the data subject in a comprehensible and easily accessible manner about the data processing and the functionality of the AI used. This includes information about the scope of data processing, the legal basis, and the recipients.

Protection of the rights of data subjects

Data subjects must be informed of their data protection rights. According to the GDPR, companies using ChatGPT must also comply with the rights of data subjects and, for example, be able to delete processed data upon request. However, this can be problematic if it is unclear how the processing takes place and how personal data is stored. Typically, at least prompts and outputs are stored for some time, so the companies responsible must be able to provide information about them and, if necessary, delete them.
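What serving access and erasure requests requires in practice can be sketched minimally: the company keeps its own record of which prompts and outputs belong to which data subject, so it can export or delete them on demand. The class and method names below are illustrative assumptions, not an OpenAI API.

```python
from dataclasses import dataclass, field

@dataclass
class ChatLogStore:
    """Illustrative in-memory record of prompts/outputs per data subject.
    A real deployment would use a database with retention deadlines."""
    records: dict[str, list[tuple[str, str]]] = field(default_factory=dict)

    def log(self, user_id: str, prompt: str, output: str) -> None:
        self.records.setdefault(user_id, []).append((prompt, output))

    def export(self, user_id: str) -> list[tuple[str, str]]:
        # Right of access (Art. 15 GDPR): return everything stored.
        return list(self.records.get(user_id, []))

    def erase(self, user_id: str) -> int:
        # Right to erasure (Art. 17 GDPR): delete all entries for this
        # data subject and report how many were removed.
        return len(self.records.pop(user_id, []))
```

The point of the sketch: without such a mapping from data subject to stored chats, a company cannot answer an access request at all, let alone an erasure request.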

OpenAI’s role in data protection

Commercial ChatGPT users can integrate the chatbot into their internal processes. Under data protection law, they are then responsible for the data processing. Since OpenAI has access to the company’s data in this configuration, the data processing agreement (DPA) offered by OpenAI should be concluded. Only then can a company ensure that its data is used solely on its behalf and not, for example, for training the AI model.

Data transfer to a third country

The chatbot is offered by the US company OpenAI. However, contracts between OpenAI and EU customers are often concluded with the Irish company, OpenAI Ireland Ltd., so no direct data transfer to the US company takes place. The US company is nevertheless listed, along with other affiliated companies, as a sub-processor of the Irish entity. It is therefore still necessary to check whether OpenAI has appropriate data protection guarantees and contracts in place to ensure an adequate level of protection for data transferred to OpenAI-affiliated companies or other sub-processors.

How can companies use ChatGPT as securely as possible?

ChatGPT offers many benefits for users, but the chatbot can also cause harm. Criminals can, for example, use the tool to enhance their phishing attacks. Furthermore, there is considerable uncertainty regarding ChatGPT’s data protection. As long as the algorithm behind the chatbot remains a black box, private users and companies should use the application with extreme caution. There is currently no secure way to use ChatGPT and remain GDPR compliant. However, users of the AI can take precautions to reduce the risk of a data breach as much as possible.
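One concrete precaution is to strip obvious personal data from prompts before they leave the company. The sketch below shows the idea with a few regular expressions; the patterns are assumptions for illustration, and real deployments need far more robust detection (names, addresses, customer IDs, and so on).

```python
import re

# Illustrative redaction patterns — an assumption for this sketch, not a
# complete PII detector. Each match is replaced with a labeled placeholder
# before the prompt is sent to an external chatbot.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable personal data with placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or +49 170 1234567"))
# Contact [EMAIL REDACTED] or [PHONE REDACTED]
```

Redaction of this kind reduces, but does not eliminate, the risk: free-text prompts can still contain personal data no pattern catches, which is why the caution above applies regardless.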

Stay up-to-date with newsletters and downloads

Companies considering using ChatGPT or already taking their first steps with the tool should actively stay informed about further developments and regulatory review. Our Proliance newsletter regularly informs you about the discussion surrounding ChatGPT and data protection.

We also offer guides and checklists for background knowledge on artificial intelligence and the safest possible use of AI tools such as ChatGPT.

Outlook for ChatGPT in Europe

There is still considerable uncertainty regarding data protection and ChatGPT. Nevertheless, schools, universities, and private individuals continue to make heavy use of the chatbot, as do companies. The European Union’s AI Regulation now provides binding guidelines for the use of ChatGPT in companies.

The AI Act harmonizes the rules for AI systems across EU member states. The regulation classifies AI applications into four risk categories: the higher the risk level, the stricter the requirements.

Despite the AI Regulation, it remains the responsibility of companies to comply with data protection regulations when using AI tools such as ChatGPT. It is important to train employees in the use of artificial intelligence and establish guidelines for AI use. Furthermore, companies should regularly review, together with their data protection officer, whether their data protection measures are up to date and comply with current legal requirements.