The 10 AI commandments for legally compliant handling of ChatGPT, DALL-E & Co.

(For details - also on upcoming AI regulations - see the publications, FAQ and presentations at chatgpt-recht.de / dall-e-recht.de)

By Prof. Dr. Thomas Wilmer

1. Do not present the AI results as your own work results/achievements

Just because AI providers hold no copyright in AI results does not mean that users become authors of those results. If you are paid to create something personally, passing off AI results as your own work can constitute fraud. In examinations, it is likewise a serious offence.

Moreover, AI results may infringe the rights of those whose content was used as AI training data. In particular, it should be examined critically whether such "newly" created images are too similar to the originals by well-known artists.

 

2. Do not assume that the AI is perfect and produces error-free results.

AI results are not error-free. ChatGPT is astonishingly good, but it also produces flawed results. AI is only as good, correct and non-discriminatory as the underlying data and algorithms. Particularly for questions on specialised scientific subjects this cannot be otherwise, given the lack of access to subject databases; in some cases ChatGPT even invents source references. If AI is used for commercially relevant purposes, you should therefore always cross-check the results and, above all, be aware that recourse against the AI provider could be difficult, both because of its terms and conditions and because of the legal ambiguity about the AI's intended "target state".

 

3. Do not feed the AI with sensitive content - the AI does not only give, it also takes!

When you send queries as prompts or upload documents, you may be revealing a great deal to the AI provider about your business, health status, political leanings, and more. For companies: once you give your know-how to the AI, it is no longer protected under trade secrets law. Incidentally, this also applies to translation tools and other specialised AI assistants.

 

4. Consider whether you want to give your database/website to the AI... use opt-out options.

Openly accessible content on your website can be read by the AI. Access by the AI to your website can possibly be legally prohibited by a machine-readable objection in the source code, but you can de facto prevent the readout only by using bot blocks and opt-out markers. From this author's point of view, it is recommended to include the following wording in the meta name="robots" tag: "The site owner declares a reservation of use in accordance with Section 44b (3) of the Copyright Act on Text and Data Mining (UrhG) and according to Art. 4 Para. 3 of the EU DSM Directive."
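As a sketch, the recommended reservation of use could be placed in the page head as follows. The exact wording follows the author's recommendation above; the additional `tdm-reservation` signal is an assumption based on the (non-binding) TDM Reservation Protocol and may not be honoured by every crawler:

```html
<head>
  <!-- Machine-readable reservation of use (text and data mining opt-out) -->
  <meta name="robots"
        content="The site owner declares a reservation of use in accordance with Section 44b (3) of the Copyright Act on Text and Data Mining (UrhG) and according to Art. 4 Para. 3 of the EU DSM Directive.">
  <!-- Additional, non-standardised opt-out signal (assumption: crawler supports the TDM Reservation Protocol) -->
  <meta name="tdm-reservation" content="1">
</head>
```

De facto blocking additionally requires bot rules, for example a robots.txt entry disallowing known AI crawlers such as OpenAI's GPTBot; whether a given crawler respects these signals depends on the provider.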

 

5. Do not use personal and trademark prompts carelessly.

You should not include any personal data or data of famous or non-famous personalities in prompts unless

  • consent has been given, or
  • it is for scientific, artistic or satirical purposes.

In particular, do not produce and circulate fake images that are difficult for third parties to recognise as fake, since you cannot know whether and how they will be used later.

When creating images, do not incorporate trademarks and logos of companies into the images. This is not always illegal, but it can still be problematic, especially if the images are redistributed.

 

6. Read the terms and conditions of AI use: Who owns the input and output?

At the moment, you are often still allowed to use the AI results without paying (extra) for the use or for individual results. Keep in mind that under most terms you pay with your input, which may also be used for further data analysis. Many AI providers initially entice you with free AI results and later, once you have become accustomed to using them or have geared your activity towards them, introduce new conditions. Keep this in mind when assessing your possible dependency on these systems.

 

7. Never use AI results thoughtlessly via API interfaces or otherwise automated as a host.

Anyone who automatically integrates AI and publishes third-party content (in this case AI results such as texts and images) via it runs several legal risks. The content may be false, offensive, illegal or otherwise problematic, which may be punishable or lead to injunctive relief and claims for damages. Even if you only host or automatically pass on the results, you may still be responsible under the German Telemedia Act and other laws, as well as under the principles of so-called "Stoererhaftung" (breach of a duty of care). In addition, there are obligations to inform site visitors under the GDPR (General Data Protection Regulation) and the TTDSG (the German Federal Telecommunications-Telemedia Data Protection Act).
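A minimal sketch of the point above: AI output fetched via an API should never flow straight to publication. The filter rules, function names and thresholds below are illustrative assumptions, not part of any specific framework or legal standard:

```python
# Illustrative gate between an AI API and publication.
# BLOCKLIST terms and the length threshold are hypothetical placeholders;
# a real deployment would combine a moderation service with legal review.

BLOCKLIST = {"defamatory_term", "protected_trademark"}

def requires_human_review(ai_text: str) -> bool:
    """Flag AI output for human review before it is published."""
    lowered = ai_text.lower()
    # Rule 1: simple keyword screening for known problem terms.
    if any(term in lowered for term in BLOCKLIST):
        return True
    # Rule 2: long texts carry more risk of invented facts or citations,
    # which cannot be detected automatically - route them to a human.
    if len(ai_text) > 2000:
        return True
    return False

def publish(ai_text: str) -> str:
    """Either queue the text for review or publish it with an AI label."""
    if requires_human_review(ai_text):
        return "queued for human review"
    # Labelling AI-generated content supports the transparency duties
    # mentioned in this article.
    return "published with AI-generated label"
```

The design point is the hard gate: automated pass-through of unreviewed third-party content is exactly what creates the liability risks described above.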


 

8. Never use AI results carelessly in the human resources area.

If AI is used in an automated manner in a company, it must not, under Art. 22 GDPR, lead to automated decisions that affect employees - especially in the HR area. Even outside automated decisions, the use of AI may require the consent of employees and, where applicable, co-determination by the works council. Take this into account when planning.

 

9. AI is not generally forbidden and not generally bad: Compliance

If you follow the rules on AI use, AI can be a very useful tool in education as well as in business. The use of AI should be carefully planned, and transparency is particularly important, both in the involvement of AI and in the utilisation of the results and the integration of prompts or API interfaces.

Therefore, a process for lawful AI implementation should be created in a company/institution, which contains clear milestones on at least the following points:

  • Involvement of the works council insofar as the use of AI would in any way be suitable for monitoring the performance of employees;
  • Information of the affected employees, customers, suppliers, sales partners;

  • Examining the tool to be used for this purpose;
    • who owns input and output according to its terms and conditions;

    • whether the technical structure of the tool (IP recording, connection of the software, processing of the data) complies with data protection requirements;
      • Implementation and documentation of a data protection impact assessment in accordance with Art. 35 GDPR (see the following points to be checked, among others);
      • Compliance with the requirements of Art. 5 GDPR (principles for data protection compliant processing, including data minimisation, purpose limitation);
      • Compliance with the requirements of Art. 25, 32 GDPR (technical and organisational measures, data protection-friendly technology design, including privacy by design and by default);

      • Conclusion of a controller-to-controller contract (usually not a commissioned processing contract, as the AI provider also pursues its own interests when processing the collected/transferred data);
      • In the case of international data transfer (esp. USA), review of data protection-compliant use (SCCs, i.e. the EU's standard contractual clauses, plus additional safeguards such as encryption and anonymisation; possibly new data transfer agreements with the USA in the future);

    • whether the tool is "fed" with personal data and whether this is justified under Art. 6 GDPR or § 26 BDSG (the German Federal Data Protection Act; if still in effect, observe developments, see commandment 10);
    • whether issues relating to the German IT Security Act / critical infrastructure are affected;
  • Continuous monitoring of compliance by the tool provider.

 

10. Keep up to date with legal developments.

New EU regulations concerning AI are pending; in addition, specific national requirements are under political discussion. Besides the draft AI Regulation (the centrepiece of the new rules), the draft AI Liability Directive and other regulations such as the Digital Services Act and the Digital Markets Act need to be considered. You should also keep yourself informed about the opinions of the European and national data protection supervisory authorities, as it is not yet certain what information will be disclosed, for example, by OpenAI on data processing (e.g. in response to the enquiry by the Hessian Commissioner for Data Protection and Freedom of Information, Prof. Dr. Alexander Roßnagel, on data processing by ChatGPT) and what consequences this will have for the permissibility of using ChatGPT. Violations of data protection when using AI can result in claims for damages under Art. 82 GDPR and high fines under Art. 83 GDPR.

So keep yourself up to date on the upcoming legal developments. Always use AI transparently and with consideration for the interests of all those affected. And don't put Nutella in the fridge.