ChatGPT for contracts? 10 points you must clarify before using it

Author:

Céleste Urech | Co-Founder & CTO · 5 minute read

Many teams are now testing ChatGPT for contract work as well, sometimes out of time pressure, sometimes out of curiosity. And yes: for general wording, initial structure ideas, or summarising non-confidential content, it can be helpful. But as soon as confidential client or company documents come into play, things get tricky. Then the question is no longer just "does it work?", but above all: am I allowed to do this? Is it secure? What happens to my data?

This article is a practical checklist of what you should clarify before using it, regardless of whether you work in a law firm or in an in-house legal department.

Why these questions are so important

Contracts are rarely “just text”. They contain confidential trade secrets, negotiation positions, strategic details, personal data, and often information subject to special confidentiality obligations. If such content ends up in a tool whose data flows and storage logic you cannot clearly explain, you create a risk that is hard to contain later, both organisationally and legally.

In addition, public AI tools are not built with legal accountability in mind. A contract cannot be reviewed "approximately correctly": even small wrong assumptions, such as an incorrect notice period, an overlooked liability exclusion, or the wrong choice of law, can have tangible consequences. That is why the benchmark for using legal AI should always be: control before convenience.

The 10 points you must clarify before using it

1) Which content is actually “confidential” and who defines that?

Before talking about tools, you need a clear internal definition: what counts as confidential for you? Is it only the client matter or the contract content itself? Or also metadata, names, amounts, counterparties, internal comments, annexes? In practice, secure AI use rarely fails because of technology, but because teams assess what is “probably fine” very differently.

2) Is data stored, and if so, where and for how long?

The key question is not whether a tool "does AI", but whether and how it stores data. Many systems log inputs at least temporarily, for example for error analysis; some retain content longer or store conversations in the account. Depending on the content, that alone can already be too much.

So clarify: is anything stored? Is there a history? Is there retention? And above all: can this be excluded contractually and technically? A good security page should answer these points transparently.

3) Where does processing take place?

Many people think of data protection only in terms of “where the data is stored”. At least as important is: where it is processed. This is especially relevant for third-country transfers, sub-processors, and cloud setups. Even if a provider says “EU”, there may be sub-service providers that make things more complicated.

A pragmatic rule: you should be able to explain in one sentence in which country data is stored, in which region it is processed, and whether data can be transferred to third countries or not. If you cannot do that, the risk for confidential contracts is usually too high.

4) Are contents used for training, or are “zero data retention” agreements possible?

This is the classic issue: "We do not use data for training" can mean many things. Some providers do not train models but still store logs. Others store nothing at all, process only in real time, and delete immediately. Still others exclude training but do not necessarily exclude retention in certain cases.

What you need is clarity on two levels:

  1. Training: are user data used to improve models?

  2. Retention: are inputs or outputs stored or logged anywhere?

This distinction is particularly important for law firms, because it is not only about “training”, but also about confidentiality and traceability.

5) What about professional secrecy and legal privilege?

If you work in an environment where professional secrecy, legal privilege, or comparable protection obligations apply, you need particularly strict standards. The issue is less about AI itself and more about the lack of control over third parties: who could have access? What data flows exist? What logs are created?

A good rule of thumb is: if you cannot operate a tool in such a way that content is neither accessible to third parties nor stored permanently, it is usually unsuitable for client documents. And even if it is technically possible, you still need a clear internal policy so that this does not depend on individual judgement.

6) Which technical safeguards actually exist?

“Encrypted” is easy to say. What matters is: what exactly is encrypted and when? How is access managed? Are there roles and permissions? Is there tenant separation (multi-tenant vs single-tenant)? Can an admin view content?

You do not need to be able to assess every detail yourself, but you should ask the right questions and get reliable answers. In trustworthy legal AI setups, these topics are documented transparently or can be clearly evidenced on request.
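To make these questions concrete, it can help to think of access control as something testable rather than a marketing claim. The following is a minimal sketch in Python of a role-and-permission check; the roles, permission names, and the restriction on admins are illustrative assumptions, not the access model of any particular product.

    # Minimal sketch of a role/permission check (illustrative assumptions only).
    from enum import Enum

    class Role(Enum):
        ADMIN = "admin"      # manages users and settings
        LAWYER = "lawyer"    # works on contract content
        AUDITOR = "auditor"  # reads audit logs, never contract content

    # Note that the admin role deliberately has no "read_contract" permission.
    PERMISSIONS = {
        Role.ADMIN: {"manage_users", "view_audit_log"},
        Role.LAWYER: {"read_contract", "annotate_contract"},
        Role.AUDITOR: {"view_audit_log"},
    }

    def can(role: Role, action: str) -> bool:
        """Return True if the given role is allowed to perform the action."""
        return action in PERMISSIONS.get(role, set())

    # The question "can an admin view content?" becomes a testable property:
    assert can(Role.LAWYER, "read_contract")
    assert not can(Role.ADMIN, "read_contract")

If a provider can show you something equivalent, the questions above stop being abstract and become verifiable.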

7) What happens in case of errors and how do you ensure quality?

ChatGPT can sound convincing and still be wrong. That is why you need a quality mechanism that works independently of the tool. In practice, a “requirement to cite” has proven useful: every relevant statement must refer to a specific text passage. If a tool does not support clear citation logic, verification becomes cumbersome, increasing the risk that errors slip through.
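What a "requirement to cite" can look like in practice: the short Python sketch below checks that every quoted passage in an AI answer actually appears in the source contract. The answer format with "claim" and "quote" fields is an assumption made for illustration, not the output format of ChatGPT or any specific tool.

    # Minimal sketch: flag AI statements whose cited passage cannot be found
    # verbatim in the contract text (answer format is an illustrative assumption).
    def verify_citations(contract_text: str, answers: list[dict]) -> list[dict]:
        """Return the answers whose quoted passage is missing from the contract."""
        unverified = []
        for answer in answers:
            quote = answer.get("quote", "").strip()
            # An empty quote, or a quote that does not occur in the contract,
            # means the statement cannot be traced back to the text.
            if not quote or quote not in contract_text:
                unverified.append(answer)
        return unverified

    contract = "The agreement may be terminated with three (3) months' written notice."
    answers = [
        {"claim": "The notice period is three months.",
         "quote": "terminated with three (3) months' written notice"},
        {"claim": "Liability is capped at CHF 100,000.",
         "quote": "liability is capped at CHF 100,000"},  # not in the contract
    ]
    for a in verify_citations(contract, answers):
        print("Needs human review:", a["claim"])

Everything the check flags goes back to a human reviewer; everything it passes is at least traceable to a concrete passage.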

You should also clarify how uncertainty is handled: is ambiguity flagged? Are open points documented as questions? Or does a seemingly finished answer emerge that is accepted as “correct” too early?

8) Are employees even allowed to use the tool and how do you prevent shadow IT?

Many problems do not arise from one big decision, but from 20 small ones: an associate quickly uploads a document, an intern tests a prompt, someone uses a private account. This happens especially when there is no official tool or the rules are unclear.

If you want to use AI sensibly, you must enable innovation, but in a controlled way. That includes an official process: what is allowed, what is not, which tools are approved, and how this is communicated. A short internal guideline, one page, is often more effective than a 20-page policy document.

9) What does the legal and contractual basis look like?

This is where it gets formal, but important. If you want to use an AI tool professionally, you usually need a clean contractual basis, including data processing agreements, technical and organisational measures, sub-processor lists, and clear statements on data processing.

This is not bureaucracy for its own sake. It is the foundation that allows you to explain, if necessary, why the use was responsible. If your setup does not support this, it is more of an experiment than a process.

10) What is your “safe use case” to start with?

Not everything has to be “full AI” immediately. Many teams sensibly start with a limited, low-risk use case. For example: first with anonymised or non-confidential documents, then with standard contracts without particularly sensitive annexes, and only later with complex client documents.

Define a clear pilot phase for this: one contract type, one team, one timeframe, one quality standard. This reduces risk and increases the chance that AI actually becomes productive in everyday work.

What you can take away from this

If you have a clear picture after these 10 points, the decision is usually straightforward: either you use a tool only for non-critical content, or you need a solution that is built for confidential contracts. In practice, this is often the point where teams move away from “public” tools and towards controlled, closed legal AI environments.

If you want to dive deeper into storage, processing, retention, and training, it makes sense to check out our security page or contact us.

Conclusion

ChatGPT is impressive, but when it comes to confidential contracts, it is not the best wording that matters, but the best control. If you answer the 10 points above properly, you have a solid basis for a safe decision: either use it consciously in a limited way, or switch to a setup that professionally addresses confidentiality, governance, and data flows.

You can try our CASUS Legal-AI-Associate for free here.

Short FAQ

“Am I allowed to upload contracts to ChatGPT?”

This depends not only on laws, but above all on your internal policy, the protection requirements of the data, and the actual data flows of the tool. Without clarity on storage, retention, and access, it is usually not a good idea for confidential documents.

“Is anonymisation enough?”

Anonymisation can help, but in practice it is error-prone. Details such as the industry, amounts, counterparties, or project names can often still be inferred indirectly. If the content is truly sensitive, a controlled setup is usually the better option.
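A small Python sketch of why pattern-based anonymisation falls short; the redaction patterns and the example clause are illustrative assumptions.

    # Minimal sketch: simple pattern-based redaction and its limits.
    import re

    PATTERNS = [
        (re.compile(r"\b[A-Z][a-z]+ (AG|GmbH|Ltd|Inc)\b"), "[COMPANY]"),
        (re.compile(r"\bCHF\s?[\d'.,]+"), "[AMOUNT]"),
    ]

    def redact(text: str) -> str:
        """Replace obvious identifiers (company names, amounts) with placeholders."""
        for pattern, placeholder in PATTERNS:
            text = pattern.sub(placeholder, text)
        return text

    clause = ("Example AG licenses its battery-cooling patent portfolio "
              "to the counterparty for CHF 2'400'000 per year.")
    print(redact(clause))
    # -> "[COMPANY] licenses its battery-cooling patent portfolio
    #     to the counterparty for [AMOUNT] per year."
    # The name and the amount are gone, but the industry, the subject matter,
    # and the deal structure still make the transaction easy to identify.

The obvious identifiers disappear, yet the clause remains recognisable, which is exactly the problem with relying on anonymisation alone.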

“What is the most important point?”

If you only check one thing, check retention and storage, plus training. In many cases, that determines whether use is acceptable at all.
