CASUS Blog

Using Generative AI in Legal Work: A Compliance Checklist

by Mathias Ringler | Founder's Associate

Generative AI is moving steadily into the daily work of law firms and in-house legal teams - from contract analysis and case law research to document review. With that comes a concrete set of compliance questions: Which data protection rules apply? Who is liable if an AI output contains an error? And how can AI use be organised internally so it holds up under scrutiny? This checklist provides a structured overview.

What "generative AI legal compliance" means

"Generative AI legal compliance" refers to the legal, technical, and organisational requirements that law firms and companies must meet when using AI systems in legal work. It is not one single law, but the overlap of several areas: data protection, liability, professional conduct rules, and - where applicable - sector-specific regulation.

For Swiss organisations, two frameworks are particularly relevant: the revised Swiss Federal Act on Data Protection (FADP), in force since September 2023, and the EU General Data Protection Regulation (GDPR), which can apply whenever personal data of individuals in the EU is processed. The emerging requirements of the EU AI Act add another layer, classifying certain AI applications as high-risk.

Why this matters especially for law firms

Lawyers operate under specific professional obligations: client confidentiality, duty of care, and responsibility for the accuracy of legal advice. Generative AI systems produce outputs that can sound convincing while being factually wrong - a phenomenon commonly called "hallucination".

There is also the question of where client data goes when it is fed into an AI tool. Many providers store inputs for model improvement or route them to data centres outside Switzerland and the EU. Anyone entering confidential contract data or personal information needs to know what happens to it.

Liability is a complex issue. If a flawed AI output is passed on to a client without review, it can trigger liability for professional negligence, misrepresentation, or breach of duty. Who bears that liability depends on several factors: whether the tool's terms of use were followed, whether the tool was still in a development phase, and whether the error originated in the system or in how it was applied.

Compliance checklist: generative AI in legal practice

The following checklist is aimed at law firms and in-house legal teams that already use AI tools or are planning to introduce them.

Data protection and data residency

  • Is personal data or confidential client information being entered into the AI system? If so, the applicable legal basis under FADP/GDPR must be established.

  • Where are the provider's servers located? Data transfers to the US are subject to heightened requirements under current Swiss and EU law.

  • Does the provider store inputs permanently or use them for model training? Zero-data-retention policies must be checked explicitly in the provider's documentation.

  • Is there a data processing agreement (DPA) with the AI provider?

Outputs and verification

  • Are AI-generated outputs - legal assessments, clause suggestions, research results - reviewed by a qualified person before further use?

  • Is there an internal process to prevent hallucinated citations or inaccurate statements from reaching pleadings or client advice unreviewed?

  • Are staff aware of the limitations of generative AI outputs?

Internal policies and governance

  • Is there a written AI usage policy that specifies which tools may be used for which tasks?

  • Is it clear who within the organisation can approve AI tools for use?

  • Are AI usage logs retained for audit purposes? (A minimal example of such a log follows this list.)
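
What such a log needs to contain is not prescribed anywhere. The sketch below is a minimal illustration in Python, assuming a simple JSON-lines file; the field names (user, tool, matter reference, reviewer) are chosen for illustration rather than taken from any standard, and the retention location and period should follow the firm's own policy.

  import json
  from datetime import datetime, timezone
  from pathlib import Path

  # Illustrative location; where and how long records are kept should follow the firm's policy.
  LOG_FILE = Path("ai_usage_log.jsonl")

  def log_ai_usage(user: str, tool: str, matter_id: str, purpose: str,
                   output_reviewed_by: str | None = None) -> None:
      """Append one audit record per AI interaction as a JSON line."""
      record = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "user": user,                              # who used the tool
          "tool": tool,                              # which approved tool was used
          "matter_id": matter_id,                    # internal matter or file reference
          "purpose": purpose,                        # e.g. "contract review", "case law research"
          "output_reviewed_by": output_reviewed_by,  # qualified reviewer, or None while pending
      }
      with LOG_FILE.open("a", encoding="utf-8") as f:
          f.write(json.dumps(record, ensure_ascii=False) + "\n")

  # Example: record a research query whose output has not yet been reviewed.
  log_ai_usage("m.muster", "legal research tool", "2025-0142", "case law research")

A record like this answers the audit questions that matter in practice: who used which tool, in which matter, for what purpose, and whether a qualified person has reviewed the output.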

Copyright and intellectual property

  • Has the question of copyright ownership in AI-generated outputs been clarified with the provider? In many jurisdictions - including under a US ruling from March 2025 (Thaler v. Perlmutter, DC Circuit) - AI-generated material without a human creative contribution cannot be copyrighted.

  • Is there a risk that the AI system reproduces third-party copyrighted material in its outputs?

Professional conduct requirements

  • Are the AI tools being used compatible with professional duties under the applicable rules of professional conduct?

  • Are clients informed when AI tools are used in their matter? A disclosure obligation may apply in certain situations.

How a specialised legal AI platform changes the picture

Generic AI tools - general-purpose language models - are not built for the specific compliance requirements of legal work. CASUS, a Swiss legal AI platform, is built for Swiss law firms and in-house legal teams, with concrete features that map directly onto the checklist above.

The platform is hosted in Switzerland and the EU, with no data transfer to the US. There is no permanent data retention (Zero Data Retention) and no human access to submitted documents (No Human Review, Abuse Monitor opt-out). These are not marketing claims but technical and contractual commitments, set out in the platform's security and data residency documentation.

For the verification step - a core element of any AI compliance strategy - CASUS produces structured, source-based outputs. The Legal Research mode draws on over 660,000 cantonal and federal court decisions, as well as statutory law. Results are linked to specific sources and shown as inline previews within the chat interface, without needing to click through to external documents. That makes verification by the responsible lawyer substantially more efficient than when a general language model delivers a citation without any traceable source.
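
To make "structured, source-based output" concrete, the following is a generic sketch - not CASUS's internal data model - of how an answer can carry traceable references that the responsible lawyer can check before it goes any further:

  from dataclasses import dataclass, field

  @dataclass
  class SourceReference:
      citation: str            # e.g. the case number of a federal or cantonal decision
      court: str               # issuing court
      passage: str             # the quoted passage shown as a preview
      url: str | None = None   # link to the decision, where available

  @dataclass
  class ResearchAnswer:
      question: str
      answer: str
      sources: list[SourceReference] = field(default_factory=list)

      def is_verifiable(self) -> bool:
          # An answer without at least one traceable source should not reach
          # a pleading or client advice unreviewed.
          return len(self.sources) > 0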

The AI Data Room - the module for parallel analysis of large document sets - allows targeted extraction of data protection-relevant fields: personal names, email addresses, IDs, and sensitive data categories such as health or banking data. This supports compliance reviews and anonymisation workflows without manual document-by-document review.
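
The underlying technique is field extraction over document text. The sketch below is a deliberately simplified illustration using regular expressions for a few identifier types (e-mail addresses, Swiss AHV numbers, IBANs); it is not the AI Data Room implementation, and a real anonymisation workflow needs far broader coverage than patterns like these:

  import re

  # Illustrative patterns only - personal names, health data, and other sensitive
  # categories cannot be caught reliably with simple regular expressions.
  PATTERNS = {
      "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
      "swiss_ahv_number": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),
      "iban": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{1,4}){3,8}\b"),
  }

  def flag_personal_data(text: str) -> dict[str, list[str]]:
      """Return candidate personal-data fields found in a document, grouped by type."""
      return {label: pattern.findall(text) for label, pattern in PATTERNS.items()}

  sample = "Contact anna.muster@example.ch, AHV 756.1234.5678.97, IBAN CH93 0076 2011 6238 5295 7"
  print(flag_personal_data(sample))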

Gaps in the regulatory framework

Even though the FADP and GDPR already apply, regulatory gaps remain. The EU AI Act entered into force in August 2024 and its obligations apply in stages; it will impose high-risk requirements on certain AI applications in the legal sector - covering transparency, human oversight, and risk assessment. Swiss organisations that provide services to EU clients can fall within its scope.

Internationally, there is ongoing debate about who owns AI-generated content and whether existing liability rules are adequate for autonomous systems. These questions are not yet resolved. According to the TrustArc Global Privacy Benchmarks Report 2025, 53 percent of organisations still rely on manual processes to manage privacy activities, and 62 percent of those teams report being behind schedule on regulatory requirements.

That figure points to something straightforward: compliance is not a one-time project, especially when regulation is moving as fast as it currently is.

Getting started in practice

For those who want to use AI tools in legal work without unnecessary compliance exposure, a practical starting point looks like this:

  1. Take stock of tools already in use - including informal use by staff.

  2. Clarify the data protection basis: legal ground, data flows, provider documentation.

  3. Adopt a written AI usage policy.

  4. Train staff on the limitations of generative AI outputs.

  5. Evaluate specialised legal AI tools built from the ground up for legal requirements and data protection compliance.

CASUS is a starting point for that last step. The AI Chat module answers document questions with linked source passages, the Benchmark workflow checks contracts against standards like NDA, SPA, or DPA, and Risk & Quality Review identifies risks and proposes redrafting options directly in Word. The platform can be tested at app.getcasus.com/signup.

FAQ

What does "generative AI legal compliance" mean?

Generative AI legal compliance refers to the full set of legal, technical, and organisational requirements that apply when using generative AI systems in legal work. This includes data protection (FADP/GDPR), liability, professional conduct obligations, and - increasingly - requirements from the EU AI Act.

Can a law firm enter client data into an AI tool?

It depends on the tool and the legal basis. The key questions are: Where is the data processed? Does the provider store inputs? Is there a DPA in place? For GDPR- and FADP-compliant use, firms should choose tools that host data in Switzerland or the EU, apply zero data retention, and have no human access to the inputs.

Who is liable if an AI output contains an error that feeds into client advice?

Liability typically sits with the user, not the AI provider - similar to relying on a flawed research result. Law firms are responsible for reviewing AI outputs before passing them to clients or incorporating them into legal documents.

What is the EU AI Act and does it apply to Swiss law firms?

The EU AI Act is an EU regulation that classifies AI systems by risk level. Swiss law firms that advise EU clients or use AI tools for EU-related matters may fall within its scope. Certain legal applications will likely be classified as high-risk, bringing heightened transparency and human oversight requirements.

Can AI-generated contract language be protected by copyright?

In many jurisdictions, not without a substantial human creative contribution. A US Court of Appeals (DC Circuit) held in Thaler v. Perlmutter in March 2025 that copyright requires human authorship. The position under Swiss law is not yet definitively settled.

How do I know if an AI tool is appropriate for legal use?

Key criteria are: data residency (Switzerland/EU), zero data retention, no human review, a clear DPA, source-based rather than hallucinated outputs, and a traceable audit trail. Tools built specifically for the legal sector are more likely to meet these criteria than general-purpose language models.

Does the use of AI need to be disclosed to clients?

This depends on the specific situation and the applicable professional conduct rules. There is currently no general statutory disclosure requirement in Switzerland. It is advisable to maintain internal policies and - where in doubt - to be transparent with clients, particularly when AI is used in the substantive handling of their matter.

What is the difference between zero data retention and no human review?

Zero data retention means the provider does not store submitted data after processing. No human review means that provider staff have no access to the processed content. Both are independent security measures that are jointly relevant in a legal context.

Contracts on autopilot. With CASUS.

CASUS Technologies AG

Uraniastrasse 31

8001 Zurich

Switzerland

Copyright ©2025 CASUS Technologies AG — All rights reserved.
