
CASUS Blog

EU AI Act and Legal Tech: What Changes for Swiss Law Firms in 2026

by Fabian Staub | Co-Founder & CEO

The EU AI Act has been in force since August 2024. Treating its impact on legal tech as an internal European matter is a mistake. The regulation applies extraterritorially - much like the GDPR - and affects any organisation that uses AI-powered services for EU clients or offers such systems to them. For Swiss law firms and in-house legal teams handling international mandates or advising EU clients, the EU AI Act's impact on legal tech is directly relevant.

What the EU AI Act regulates - and who it covers

Regulation (EU) 2024/1689 classifies AI systems into four risk categories: minimal, limited, high, and unacceptable. Each category carries different obligations. Systems with unacceptable risk - such as AI used for social scoring of individuals - are prohibited outright. High-risk systems are subject to strict requirements on transparency, human oversight, and data quality.

For legal tech applications, the classification is often not straightforward. Predictive coding tools, automated contract analysis, or AI-based risk assessments can fall into the high-risk category if they prepare or influence legally relevant decisions. What matters is the actual use case, not the technical architecture.

The extraterritorial reach applies where AI systems are used in the EU or where their outputs have legal effects within the EU - regardless of where the provider is located. A Swiss law firm using a US-based legal AI tool for EU mandates is operating in a regulated space.

The four risk categories at a glance

Minimal risk covers things like spam filters or document formatting. No specific obligations apply.

Limited risk applies to systems that interact with people where it may not be obvious that an AI is involved - chatbots, for example. The main obligation here is transparency: users must be informed that they are interacting with an AI system.

High risk is the most relevant category for legal tech. It covers AI systems used in law enforcement, human resources, and the administration of justice. Requirements include: technical documentation, risk management, data quality checks, human oversight, and a conformity assessment before market release.

Unacceptable risk covers systems that endanger fundamental rights. These are simply prohibited.

Which legal tech applications are considered high-risk

The boundaries are still being defined, and the competent EU authorities have not yet published a final classification list for legal tech. The wording of the regulation and the accompanying guidelines do suggest, however, that AI systems which automatically produce legal assessments and have significant influence on procedural outcomes will tend to be classified as high-risk.

Applications that may be affected include:

  • Predictive legal analytics (litigation outcome forecasting)

  • Automated contract risk analysis used as a binding decision basis

  • AI-powered due diligence tools where outputs feed directly into transaction decisions without human review

Pure assistance functions - AI that supports lawyers without making autonomous decisions - are likely to be classified at a lower level. Practice will clarify this over time.

What this means in concrete terms for Swiss law firms and in-house teams

Even though Switzerland is not an EU member state, the regulation affects Swiss firms in three immediate ways.

First: tool selection. Anyone using legal AI tools for EU mandates or on projects instructed by EU clients must check whether the provider meets EU AI Act requirements. This covers transparency obligations, documentation, and the existence of human oversight mechanisms.

Second: client advice. Many Swiss companies are themselves bound by the EU AI Act if they deploy AI systems while serving EU customers. Law firms are increasingly being asked to classify systems, produce documentation, and support conformity processes.

Third: own liability exposure. Passing on AI-generated outputs without sufficient human review can raise liability questions - including under Swiss law, where an EU regulatory standard may be used as a reference point.

How CASUS addresses these requirements

CASUS is a Swiss legal AI platform built for law firms and in-house legal teams. All data is hosted in Switzerland and the EU, with no data transfer to the US. Zero data retention and the absence of third-party human review are part of the core architecture - not optional add-ons.

By design, CASUS functions as an assistance system: it supports lawyers with contract analysis, comparison against playbooks or market standards, and legal research - without making autonomous legal decisions. Human review remains mandatory throughout, which aligns with the spirit of the EU AI Act's high-risk requirements.

The AI Data Room, for instance, allows the parallel analysis of dozens or hundreds of documents based on user-defined fields - the result is a structured table, not an automated ruling. The Risk & Quality Review identifies risks and weaknesses in contracts, prioritises them by severity, and suggests drafting options - the decision stays with the lawyer.

For due diligence processes involving many documents, where the EU AI Act raises classification questions, this architecture provides a substantive answer: the AI analyses, the human decides.

Practical steps for law firms and legal teams

Four measures offer a solid starting point for firms preparing now.

Inventory of AI tools in use. Which tools are deployed? Are they pure assistance functions, or do their outputs flow into decisions without further review?

Provider assessment. Does the provider supply technical documentation? Where is data processed? Are there clear statements on human review and data retention?

Positioning for client advice. EU AI Act compliance is a new topic for many Swiss companies with EU business. Law firms that build this advisory capability now gain an early advantage.

Documenting internal governance. Even without EU membership, an internal policy on AI use makes sense - it clarifies liability questions and demonstrates to clients that AI is used in a controlled manner.

Getting started with CASUS

Law firms and legal teams that want to use legal AI without US data transfer, with Zero Data Retention and a clear assistance model, can try CASUS for free. The platform is available as a Word add-in and a web app - no complex setup required.

Start free trial

Details on data security and the technical framework are available on the CASUS security page.

FAQ

Does the EU AI Act apply to Swiss law firms?

Yes, where Swiss law firms use AI tools for EU mandates or advise clients with an EU nexus, the extraterritorial provisions of the EU AI Act apply - the mechanism mirrors the GDPR. The location of the provider alone offers no protection.

Which legal tech applications are considered high-risk under the EU AI Act?

AI systems that prepare legally relevant decisions and have significant influence on procedural or transaction outcomes tend to be classified as high-risk. Predictive legal analytics and automated contract risk analysis used without human review fall into this category.

What must providers of high-risk AI systems demonstrate?

Providers must supply technical documentation, maintain a risk management system, ensure data quality, enable human oversight, and complete a conformity assessment before placing the system on the market.

Are AI assistance systems for lawyers also affected?

Pure assistance systems - where the lawyer retains control and autonomous AI decisions are excluded - tend to be classified at a lower risk level. The precise classification depends on the actual use case, however, not just the technical architecture.

What is the difference between the EU AI Act and the GDPR for legal tech users?

The GDPR governs the protection of personal data. The EU AI Act governs the development, placement on the market, and operation of AI systems according to risk categories. Both frameworks can apply simultaneously - anyone processing personal data with AI must comply with both.

How can a law firm check whether its legal AI tool is EU AI Act compliant?

Key questions are: Where is data processed? Is technical documentation available? Is human review built in? How is the system classified? If a provider cannot give clear answers to these questions, that in itself is a risk signal.

Does it matter if a legal AI tool is hosted in Switzerland?

The hosting location alone is not decisive. What matters is whether the system is used in the EU or whether its outputs have legal effects within the EU. A Switzerland-hosted tool used for EU mandates can still fall within the scope of the EU AI Act.

When do the high-risk requirements of the EU AI Act become binding?

The prohibitions on unacceptable-risk systems applied from February 2025. The requirements for high-risk systems under Annex III will be fully applied from August 2026. The preparation window for providers and operators is running now.


Contracts on autopilot. With CASUS.


CASUS Technologies AG

Uraniastrasse 31

8001 Zurich

Switzerland

Copyright ©2025 CASUS Technologies AG — All rights reserved.
