The image of a junior lawyer working through stacks of contracts late into the night, manually building risk lists, has not fundamentally changed at many Swiss law firms. But the pressure is building. AI tools are taking over tasks that once made up the bulk of an associate's training time. Law firms that do not adapt associate training for the AI age are educating lawyers for work that will rarely need to be done manually within a few years.
This is not a question of enthusiasm for technology. It is about which skills are genuinely needed today – and which ones still require real human mastery because AI cannot reliably replace them.
What has already changed in contract work
Contract review, risk analysis, and checking standard clauses have traditionally been core tasks for associates in their early years. This is precisely where legal AI is making inroads. Swiss legal AI platforms such as CASUS can review a contract for risks and red flags, prioritize findings by severity, and deliver drafting suggestions directly in Word – without copy-paste and with correct formatting.
That does not mean associates no longer need to understand these tasks. It means they need to learn how to assess, refine, and take responsibility for AI outputs. Anyone who cannot evaluate an uncapped liability clause on their own cannot correctly interpret an AI finding about one either.
The shift is less about substitution than about reallocation – away from mechanical first-pass review, toward qualified legal judgment.
Which skills are genuinely needed now
Critical thinking about AI outputs
The most important skill first: AI tools produce structured, source-based results – but not guaranteed correct ones. Associates need to learn how to systematically question AI outputs. Does the risk assessment align with the actual legal position? Is the cited case law relevant? Is something missing because it was never prompted?
Critical thinking toward AI is not a philosophical exercise. It is a practical legal skill expressed in concrete review steps: source check, completeness control, plausibility test.
Prompt skills as a legal work technique
A vague question put to an AI system produces a vague answer. Well-structured prompts – with context, the specific question, party position, and desired output format – produce usable results. That sounds like a technical skill, but in practice it is an exercise in legal precision: the same ability needed to write clear pleadings and contract clauses.
Associates who learn how to formulate a legal research prompt for a liability question under OR 97 ff. are simultaneously learning how to structure the legal question itself. Prompt engineering and legal thinking are not opposites here.
Contract and document review with AI support
The ability to conduct and take responsibility for an AI-assisted contract review is becoming a standard competency. Concretely: working with a risk and quality review module, evaluating findings, selectively accepting suggestions, and being able to explain the analysis to a client. Anyone who knows the workflow – from risk analysis to improvement recommendation – can work more efficiently and better articulate what was reviewed and what was not.
The same applies to the Benchmark workflow: checking an NDA or SPA against an internal standard and assessing deviations is a competency that previously required significant experience. With structured AI support, that judgment can be developed earlier in a career – provided the legal ability to interpret results is in place.
Legal research with AI databases
Legal research is changing fundamentally. What previously meant hours in a database can now mean a well-formulated prompt across more than 660,000 cantonal and federal court decisions, with the relevant reasoning sections visible directly in the search results – no need to open each decision individually.
CASUS's legal research module produces structured first assessments with risk drivers, pro and con argument lines, and concrete recommendations. For associates, this means the ability to produce a source-based first assessment arrives earlier in their careers. But the ability to place that assessment in legal context and work with it remains human.
Formal and linguistic document quality
AI also handles proofreading tasks: spelling, grammar, consistency of terminology, cross-references, missing definitions, placeholders. The Proofread module checks, for example, Swiss spelling conventions, annexes, and numbering – taking necessary but time-consuming work off associates' plates.
The goal here is not a judgment on the legal position, but a clean, consistent document. Associates need to understand this distinction too: what AI proofreading covers and where it ends.
How firms can adapt associate training
Structured introduction to AI workflows
Simply providing an AI tool without onboarding achieves little. Associates need clear explanations of what each module is suited for, where its limits lie, and how to interpret outputs. A structured introduction to the review workflow – using a real contract example – delivers more than an onboarding document ever will.
Case-based learning remains indispensable
There is a concern that case-based learning loses relevance once AI takes over the first-pass analysis. This is only partly true. Assessing an AI output requires understanding the subject matter. Someone who has never learned to read a liability clause cannot judge whether an AI finding is correct.
Case-based learning remains relevant – but it shifts from mechanical first-pass reading to judgment formation. Associates should form their own hypothesis before seeing the AI result, then evaluate what the AI got right and where it was wrong or incomplete.
Data protection and security as a training element
In Switzerland particularly, handling personal data in AI contexts is a sensitive regulatory area. Associates need to know which data may flow into which systems. CASUS hosts data in Switzerland and the EU, transfers no data to the US, and operates with no human review and zero data retention – these are not marketing points but relevant criteria for use in client-related contexts.
Data security belongs in associate training, not just in IT onboarding. More on this at Security & data residency.
From legal training to legal engineering
The term "legal engineer" is not yet standard in Swiss job postings. But it describes a direction that is taking shape: lawyers who think legally and can also design processes, deploy tools, and structure outputs.
This is not a departure from legal substance. On the contrary – taking responsibility for AI outputs requires solid legal foundations. But the tools through which those foundations are applied have changed. Associate training in the AI age means learning both.
Try CASUS in practice
Anyone who wants to see what AI-assisted contract work looks like in daily practice can test CASUS for free. The platform runs as a web app and as a Microsoft Word add-in – no data transfer to the US, with hosting in Switzerland and the EU.
FAQ
What do associates need to learn differently in the AI age?
Associates primarily need to learn how to critically evaluate AI outputs: are the identified risks correct? Is something missing? Prompt skills – the ability to instruct AI systems precisely – become a regular work technique. At the same time, foundational legal knowledge remains indispensable, because it underpins every assessment of AI results.
Does AI replace the traditional associate training path?
No. AI takes over mechanical first-pass reviews and structured analysis, but not legal judgment. Training needs to adapt – away from manual first-pass review, toward the competency to take responsibility for AI outputs and work with them further.
Which AI skills are most relevant for lawyers?
Practically relevant skills include: formulating prompts for legal tasks, evaluating AI-assisted risk analyses, using legal research tools with case law databases, and understanding where AI proofreading ends and legal review begins.
What does AI-assisted legal research look like?
Platforms like CASUS search across more than 660,000 cantonal and federal court decisions and produce structured assessments with source references, argument lines, and recommendations. The relevant reasoning sections are shown directly in the search result, without having to open each decision manually.
How should law firms adjust associate training?
A structured introduction to specific AI workflows – not just tool access – is advisable, alongside case-based learning that uses AI as a benchmark rather than as the first analyst. Explicit training on data protection questions around AI use is also necessary, particularly which data may be entered into which systems.
Why is data security relevant in associate AI training?
Associates work with client-related, often confidential documents. They need to know which AI systems may be used in a data-compliant way. In Switzerland, the Data Protection Act (DSG) applies, and depending on the matter, further requirements may exist. Systems that transfer data to the US or use documents for model training are not permissible in many contexts.
What is the difference between AI proofreading and legal document review?
AI proofreading checks language, consistency, formatting, cross-references, and placeholders. It does not assess whether a clause is legally correct or appropriate. A legal document review evaluates the legal position, risk allocation, and contract structure. Both functions complement each other – they do not substitute for one another.
Does this apply to in-house legal teams as well?
Yes. The shift in competencies affects in-house legal teams just as much as law firms. In fast-growing companies with high contract volumes, AI support can free up significant capacity – provided the team knows how to evaluate and use the results.