Alex Denne
Growth @ Genie AI | Introduction to Contracts @ UCL Faculty of Laws | Serial Founder

Prompt Engineering for Legal AI

18th December 2024
3 min

Note: This article is just one of 60+ sections from our full report titled: The 2024 Legal AI Retrospective - Key Lessons from the Past Year. Please download the full report to check any citations.

Prompt Engineering

"There are potentially large obstacles for automating legal work. Using a legal practice guide for RAG produces inconsistent results, and there are no consensus prompt decomposition strategies which achieve a 'best' overall performance consistently"

Colin Doyle, Associate Professor of Law, Loyola Law School, Los Angeles, USA

A prompt is the natural-language text given to an LLM to elicit a desired output.

The quality of a prompt can drastically affect the quality and accuracy of an LLM's answers. This has led to the emergence of a field called prompt engineering.

Prompt engineering refers to the systematic design and optimization of input prompts to guide the responses of LLMs, ensuring accuracy, relevance, and coherence in the generated output.

This process is vital to harnessing the full potential of these models (B. Chen et al. 2024). There are many prompt engineering techniques:

Role-based priming: This is like telling the AI to pretend it's a specific type of lawyer or judge. It helps the AI understand what kind of answer you need.
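As a rough sketch (the persona wording, question, and chat-message format below are our own illustrative assumptions, not a prescribed workflow), role-based priming often maps to a system message in a chat-style prompt:

```python
# Role-based priming: assign the model a persona before asking the question.
# The persona wording and the user question are illustrative placeholders only.
messages = [
    {
        "role": "system",
        "content": (
            "You are an experienced English commercial contracts solicitor. "
            "Answer precisely and flag anything that needs review by a qualified lawyer."
        ),
    },
    {
        "role": "user",
        "content": "Review this limitation of liability clause and list the risks for the supplier: ...",
    },
]
# `messages` can then be passed to any chat-completion style LLM client.
```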

Goal-oriented priming: This is when you tell the AI exactly what you want to achieve, like writing a contract or analyzing a case.
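For example, a goal-oriented prompt might spell out the deliverable, audience, and constraints up front; the wording below is a hypothetical sketch:

```python
# Goal-oriented priming: state the concrete deliverable, audience, and constraints.
goal_prompt = (
    "Goal: Draft a mutual non-disclosure agreement governed by English law.\n"
    "Audience: Two early-stage software companies exploring a partnership.\n"
    "Constraints: Maximum two pages, plain English, three-year confidentiality term.\n"
    "Output: The full agreement text, followed by a bullet list of points open to negotiation."
)
```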

Chain-of-thought prompting: This is like asking the AI to show its work, step-by-step, which is great for complex legal reasoning.
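A minimal sketch of a chain-of-thought style instruction (the step structure here is one common framing, not the only one):

```python
# Chain-of-thought prompting: ask the model to reason step by step before concluding.
cot_prompt = (
    "Analyse whether the clause below is likely to be an unenforceable penalty.\n"
    "Work step by step: (1) state the relevant legal test, (2) apply each element to the clause, "
    "(3) note the strongest counter-argument, (4) only then give your conclusion.\n\n"
    "Clause: ..."
)
```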

Few-shot prompting: This is when you give the AI a few examples of what you want, which helps it understand tricky legal concepts better.
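Few-shot prompting can be sketched as a handful of worked examples followed by the new input; the clauses and labels below are invented for illustration:

```python
# Few-shot prompting: show input/output pairs, then the new case to classify.
# "{new_clause}" is a placeholder to be filled in (e.g. via str.format) at run time.
few_shot_prompt = (
    "Classify each clause as 'Indemnity', 'Limitation of liability' or 'Other'.\n\n"
    "Clause: The Supplier shall indemnify the Customer against third-party IP claims.\n"
    "Label: Indemnity\n\n"
    "Clause: Neither party's total liability shall exceed the fees paid in the previous 12 months.\n"
    "Label: Limitation of liability\n\n"
    "Clause: {new_clause}\n"
    "Label:"
)
```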

Specificity and precision: Using clear, detailed instructions and legal terms helps the AI give more accurate answers.
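To see the difference, compare a vague instruction with a more specific one (both prompts are invented examples):

```python
# Vague prompt: leaves jurisdiction, document type, and desired output format open.
vague_prompt = "Is this contract okay?"

# Specific prompt: names the jurisdiction, the clause of interest, and the expected output.
specific_prompt = (
    "Under English law, does clause 8.2 of the attached SaaS agreement effectively cap the "
    "supplier's liability for data protection breaches? Answer in no more than 150 words and "
    "quote the clause wording you rely on."
)
```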

Providing context: Giving background information helps the AI understand the legal situation better.
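A short sketch of supplying background facts alongside the question (the facts are fictitious):

```python
# Providing context: give the background facts before asking the question.
context_prompt = (
    "Context: Our client is a UK-based startup negotiating a supply agreement with a US vendor. "
    "The vendor insists on New York governing law and a 30-day payment term.\n\n"
    "Question: What practical risks does the governing-law choice create for the client, "
    "and what fallback positions could we propose?"
)
```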

RICE method: This stands for Role, Instructions, Context, and Expectations, which helps structure your questions to the AI.
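One way to template the RICE structure in code; the helper function and field contents below are our own illustrative placeholders:

```python
# RICE method: structure the prompt as Role, Instructions, Context, Expectations.
def build_rice_prompt(role: str, instructions: str, context: str, expectations: str) -> str:
    """Assemble a RICE-structured prompt from its four parts."""
    return (
        f"Role: {role}\n"
        f"Instructions: {instructions}\n"
        f"Context: {context}\n"
        f"Expectations: {expectations}"
    )

prompt = build_rice_prompt(
    role="You are a senior employment lawyer advising a UK employer.",
    instructions="Review the draft settlement agreement and list any missing protections.",
    context="The employee is leaving after a redundancy process; no claim has been issued.",
    expectations="A numbered list of gaps, each with a one-sentence suggested fix.",
)
```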

An entire report could be dedicated to exploring the effectiveness of each of these techniques. Instead, we encourage you to look up examples of each in the legal domain and find what works best for you and your use case.

