
How is generative artificial intelligence changing the legal profession?

Lawyers have traditionally been slow to embrace technology, with courtroom practices remaining largely the same for centuries. The arrival of generative AI is set to change all that.

The core functions of lawyers’ practice – providing legal advice and representing clients in court – are deeply rooted in natural language, particularly in the specialised subset known as legal language.

Historically, computers have excelled at processing numerical data but struggled to grasp the nuances of human language. This is one of the main reasons why the legal profession has experienced relatively less transformation than many other industries. By contrast, in finance, computers execute complex trading algorithms with precision and speed, while in medicine, robotic systems assist in performing intricate surgery.

What is generative AI?

The arrival of ChatGPT – created by OpenAI in 2022 – marked a significant turning point, bringing generative artificial intelligence (AI) into the limelight.

Generative AI encompasses a category of AI systems designed to create new content – such as text, images, audio and video – by learning from existing data. The best-known examples are large language models (LLMs), which operate by predicting the next element in a sequence, enabling them to generate coherent and contextually relevant outputs.
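To make the mechanism concrete, here is a minimal Python sketch of next-word prediction using a toy bigram model trained on an invented two-sentence corpus. A real LLM learns its probabilities with a neural network over vast amounts of text, but the predict-the-next-element principle is the same.

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then sample continuations from those counts. Real LLMs learn
# these probabilities with neural networks, but the principle of
# predicting the next element in a sequence is the same.
corpus = ("the court held that the contract was void "
          "the court found that the defendant was liable").split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follow_counts[word]
    if not counts:  # dead end: the word never appears mid-corpus
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation one predicted word at a time.
text = ["the"]
for _ in range(6):
    next_word = predict_next(text[-1])
    if next_word is None:
        break
    text.append(next_word)
print(" ".join(text))  # e.g. "the court held that the contract was void"
```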

What sets generative AI apart from traditional AI technologies is its remarkable ability to perform cognitive tasks in natural language. This capability has profound implications for the legal profession, which relies heavily on language for tasks such as drafting documents, conducting legal research and providing complex advice.

As generative AI continues to evolve, it could fundamentally transform the landscape of legal practice. In addition, it has the potential to enhance access to justice.

Legal services are often expensive and out of reach for many individuals. Generative AI can provide basic legal advice and assistance to those who cannot afford traditional legal services.

Recently, the courts in New Zealand and Queensland, Australia, have issued guidelines to help self-represented litigants use generative AI properly and ethically.

How might generative AI help lawyers?

Before the arrival of generative AI, there were already many AI tools that could help lawyers, such as document review software, e-discovery tools and predictive coding applications.

The key difference between these traditional technologies and generative AI is the latter’s ability to understand and generate natural language, akin to a junior lawyer. Consequently, generative AI tools can perform a wider range of tasks more efficiently and accurately than their predecessors.

Here are three examples of the enhanced capabilities of generative AI.

Document drafting and review

Generative AI can significantly enhance the drafting and reviewing of legal documents – tasks that often involve repetitive and meticulous work. Tools like Spellbook and Juro can produce initial drafts based on predefined templates and specific client requirements, allowing lawyers to concentrate on more complex and strategic aspects of contracts.

Legal research

One of the most significant impacts of generative AI on the legal profession is in the area of legal research. Traditionally, this has been a time-consuming process involving the review of vast amounts of case law, statutes and legal literature.

While search engines can speed up this process, they rely on keyword matching, which often produces irrelevant results. Generative AI can streamline the process by quickly analysing large datasets, identifying relevant precedents and summarising key points.

This not only saves time but also ensures that legal professionals have access to the most pertinent information, thereby enhancing the quality of legal advice. For example, AI-powered platforms like Westlaw Edge and Lexis+ offer predictive research suggestions and advanced analytics, reducing the time that lawyers have to spend on research tasks.

Advisory

In terms of adopting AI in law, the holy grail is arguably advisory work: AI producing complex and nuanced advice and arguments in the same way that an experienced lawyer does.

Very often, such advice and arguments cannot be easily found in existing materials, and lawyers have to rely on their own expertise and insights to construct innovative and convincing arguments.

One approach to transforming generative AI into a true legal expert is through custom training, also known as fine-tuning.

Fine-tuning a general AI model into a legal expert involves several key steps: identifying specific legal tasks; collecting extensive domain-specific data; and customising the model to handle complex legal reasoning.

This process ensures that the AI model can understand and generate relevant legal content. For example, Harvey, a generative AI platform, has partnered with OpenAI to create a custom-trained case law AI model.

During the training process, Harvey and OpenAI collected vast amounts of case law data and refined the model's ability to perform tasks such as analysing complex litigation scenarios. By doing so, Harvey enhanced the AI's proficiency, resulting in outputs that legal professionals overwhelmingly preferred for their accuracy and depth.
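Harvey's actual training pipeline is not public, but the data-preparation step generalises. Below is a sketch of how domain-specific fine-tuning data might be assembled and submitted using OpenAI's public fine-tuning API; the training example, file name and legal content are invented for illustration.

```python
import json
from openai import OpenAI  # assumes the `openai` Python SDK is installed

# Hypothetical training examples: pairs of legal questions and the kind
# of answer an experienced lawyer would give. A real dataset would
# contain thousands of such examples drawn from domain-specific data.
examples = [
    {"messages": [
        {"role": "system", "content": "You are an expert in contract law."},
        {"role": "user", "content": "Is a verbal agreement to sell land enforceable?"},
        {"role": "assistant", "content": "Generally no: contracts for the sale "
         "of land are typically required to be in writing..."},
    ]},
]

# Write the examples in the JSONL format the fine-tuning API expects.
with open("legal_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Upload the data and start a fine-tuning job on a fine-tunable base model.
training_file = client.files.create(
    file=open("legal_examples.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-4o-mini-2024-07-18"
)
print(job.id)  # the resulting custom model can then be queried like any other
```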

What are the challenges?

Generative AI has the potential to transform the legal industry, but it comes with notable limitations.

One critical flaw of LLMs is hallucination. Specifically, a generative AI model can produce text that appears credible but turns out to be factually incorrect. This is extremely problematic in legal contexts where precision is crucial.

In 2023, a US attorney filed a legal brief that he had written with the help of ChatGPT. The document included citations to several cases that seemingly offered precedents that supported his client’s position, but actually did not exist. They had been dreamed up by ChatGPT.

Another problem is incompleteness, where responses produced by a generative AI model fail to address the user's query fully or to provide proper case citations. Such responses are either of no use to the user or require the user to add citations manually before they can be used in court.

Various ways to address these issues have been proposed. One approach is retrieval-augmented generation (RAG): the AI model is connected to a knowledge base (for example, a set of legal documents), and it first searches and retrieves relevant information from that knowledge base before generating a response.
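Here is a minimal sketch of that retrieve-then-generate pattern. The two "documents", the keyword-overlap retriever and the stubbed generate function are all simplifications: production systems score documents with vector embeddings and call a real language model.

```python
# Minimal sketch of retrieval-augmented generation (RAG). The cases in
# the knowledge base are invented, retrieval uses simple word overlap
# rather than embeddings, and `generate` stands in for a real LLM call.
knowledge_base = {
    "smith_v_jones.txt": "Smith v Jones (1999): a contract signed under "
                         "duress is voidable at the injured party's option.",
    "re_brown.txt": "Re Brown (2004): trustees must act in the best "
                    "interests of all beneficiaries.",
}

def retrieve(query, k=1):
    """Return the k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        knowledge_base.values(),
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt):
    """Placeholder for a call to a language model."""
    return f"[LLM response grounded in:]\n{prompt}"

query = "Is a contract signed under duress enforceable?"
context = "\n".join(retrieve(query))

# The retrieved passages are prepended to the prompt so that the model
# answers from the supplied sources rather than from memory alone.
prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
print(generate(prompt))
```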

A recent study shows that RAG offers an unsatisfactory solution to the problems of hallucination and incompleteness (Magesh et al, 2024). While RAG does reduce hallucinations, a substantial level of hallucination remains. As for incompleteness, RAG models perform even worse than a general model.

Figure 1: Comparison of hallucinated and incomplete answers across generative legal research tools

Source: Magesh et al, 2024
Note: Hallucinated responses are those that include false statements or falsely assert a source supports a statement. Incomplete responses are those that fail either to address the user’s query or provide proper citations for factual claims.

Another problem with generative AI is a lack of consistency in its responses. For example, if you ask ChatGPT whether 9.11 is bigger than 9.9, it may say that 9.11 is bigger (which is obviously wrong – a problem in itself). But equally troubling is that, if you ask again, it may give a different answer, saying that 9.11 is smaller than 9.9.

This kind of inconsistency can confuse users and undermine trust in the model's reliability. Researchers have identified several possible reasons for this behaviour. In particular, generative AI models generate responses probabilistically – sampling from a distribution over possible outputs rather than computing a single deterministic answer – which can lead to variability, especially for ambiguous or complex queries.
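A small sketch, with invented scores, shows why sampling produces this variability: the model draws its answer from a probability distribution over candidates, and a "temperature" parameter controls how flat that distribution is.

```python
import math
import random

# Invented scores for two candidate answers to the same question. An
# LLM produces scores like these (logits) for every possible next token.
logits = {"9.11 is bigger": 1.2, "9.9 is bigger": 1.0}

def sample(logits, temperature=1.0):
    """Sample an answer: higher temperature flattens the distribution."""
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

# At temperature 1.0 the same question can get different answers.
print([sample(logits) for _ in range(5)])
# e.g. ['9.11 is bigger', '9.9 is bigger', '9.11 is bigger', ...]

# At very low temperature the model almost always picks the
# highest-scoring answer: more consistent, but not necessarily correct.
print([sample(logits, temperature=0.01) for _ in range(5)])
```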

Finally, the legal domain poses a unique practical challenge when it comes to training generative AI models. Unlike many other areas of domain knowledge, law is jurisdiction-specific: each country has its own set of legislation and case law.

As a result, an AI model trained on the laws of one jurisdiction will not be very useful in another. In contrast, an AI doctor or financial adviser trained on data from one country can be deployed elsewhere with relative ease. Training jurisdiction-specific AI models adds both cost and time.

Where can I find out more?

Who are experts on this question?

  • Richard Susskind
  • Daniel Martin Katz
  • David Wilkins
Author: Benjamin Liu, University of Auckland