This section explores the role of artificial intelligence and digital tools within legal processes, with a focus on how technology interacts with responsibility, decision-making, and access to justice. It examines AI as a support tool rather than a decision-maker, particularly in the context of self-representation and court-based procedures.

Content in this category addresses both the opportunities and risks associated with using AI in legal contexts, including issues of accuracy, accountability, and ethical use. It is intended to help litigants in person and legal professionals understand how technology can assist with preparation and understanding without replacing human judgment or procedural responsibility.

The “Vibe Lawyer” Moment: AI, Litigants in Person, and the Coming Shockwave for the Family Courts

Litigants in person are being called “vibe lawyers” for using AI to draft complaints and court documents. But behind the headlines lies a harder truth: people are turning to artificial intelligence because they cannot afford representation in an increasingly complex and overstretched justice system. Judges are right to be concerned about fake citations and procedural errors. Yet dismissing AI use outright misses the deeper issue — access to justice has been under strain for years, and technology is now filling the gap.

By Jessica Susan Hill | JSH Law

Key Takeaways (Read This First)

  • AI is already changing litigation behaviour — the judiciary is explicitly preparing for a surge in AI-generated claims across the civil, family and tribunal jurisdictions.
  • The risk isn’t “AI” — it’s unverified AI: fabricated authorities and confidently wrong submissions waste court time and damage credibility.
  • LiPs are not “wreaking havoc” for fun. Many are doing what they must to participate in a system they cannot afford to navigate with representation.
  • The solution is guardrails, not barriers: verification standards, procedural literacy, and responsible workflows that help the court as well as the litigant.
  • Family proceedings are high-stakes. Used properly, AI can improve clarity and evidence organisation; used badly, it can derail safeguarding analysis and case management.

1. Why this matters now

“Vibe lawyers” is a catchy label, but it risks obscuring a far more serious reality: litigants in person are using AI tools to draft complaints, defences, witness statements and skeleton arguments at scale — and the courts are already feeling the impact. The phenomenon is now so visible that Sir Geoffrey Vos (Master of the Rolls, Head of Civil Justice) has explicitly warned that the judiciary must prepare for an “AI revolution” that may vastly increase the number of civil, family and tribunal claims the justice system must manage. His speech is worth reading in full.

Let’s be direct: the justice system in England and Wales is already stretched. Many court users already experience the process as opaque, intimidating and unaffordable. That is not a personal failing of litigants — it is a structural reality. AI is entering a pressure-cooker and magnifying what was already there: information asymmetry, procedural complexity, delay and the gulf between a represented party and an unrepresented one.

So, yes — judges and practitioners are right to be concerned about inaccurate AI-generated material clogging lists and adding burden to judges who are already firefighting. But it is also true that, in the medium term, AI could become one of the most significant access-to-justice tools we have ever seen. Both truths can exist at once.

2. The judiciary is not guessing — it is responding to lived reality

We are past the point of theoretical debate. The judiciary has been issuing speeches and guidance precisely because AI use is now operationally relevant. Beyond speeches, the Judicial Office has published updated guidance addressing risks including confidentiality, bias and “hallucinations” — where AI produces plausible but incorrect information. The October 2025 judicial guidance explicitly flags the danger of fictitious citations and misleading legal content.

Sir Geoffrey Vos has also repeatedly articulated a simple “core rules” approach: understand what the tool is doing, do not upload private/confidential data into public tools, and check the output before using it for any purpose. He set that out again in October 2025.

This is not anti-technology. It is the judiciary doing what it should do: protecting the integrity of the process while acknowledging that new tools are changing behaviour.

3. The real problem: “confidently wrong” submissions

Generative AI tools can draft impressive text quickly. But they do not “know” the law. They predict language. That difference matters profoundly in litigation. A well-written paragraph that contains an invented case, a misquoted statute or an inaccurate procedural route is not merely unhelpful — it can actively undermine a party’s credibility and force the court to spend additional time cleaning up the mess.

The legal profession has already seen what happens when verification fails. In June 2025, the Divisional Court (Dame Victoria Sharp P and Johnson J) dealt with the now widely reported “fake authorities” problem in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank, where false citations and inaccurate quotations were placed before the court, with suspected or admitted use of AI tools without proper checks. The judgment is publicly available and is required reading for anyone tempted to treat AI output as “good enough”.

Importantly, that judgment is aimed at lawyers — because professionals are held to professional standards. But the underlying point applies to everyone: accuracy is non-negotiable in court work. You can be passionate, traumatised, exhausted, and still required to file documents that are factually and legally sound.

4. Why litigants in person are using AI (and why the “money pit” narrative is wrong)

Many litigants in person feel they are treated as an administrative inconvenience — or worse, as a “cost centre” rather than a rights-holder. I understand why that perception forms. The system can be brutal: forms, deadlines, practice directions, directions hearings, orders you must interpret and comply with under stress. In private law children proceedings, you may be trying to protect a child, manage safeguarding concerns, and preserve your own mental stability while preparing documents that lawyers train for years to produce.

For a growing number of people, AI has become the first accessible “translator” of legal language. It can explain terminology, propose a structure for a statement, generate headings for a skeleton argument, and help a person who feels overwhelmed take a first step. That is why it feels like a shake-up. It is not because LiPs are trying to harm the system. It is because they are trying to participate in it.

And here is the hard truth: if access to representation continues to shrink in practice — whether by cost, availability, or scope — more people will use AI. That is not something a press headline can reverse. It is a reality the system must incorporate.

5. Family court is the pressure point

Family proceedings are where AI misuse can become most dangerous, because the stakes are often immediate and human: the child’s living arrangements, contact, safeguarding, allegations of domestic abuse, coercive control, substance misuse, mental health, relocation, schooling — the list is endless.

Private law children cases are ultimately governed by the welfare principle in the Children Act 1989, section 1. The court’s job is not to reward the best writer. It is to determine what best meets the child’s welfare needs. But poor drafting can still distort the court’s understanding of what matters.

And family procedure is its own ecosystem. The Family Procedure Rules and associated Practice Directions are not optional reading; they are the architecture of how your case moves through the system. PD12J (domestic abuse and harm) is particularly critical where abuse is alleged, because it shapes fact-finding decisions, safeguarding analysis and protective measures.

Where AI is used badly in family court, I commonly see the same patterns (and judges see them too):

  • Misstating legal tests (e.g., confusing civil and criminal standards, or quoting the wrong threshold framework).
  • Over-inclusion: 30-page narratives where only a small percentage is evidentially relevant.
  • Inflammatory language that escalates conflict rather than centring the child.
  • Procedural fantasy: “applications” and “orders” that do not exist or are not procedurally available.
  • Fake authority: citations that sound real but are not verifiable.

Those problems do not just “waste time”. They can change outcomes. They can harden judicial perceptions. They can reduce a litigant’s credibility. And in safeguarding contexts, credibility matters.

6. But here is the opportunity: structured AI use can help the court

Now for the other side of the ledger, which the “vibe lawyer” framing often ignores.

Used properly, AI can reduce noise and increase clarity. It can help an overwhelmed litigant present their case in a way that judges can actually work with. It can support:

  • Chronology building (dates, events, orders, and key turning points).
  • Document organisation (indexes, exhibit lists, consistent naming).
  • Issue framing (what is the dispute actually about?).
  • Drafting clarity (headings, structure, neutral tone).
  • Summarising communications (WhatsApp/SMS/email) into court-usable bundles.
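
To make the chronology point above concrete: even a very simple, date-sorted structure is more useful to a court than an unordered narrative. The sketch below is a minimal illustration only; the field names and exhibit references are invented.

```python
from dataclasses import dataclass
from datetime import date

# A minimal, hypothetical chronology entry: the kind of structure an AI
# tool (or a plain spreadsheet) can help a litigant populate and keep tidy.
@dataclass
class Event:
    when: date
    what: str    # neutral, factual description of what happened
    source: str  # where the evidence lives, e.g. an exhibit reference

events = [
    Event(date(2024, 3, 1), "Directions order made", "Exhibit A2"),
    Event(date(2023, 11, 14), "Application issued", "Exhibit A1"),
]

# A chronology must run in strict date order; sorting mechanically avoids
# the ordering slips that creep in when the narrative is written first.
for e in sorted(events, key=lambda ev: ev.when):
    print(f"{e.when.isoformat()}  {e.what}  [{e.source}]")
```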

Those are not cosmetic benefits. They are directly aligned with what the court needs: efficient case management, focused evidence, and parties who can articulate relevant issues.

In other words: the best version of AI in litigation is not “AI replaces lawyers.” It is “AI helps people present usable material so the court can do its job.” That is the access-to-justice promise.

7. The non-negotiable: verification

The line between empowerment and chaos is verification.

Professional regulators have been clear that AI cannot be trusted to judge its own accuracy. The SRA has warned about hallucinations and the risk of plausible but incorrect outputs, including non-existent cases.

For court users, this translates into a simple operating standard:

  • If you cite it, you must be able to prove it exists (case name, neutral citation, and a reliable source).
  • If you quote a statute, check it on legislation.gov.uk (not in an AI chat box).
  • If you refer to rules or practice directions, check the official source (FPR/CPR/PD pages).
  • If it sounds “too perfect”, slow down — AI is very good at confidence, not always good at truth.
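
As a minimal illustration of the first point, a drafting workflow can at least screen for citations that do not even look like neutral citations before the real check: opening the judgment itself on a reliable source such as caselaw.nationalarchives.gov.uk. A format match proves nothing about existence, and the court codes in this sketch are an illustrative subset only.

```python
import re

# Rough shape of an England & Wales / UK neutral citation, e.g.
# "[2025] EWHC 1234 (KB)", "[2024] EWCA Civ 567", "[2023] UKSC 42".
# The court codes here are a non-exhaustive, illustrative subset.
NEUTRAL_CITATION = re.compile(
    r"\[(19|20)\d{2}\]\s+"                     # year in square brackets
    r"(UKSC|UKPC|EWCA|EWHC|EWFC|UKUT|UKFTT)"   # court code
    r"\s+(Civ|Crim)?\s*\d+"                    # optional division and number
    r"(\s*\([A-Za-z]+\))?"                     # optional suffix, e.g. (KB)
)

def looks_like_neutral_citation(text: str) -> bool:
    """True if the text contains something shaped like a neutral citation.

    A matching format proves nothing: an invented case can carry a
    perfectly formed citation. Always open the judgment on a reliable
    source before relying on it.
    """
    return bool(NEUTRAL_CITATION.search(text))

for cite in ["[2025] EWHC 1234 (KB)", "Smith v Jones, High Court, 2025"]:
    if not looks_like_neutral_citation(cite):
        print(f"No neutral citation found; verify manually: {cite}")
```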

After the June 2025 “fake authorities” judgment, the direction of travel is obvious: courts will increasingly treat fabricated or careless citations as serious misconduct where professionals are involved, and as a significant credibility issue where litigants are involved.

8. A real-world cautionary tale: Mata v Avianca

Even outside the UK, courts have reacted strongly when lawyers filed AI-generated fake authorities. The widely cited US case Mata v Avianca resulted in sanctions after fabricated case citations were submitted. It is not “UK law”, but it is a stark illustration of what happens when verification collapses.

Why mention it here? Because the underlying professional lesson travels: courts do not have time for invented law, and they should not have to spend scarce judicial time correcting avoidable errors.

9. What this means for litigants in person (practical guidance)

1) Use AI to organise, not to “source” law. AI is excellent for structure, headings, summaries, chronologies and drafting tone. It is unreliable as a sole source of legal authority.

2) Keep it child-focused (family cases). Remove insult, speculation and “character assassination”. Judges need facts, evidence, and impact on the child.

3) Treat every AI output as a draft. You are responsible for what you file. Read it. Edit it. Make sure it matches your evidence.

4) Verify every citation. If you cannot open the case or locate it on a reputable database, do not rely on it.

5) Don’t upload confidential material into public AI tools. Safeguarding details and private communications should be handled carefully. Follow the Judicial Office warnings on confidentiality.

6) Aim for shorter, clearer documents. Judges do not reward length. They reward relevance. A focused 6–10 pages often lands better than a sprawling 30.

7) If you’re stuck, get human oversight. A short consultation to sanity-check structure, compliance with directions, and relevance can prevent months of damage.

10. What this means for the justice system: guardrails, not barriers

If the system responds to AI by “closing ranks” and shaming litigants, it will fail. People will still use AI — but they will do so in worse, more chaotic ways. A better approach is to develop common standards that increase quality and reduce burden.

In practice, that means three things.

A) Judicial clarity

Courts and judiciary leadership can help by setting clear expectations about what is acceptable in written submissions — particularly around citation verification and disclosure of AI use where relevant. The Judicial Office guidance is already laying the foundation here.

B) Procedural literacy for court users

Most problems I see are not caused by “bad people”. They are caused by overwhelmed people. The system needs short, accessible, official pathways explaining (for example) what a directions hearing is, how to comply with an order, how to prepare a bundle, and how to draft a witness statement that is relevant rather than reactive.

C) Responsible support models

This is where the best “shake up” lies: hybrid support that uses AI to accelerate organisation and drafting, with human oversight to ensure compliance, accuracy, relevance and tone. That model benefits everyone: the litigant, the other party, and the court.

11. A note on professional standards (and why it still matters to LiPs)

When professionals file inaccurate material, the consequences can be severe, including regulatory referral. That was made explicit in the June 2025 judgment dealing with false citations.

LiPs are not held to the same professional code — but the practical consequences can still be harsh: credibility erosion, judicial impatience, adverse costs risks in some contexts, and (most importantly) a judge simply not trusting what they are reading. In family court, loss of credibility can be profoundly damaging.

This is why “AI literacy” is not an academic luxury. It is a procedural survival skill.

12. Conclusion: the future is responsible AI, not no AI

AI is in the courtroom ecosystem now. The judiciary is preparing for it. Regulators are warning about it. The profession is adapting to it. The question is not whether litigants in person will use AI — they already are.

The question is whether we will build a culture of responsible use.

Used recklessly, AI produces noise: invented authorities, misunderstood legal tests, and sprawling submissions that burden the court. Used properly, it can produce clarity: structured chronologies, coherent statements, and focused issues that help the court get to the real substance of the case.

If we care about access to justice, we cannot treat litigants in person as an administrative irritation. We should treat them as court users with rights and responsibilities — and we should equip them with tools and guardrails that allow them to participate meaningfully.

That is the “AI revolution” that matters: not chaos, but capability.


If you want structured, responsible help using AI to prepare court documents (without risking accuracy or credibility), you can book a short consultation below:


Regulatory & Editorial Notice (JSH Law): This article is published for general information and public-interest commentary only. It does not constitute legal advice and should not be relied upon as such. Where this article refers to third-party sources (including court judgments, guidance, regulator publications, media reporting, or external organisations), those references are provided for context and convenience; JSH Law does not control or endorse third-party content and cannot guarantee its accuracy, completeness, or continued availability. Court users should always consult the original primary sources (including the Family Procedure Rules, Practice Directions, and judgments) and obtain appropriate professional advice for their specific circumstances.

When Court Data Disappears: Why Transparency in Family Courts Matters More Than Ever

In February 2026, the Ministry of Justice ordered the removal of a major archive of court listing data, citing data protection concerns and alleged misuse involving AI. On the surface, it looked like a dispute about compliance. In reality, it raises a far more serious question: what happens when the justice system becomes less visible? For families navigating private law disputes, safeguarding allegations and prolonged delay, transparency is not a political slogan — it is the difference between understanding how the system works and feeling powerless within it.

Key points (read this first)

  • “Open justice” is not a vibe. It is a constitutional principle: the public must be able to see justice being done — in practice, not just in theory.
  • The Courtsdesk database mattered because it made magistrates’ court activity discoverable at scale — across regions, trends and time — in a way ordinary listings often do not.
  • The MoJ/HMCTS position has centred on data protection and alleged unauthorised sharing with an AI third party (including potentially sensitive identifiers). That is a serious issue — but it doesn’t automatically justify a “delete the archive” outcome.
  • There is now a live policy tension: privacy compliance vs public scrutiny. The correct answer is not to pick one. It is to design lawful access with safeguards.
  • AI changes the stakes. It can expose systemic court failures (delays, inconsistency, outcomes), but it can also amplify privacy harm if governance is weak.
  • What to watch next: licensing frameworks, official listing portals, retention/archiving rules, and whether any independent oversight is built into the “new” regime.

If you only have 60 seconds: the question isn’t “should court data exist?” — it’s “who controls access, under what rules, with what accountability?”

When Court Data Disappears: Courtsdesk, the MoJ Deletion Order, and What “Open Justice” Means in the AI Age

By Jessica Susan Hill | Legal Consultant & McKenzie Friend | JSH Law Ltd

In February 2026, a story surfaced that should make every lawyer, journalist and court-user sit up: the Ministry of Justice (via HMCTS) instructed a private platform, Courtsdesk, to delete what was widely described as the UK’s largest archive of court reporting data. The dispute was framed as a data protection breach involving AI. Critics called it a major blow to open justice.

This isn’t a niche media row. It’s a governance problem with a constitutional wrapper. Because once court information becomes searchable at scale, it becomes auditable. And once the system becomes auditable, it becomes accountable.

1) What happened — and why the link you saw may have “stopped working”

If you clicked a share link to a paywalled newspaper, you’ll often get a broken experience (or a login wall). But the underlying issue is very real: in early-to-mid February 2026, multiple sources reported that the MoJ/HMCTS instructed Courtsdesk to remove court listing/archival data from its platform. The matter was then debated in Parliament, with ministers stating that action was taken because of data protection concerns and alleged unauthorised sharing with an AI company.

In the House of Commons debate on 10 February 2026, the government position was put bluntly: HMCTS stopped sharing data and instructed the company to remove data from its digital platform because the government considered personal data had been put at risk and/or shared in breach of agreement. (Hansard: “Court Reporting Data”). Read the Commons debate (Hansard).

The House of Lords revisited similar themes on 11 February 2026, referencing alleged sharing of “private, personal and legally sensitive information” with a third-party AI company, including potentially addresses and dates of birth of defendants and victims. Read the Lords debate (Hansard).

Meanwhile, journalist bodies and open justice advocates argued that the deletion demand would reduce practical visibility of magistrates’ courts — the engine room of criminal justice — and undermine reporting capacity nationwide. NUJ response (11 Feb 2026).

Subsequent coverage indicated that the government later paused the deletion/purge approach and explored alternative licensing or arrangements, following significant public pressure and campaigning (including within national media). One example: The Times: MoJ halts purge of court archive (published Feb 2026). (Paywalled, but relevant for context and sequence.)

2) What is Courtsdesk — and why journalists cared

Courtsdesk is typically described as a platform that made it easier for journalists to discover and track magistrates’ court hearings — and to keep a searchable archive of what had been listed. The word “archive” matters. Without it, reporting becomes a daily scramble: you can see “today’s” list (sometimes), but you cannot easily analyse what happened across a month, a year, or a decade, and you cannot robustly check what patterns repeat across courts.

That changes the reporting model. Instead of “we got a tip and attended a hearing”, journalists can ask structured questions like:

  • Which courts are repeatedly listing the same offence type and outcome?
  • Are there geographical disparities in sentencing outcomes (controlling for offence and prior record)?
  • Is a particular safeguarding issue rising (domestic abuse, coercive control, breaches, stalking)?
  • Are certain hearings routinely not listed, listed late, or listed inaccurately?
  • Are “open” hearings being effectively closed by practical invisibility?

In short: a discoverable, searchable dataset turns open justice into something measurable. That is precisely why both open justice advocates and public interest reporters reacted so strongly.

For a short overview of the controversy as reported at the time: Legal Cheek (11 Feb 2026). For a more analytical legal-media perspective: Wiggin LLP commentary (16 Feb 2026).

3) The MoJ/HMCTS case: “data protection” and alleged sharing with AI

The government’s public position, as reflected in parliamentary statements, has been that data protection responsibilities were engaged. The allegation was not merely that the data existed, but that data was used or shared in a way that was not authorised by the relevant agreement — and that the information at issue could include sensitive personal identifiers.

In the Commons debate, MPs referenced the passing of information to an AI company, including addresses and dates of birth. You can read the relevant passages directly in Hansard: Court Reporting Data (Commons, 10 Feb 2026). The Lords debate similarly framed the core concern as sharing private/personal legally sensitive information with a third-party AI company: Court Reporting Data (Lords, 11 Feb 2026).

Let’s be clear: if victim or defendant identifiers were exposed or processed without a lawful basis, proper security, or appropriate contractual control, that is not a minor technicality. UK GDPR compliance is not optional — particularly where data could create direct risk (victim location, stalking risk, retaliation, intimidation, vigilante harm).

But there is a second question — and this is where policy and constitutional principles collide: even if a breach occurred, does the proportionate remedy have to be “delete the archive”? Or is the correct remedy:

  • Stop the unauthorised processing,
  • Investigate,
  • Implement governance, redaction, licensing and audit controls,
  • And preserve the public-interest value of the dataset?

In other regulated sectors, “burn the library” is rarely considered an intelligent response to a governance failure. You fix governance. You don’t erase institutional memory.

4) What “open justice” actually requires (and what it doesn’t)

“Open justice” is often described as a constitutional principle in common law: justice must be administered in public, with reporting permitted, because scrutiny is a safeguard against arbitrariness and abuse. It supports legitimacy and public confidence.

But open justice is not absolute. Courts can restrict reporting, anonymise parties, hold parts of hearings in private, or impose reporting restrictions where necessary and proportionate — especially to protect children, victims, national security, or the integrity of proceedings.

Here’s the practical point: open justice collapses when information is technically “available” but realistically undiscoverable. If court lists are incomplete, delayed, inaccurate, scattered, or accessible only through relationships and workarounds, then public scrutiny becomes selective and fragile.

A searchable archive changes the baseline. It doesn’t guarantee perfect scrutiny, but it makes scrutiny possible at scale.

The NUJ response captures the concern in direct terms: the state must take data protection seriously, but journalists are worried about the effect on their ability to do their job. NUJ: deletion order response.

5) The real issue: discoverability, not secrecy

Most people misunderstand how court reporting works. They think journalists can simply “look up” what is happening in court.

In practice, magistrates’ courts are high-volume. Hearings move. Lists change. Data may be published late, inconsistently, or in formats that are difficult to search. Court staff are under pressure. Press offices (where they exist) are stretched. The result is that what is formally “public” can become practically opaque.

So when people say “this undermines open justice,” they may not mean “the government is hiding a single case.” They mean: remove the infrastructure of discoverability and you reduce systemic scrutiny.

The wider concern is that once the system is not audited at scale, dysfunctional patterns persist:

  • Overlisting and adjournment churn;
  • Chronic delay;
  • Inconsistent listing practices;
  • Variable use of reporting restrictions;
  • Localised cultures that drift without challenge.

This is where AI becomes relevant — not as hype, but as a tool. AI is exceptionally good at extracting patterns from messy, fragmented data. And patterns are exactly what the justice system needs to be forced to confront.

6) AI: the uncomfortable accelerator of accountability

Here is the uncomfortable truth: AI makes “open justice” more powerful, because it can transform raw listings and outcomes into insight:

  • Where are outcomes diverging without explanation?
  • Which courts are systematically underperforming on timeliness?
  • Which offence types are rising or falling?
  • Do bail decisions correlate with geography in ways that look unjustified?
  • Are certain safeguarding concerns being deprioritised?

For the public, this can mean better scrutiny and informed reform. For institutions, it can feel like a loss of narrative control.

But AI also increases privacy risk. Aggregation is a form of power: data that is safe in one context can become dangerous in another when combined, enriched, or made searchable. That is why governance matters.

The question is not “AI or no AI.” It is: who is allowed to process court data with AI, under what licence, with what redaction, with what audit trail, and with what sanctions for misuse?

7) Data protection and open justice can coexist — if you design for both

If there was an unauthorised transfer of personal data to a third-party AI provider, that needs to be addressed. Strongly. But the correct fix is not necessarily deletion. The correct fix is a governance framework that takes seriously both:

  1. Lawful processing and security (UK GDPR; DPA 2018; contractual controls; access logs; DPIAs); and
  2. Open justice functions (discoverability; auditability; press access; public interest research).

A mature framework would include:

(A) Role-based access

Not everyone needs the same level of detail. A press-accredited journalist may need more than the general public. An academic researcher may need a structured dataset but not identifiers. A safe model is tiered access with clear rules.

(B) Default minimisation and redaction

Listings can be published in a way that is still meaningful but reduces harm: names may be necessary for open justice in many cases, but addresses and dates of birth generally aren’t. A “privacy by design” listing format is possible.
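
As a sketch of what “privacy by design” could mean in practice (every field name below is hypothetical and implies nothing about how HMCTS actually structures listing data):

```python
# Hypothetical raw listing record; field names invented for illustration.
raw_listing = {
    "case_number": "0123456789",
    "defendant_name": "A. Person",
    "defendant_address": "1 Example Street, Townsville",  # high-risk identifier
    "defendant_dob": "1990-01-01",                        # high-risk identifier
    "offence": "Theft (summary)",
    "hearing_date": "2026-03-02",
    "court": "Townsville Magistrates' Court",
}

# Minimisation by default: publish only the fields open justice plausibly
# needs, so high-risk identifiers never leave the controlled environment.
PUBLIC_FIELDS = {"case_number", "defendant_name", "offence",
                 "hearing_date", "court"}

public_listing = {k: v for k, v in raw_listing.items() if k in PUBLIC_FIELDS}
print(public_listing)
```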

(C) Contractual control over processors

If AI tools are used, the relationship between controller and processor must be contractually controlled, audited, and limited. “Testing” is still processing. “Internal development” is still processing.

(D) Audit logs and sanctions

If a platform is given access to sensitive data, there must be a reliable audit trail and enforceable consequences for misuse.

This is the kind of approach the state should model. It’s what we demand of the private sector. The justice system should not be a governance laggard.

8) “Just use official channels” is not a sufficient answer

One argument raised in public discussion is that journalists can still access listings through official HMCTS channels, so the deletion of a private archive is not fatal.

Here’s the hard reality: official availability does not necessarily equal practical usability. The difference between:

  • a fragmented set of daily lists, and
  • a searchable, longitudinal archive

is the difference between “seeing a hearing” and “auditing a system”.

It’s the audit function that scares people — and it’s the audit function that reform needs.

For contemporaneous legal-sector analysis and a timeline-style overview, see: Wiggin LLP commentary.

9) The proportionality question: why “delete it” feels extreme

When government acts, it must act proportionately — especially when its actions collide with constitutional principles.

If the problem was a specific breach, a proportionate response normally looks like:

  • Stop the unlawful processing immediately;
  • Preserve evidence;
  • Investigate scope and impact;
  • Notify where legally required;
  • Fix governance;
  • Implement redaction and access controls;
  • Resume service under a compliant licence.

Deleting a historic archive can be justified in certain cases — for example, if the archive itself is irredeemably unsafe and cannot be lawfully held. But that is a high threshold. And if that threshold is met, the next question is: why was the data shared in that form in the first place, and why was it not already governed appropriately?

Open justice is a public asset. When you destroy an archive that underpins scrutiny, you don’t merely “solve” a compliance problem — you erase a public accountability mechanism.

10) What this means for litigants, victims and the public

This is not only about journalists. It touches:

Victims and vulnerable witnesses

Privacy matters. Safety matters. If addresses/DoBs are handled recklessly, it can cause real-world harm. A governance regime must centre safeguarding and risk. The state is right to be strict about that.

Defendants

Defendants have rights too. Public identification can be lawful and appropriate in open court, but bulk data aggregation can create long-tail harm (employment, housing, vigilantism), particularly where cases end in acquittal or discontinuance. This is why minimisation and careful retention rules matter.

The public

The public interest in open justice is not abstract. It includes the ability to scrutinise how domestic abuse is treated, how repeat offenders are sentenced, how grooming cases are prosecuted, and whether systemic failures are being ignored.

The debate is often framed as “privacy vs transparency.” A better framing is: “privacy and transparency with engineering-grade governance.”

11) A practical blueprint for a lawful court data ecosystem

If we want open justice that survives the AI era, we need to stop improvising and start designing. Here is a blueprint that would satisfy most of the legitimate concerns on all sides:

  1. Define a canonical “public listing dataset” with minimised fields (no addresses; no full DoB; protect victims by default where appropriate).
  2. Publish in a consistent, machine-readable format so that “discoverability” is not dependent on private scraping or informal relationships.
  3. Implement a press and research licence with tiered access, clear contractual controls, audit logs, and enforcement.
  4. Create a secure research environment (think “data safe haven”) where higher-sensitivity data can be used for public-interest research under supervision.
  5. Mandate DPIAs for any new processing at scale, including any AI model training or automated analytics.
  6. Independent oversight: an external advisory panel including press, victims’ advocates, privacy experts and court users.
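
To give items 1 and 2 some shape, a canonical, machine-readable listing record might look like the sketch below. This is purely hypothetical: no official schema of this kind currently exists, and every field name is illustrative.

```python
from typing import TypedDict

# Hypothetical canonical "public listing dataset" record, already
# minimised (no address, no full date of birth).
class PublicListing(TypedDict):
    court: str           # e.g. "Townsville Magistrates' Court"
    hearing_date: str    # ISO 8601 date, e.g. "2026-03-02"
    case_number: str
    hearing_type: str    # e.g. "first hearing", "trial", "sentence"
    offence_summary: str
    restrictions: str    # reporting-restriction flag, empty if none

example: PublicListing = {
    "court": "Townsville Magistrates' Court",
    "hearing_date": "2026-03-02",
    "case_number": "0123456789",
    "hearing_type": "first hearing",
    "offence_summary": "Theft (summary)",
    "restrictions": "",
}
```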

If you work in legal ops, you’ll recognise this: it is the same control architecture we use for health data, financial data, and regulated client data. The justice system deserves no less.

12) What you can do if you care about this

  • Read the parliamentary record and compare the stated rationale with the real-world impact: Commons Hansard (10 Feb 2026) and Lords Hansard (11 Feb 2026).
  • Track journalist-body positions (NUJ is a good start): NUJ statement.
  • Ask the right question of policymakers: “What is the new lawful access model — and who is responsible for ensuring discoverability in practice?”
  • Watch for licensing/market engagement notices and consultation opportunities. (Legal commentary sites often summarise these quickly.)
  • If you are a court user or practitioner, keep records. Transparency is partly built from bottom-up documentation — hearing notices, listings, orders, reasons, and procedural history.

Because here is the punchline: if the system cannot be seen, it cannot be improved. And if it cannot be improved, it cannot be trusted.

Regulatory & Editorial Notice (JSH Law Ltd)

This article is published for general information and public-interest commentary only. It does not constitute legal advice and should not be relied upon as such. JSH Law Ltd is not a firm of solicitors and does not provide regulated legal services. If you require legal advice, you should consult a suitably qualified and regulated legal professional.

Where this article refers to third-party reporting, parliamentary materials, organisations, or public cases, it does so for journalistic, educational, and research purposes. External links are provided for reader convenience; JSH Law Ltd is not responsible for the content of external sites.

© JSH Law Ltd | Company No. 16870438 | Manchester (UK) & Kansas (USA)

The Use of AI in Preparing Court Documents: Why the Civil Justice Council Consultation Matters

The Civil Justice Council has launched an eight-week consultation examining whether new rules are needed to regulate the use of artificial intelligence in preparing court documents. Chaired by Lord Justice Birss, the Working Group is considering whether safeguards or formal declarations should apply when legal representatives use AI to draft pleadings, witness statements and expert reports. The consultation recognises both the efficiency benefits of AI and the risks of hallucinated case citations, fabricated authorities and evidential integrity concerns. Particular focus is placed on witness statements and expert evidence, where authenticity is central to the administration of justice. The consultation closes on 14 April 2026. This article explains what is being proposed, why it matters for litigants in person and legal professionals, and how responsible AI use can strengthen — rather than undermine — credibility in court proceedings. PDF here.

Category: AI & Law / Procedural Updates  |  Audience: Litigants in Person & Legal Professionals (England & Wales)

Key takeaways for litigants in person

  • The Civil Justice Council (CJC) is consulting on whether rules should govern the use of AI in preparing court documents.
  • The consultation closes on 14 April 2026.
  • Proposals include possible declarations where AI has been used to generate substantive content.
  • Administrative uses (spell-check, transcription, formatting) are unlikely to require disclosure.
  • Witness statements and expert reports are likely to face stricter safeguards.

What Is This Consultation About?

The Civil Justice Council (CJC) has published an Interim Report and opened an eight-week consultation examining whether procedural rules are needed to regulate the use of artificial intelligence in preparing court documents.

The Working Group is chaired by Lord Justice Birss and includes members of the judiciary, the Bar Council, the Law Society and academic representatives.

The core question is simple but significant:

Should formal rules govern how legal representatives use AI when preparing pleadings, witness statements, skeleton arguments and expert reports?

The consultation paper explains that AI has enormous potential benefits — but also significant risks, particularly around hallucinated case citations, fabricated material and evidential integrity.

Why This Matters

AI is already being used across the legal sector for:

  • Legal research
  • Drafting pleadings
  • Preparing skeleton arguments
  • Summarising disclosure
  • Drafting witness statements
  • Generating expert reports

The consultation recognises that while AI improves efficiency and access to justice, it also introduces risks including:

  • Hallucinated case citations
  • Invented legal authorities
  • Embedded bias in generated content
  • Deepfake or manipulated evidence
  • Hidden metadata (“white text”) manipulation

The administration of justice depends on reliability. If courts cannot trust documents filed before them, confidence in the system erodes.

What the Working Group Proposes

The consultation distinguishes between:

  • Administrative uses (spell-check, formatting, transcription, accessibility tools)
  • Substantive generative uses (AI drafting legal argument, evidence, or expert analysis)

The Working Group’s emerging position suggests:

  • No additional rule required for statements of case or skeleton arguments, provided a legal professional takes responsibility.
  • Stricter controls for witness statements, particularly trial statements.
  • Possible declarations confirming AI has not generated witness evidence.
  • Amendments to expert report statements of truth to require disclosure of AI use.

Witness Statements: The Most Sensitive Area

The report strongly indicates that generative AI should not be used to create or alter substantive witness evidence.

The concern is straightforward:

  • Witness statements must be in the witness’s own words.
  • AI “improving” phrasing may alter tone, emphasis or meaning.
  • Courts rely heavily on authenticity.

The Working Group proposes a declaration that AI has not been used to generate, embellish or rephrase evidence in trial witness statements.

That is significant. It signals that evidential integrity is where regulation will likely concentrate.

Expert Reports: Transparency Rather Than Prohibition

Unlike witness statements, expert reports may legitimately use AI tools for:

  • Data analysis
  • Document extraction
  • Technical modelling

However, the consultation proposes that experts should disclose and explain any AI use beyond administrative functions.

The aim is transparency — not prohibition.

What About Litigants in Person?

Notably, this consultation does not focus on regulating litigants in person.

The paper recognises that many unrepresented parties may rely on AI as their only accessible form of legal assistance.

That presents a policy tension:

  • AI can improve access to justice.
  • But AI can generate inaccuracies.
  • Litigants may lack the expertise to verify output.

Any regulation must therefore balance fairness with accessibility.

Should There Be Mandatory AI Declarations?

International approaches vary. Some US courts require certification of AI use. Others do not.

The Working Group is cautious. It recognises that:

  • AI is rapidly integrating into legal software.
  • It may soon be impossible to distinguish “AI use” from ordinary software-assisted drafting.
  • Over-regulation may increase delay and satellite litigation.

The likely direction appears to be:

  • No blanket declaration for routine drafting.
  • Targeted safeguards for evidence.
  • Clear professional responsibility.

Why This Consultation Is Forward-Looking

AI is not going away. The question is not whether it will be used — but how responsibly.

The consultation reflects a mature approach:

  • Encourage innovation.
  • Protect evidential integrity.
  • Preserve public confidence.
  • Avoid stifling access to justice.

That balance is critical.

How to Respond to the Consultation

The consultation closes on 14 April 2026.

Responses can be submitted by completing the consultation cover sheet and sending it to:

CJC.AI.consultation@judiciary.uk

Questions about the process can be directed to:

CJC@judiciary.uk

Responses may be submitted in Word or PDF format.

What This Means Practically

If you are preparing court documents using AI:

  • Verify all case citations manually.
  • Check statutory references independently.
  • Do not use AI to generate witness evidence.
  • Retain responsibility for every word filed.

AI is a tool. It is not a shield.

A Realistic Perspective

Used responsibly, AI enhances efficiency. Used carelessly, it damages credibility.

The Civil Justice Council is not proposing a ban. It is seeking proportionate governance.

That distinction matters.


Book a 15-minute consultation (phone)

If you are navigating litigation and considering using AI tools, or if you are concerned about AI-generated material in your case, you can book a 15-minute consultation below:

Technology should strengthen your case — not undermine it.


Regulatory & Editorial Notice

This article provides general commentary only and does not constitute legal advice. JSH Law provides litigation support services to litigants in person and does not conduct reserved legal activities. References to consultation materials are for informational purposes only.

You can download the PDF here: Interim-Report-and-Consultation-Use-of-AI-for-Preparing-Court-Documents-2.pdf

Access to Justice Will Not Improve Until Litigants in Person Are Treated as First-Class Legal Tech Users

Why courts, regulators, and legal-tech designers must stop building only for lawyers

“Access to justice” is one of the most repeated phrases in modern legal reform — and one of the least honestly examined in day-to-day court reality.

Across England and Wales, litigants in person (LiPs) now make up a significant proportion of users in family proceedings, civil disputes, tribunals and administrative processes. Yet much of the system — and much of legal tech — still assumes that a lawyer is the default user, and the unrepresented party is the exception.

They are not.

LiPs are a structural feature of the justice landscape. Until courts, regulators, and legal-tech providers explicitly recognise LiPs as first-class stakeholders, “access to justice” will remain aspirational rather than operational.

Key takeaways

  • Litigants in person are not marginal — they are central to how courts now function.
  • Legal tech designed only for lawyers often creates disadvantage for LiPs.
  • Courts can reduce chaos by setting clearer procedural standards and roadmaps.
  • Regulators can unlock innovation by clarifying the line between navigation support and legal advice.
  • Human-centred tools can improve compliance, fairness and efficiency without replacing lawyers.

1. The post-LASPO reality: LiPs are the system, not a problem within it

In a post-LASPO environment, it is common for one or both parties to be unrepresented. That reality increases pressure on judges, listing, court staff, and the opposing party (who may be represented). It also increases the risk of:

  • missed deadlines and procedural missteps
  • overlong or irrelevant bundles
  • adjournments and delay
  • hearings spent explaining process rather than determining issues
  • avoidable unfairness

These are not personal failings. They are predictable outcomes when systems are built around assumptions that no longer match real users.

2. Why most legal-tech tools fail litigants in person

Many tools that work well for professionals become actively unhelpful when applied to LiPs without redesign. Legal platforms typically assume users can:

  • interpret procedural stages and sequencing
  • identify which evidence is relevant (and why)
  • understand directions, service rules, and deadlines
  • use legal terminology accurately
  • separate emotion from issues and evidence

LiPs often cannot do those things consistently — not because they lack intelligence, but because nobody teaches them the system, and the learning curve is steep under stress.

What this looks like in practice

When LiPs are unsupported, courts see repeat patterns: missed deadlines, misfiled documents, sprawling narratives, under-evidenced allegations, and confusion about what the court is deciding at each stage. These patterns are not random — they are design signals.

3. What courts must do: procedural clarity (not paternalism)

Courts are not powerless. A high-LiP environment requires courts to treat process design as part of justice delivery.

At minimum, courts should publish LiP-aware standards that clearly define:

  • core document types (e.g., chronology, statement, position statement, schedule of allegations/concerns where relevant)
  • what is needed at each stage (first hearing, directions, fact-finding, final hearing)
  • proportionality expectations for evidence and bundles
  • how to comply with directions and what happens if parties do not

Judges often explain process in court. The problem is inconsistency, stress, and the lack of a repeatable structure. Written roadmaps and standardised expectations reduce friction for everyone.

4. The regulator’s role: legitimising navigation tools without fear

One of the biggest barriers to LiP-focused legal tech is regulatory uncertainty. Developers and support services are often risk-averse because they fear crossing into “legal advice”.

Regulators can unlock responsible innovation by drawing a clearer line between:

  • procedural navigation (what the process is, what documents are, how to organise information, how to comply with directions), and
  • legal advice (what someone should do legally, the merits of their case, or how the court is likely to decide).

Navigation support vs legal advice (simple framework)

Usually safe procedural support:

  • Process: explaining stages (e.g., directions → fact-finding → final hearing)
  • Compliance: helping track deadlines and service requirements
  • Organisation: structuring a chronology, index, exhibits, bundle sections
  • Plain English: translating court orders into clear tasks

Usually crosses into legal advice:

  • Merits: advising whether someone should apply/oppose
  • Strategy: recommending what to plead or concede
  • Outcomes: predicting likely judicial findings/results
  • Representation: acting as if solicitor-client duties exist

5. What “LiP-first” legal tech actually looks like

LiP-centred legal tech does not have to be “AI giving legal advice”. The biggest gains come from tools that help people:

  • understand where they are in the process
  • know what is expected next
  • organise information coherently
  • comply with directions and deadlines
  • present evidence in proportionate, readable form
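
The deadlines point above needs no sophisticated AI at all. Even a toy tracker like the following sketch (tasks and dates invented for illustration; a real tool would read them from the sealed order) removes one of the most common failure modes:

```python
from datetime import date

# Hypothetical directions extracted from an order: (task, due date).
directions = [
    ("File and serve witness statement", date(2026, 4, 1)),
    ("Agree and lodge the bundle", date(2026, 4, 20)),
]

today = date.today()
for task, due in sorted(directions, key=lambda d: d[1]):
    days_left = (due - today).days
    status = "OVERDUE" if days_left < 0 else f"{days_left} days left"
    print(f"{due.isoformat()}  {task}: {status}")
```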

Simple flow diagram: How LiP-first tools reduce friction

  1. Courts publish clear standards: document types, stage-by-stage roadmaps, proportionality, bundle structure.

  2. Regulators clarify boundaries: navigation/compliance tools are legitimised; the “legal advice” line is explicit.

  3. Legal tech designs to the standard: guided workflows with timelines, bundles, checklists, deadlines, plain-English orders.

  4. LiPs comply more easily: better documents, fewer adjournments, clearer issues, fairer hearings.

This is not about replacing lawyers. It’s about reducing avoidable failure points and making procedure intelligible.

6. Why co-design matters: building with, not for, litigants

The most credible way to improve tools for LiPs is co-design: courts, regulators, practitioners, support services, and litigants all informing the build. Without LiPs at the table, products will keep optimising for the wrong user — and courts will keep absorbing the cost.

7. The cost of doing nothing

When systems ignore their dominant user group, the impact is predictable:

  • longer hearings and heavier judicial case management
  • more procedural unfairness and inconsistent outcomes
  • greater emotional and financial harm (especially in family cases)
  • higher public cost through delay and repeat applications

LiP-first design is not only a fairness issue — it is a system efficiency issue.

8. A realistic path forward

Access to justice improves when:

  1. Courts set clear procedural standards and publish roadmaps designed for LiP reality.
  2. Regulators legitimise navigation and compliance tools, and make boundaries explicit.
  3. Legal-tech teams design for human understanding, not just professional efficiency.
  4. LiPs are treated as stakeholders in system design, not problems to be managed.

Call to action

If you are a litigant in person struggling with process — or you work in legal tech, policy, or court-facing innovation — this is a space where practical collaboration matters.

JSH Law works at the intersection of family justice, legal process, and responsible AI-assisted navigation, with a focus on making systems intelligible for real people (not just professionals).

  • Need help structuring a chronology, bundle, or evidence set?
  • Building LiP-centred tools and want practitioner input?
  • Want a repeatable workflow that improves compliance and reduces stress?

Get in touch via the contact page

Regulatory & Editorial Notice (JSH Law)
This article is published for general information and public legal education. It is not legal advice and should not be relied upon as such. Laws, procedural rules, guidance and practice may change. Where this article refers to third-party materials, organisations, or public-interest issues, those references are informational and do not imply endorsement. If you need advice on your specific circumstances, you should obtain independent legal advice from a regulated professional or appropriate support service.

Permission Refused? Using AI to decide what to do next — and when to stop

Judicial Review & AI – Part 8 (Final)


Introduction: the hardest moment in Judicial Review

For many litigants in person, this is the moment that hurts the most.

You have:

  • identified a procedural failure,
  • organised your evidence,
  • complied with the Pre-Action Protocol,
  • issued proceedings,
  • met deadlines,
  • followed the rules.

And then the letter arrives.

Permission refused.

Often with:

  • short reasons,
  • no hearing,
  • and no sense of closure.

At this point, the most important skill is judgment — not persistence.

This final article explains:

  • what a refusal of permission actually means,
  • what realistic options exist next,
  • how AI can help you make rational decisions, not emotional ones,
  • and how to recognise when stopping is the strongest legal move.

What a refusal of permission really means (legally)

At the permission stage, the Administrative Court is not saying:

“You are wrong.”

It is saying:

“This is not a case the High Court should hear.”

That distinction matters.

Permission may be refused because:

  • the claim is not arguable,
  • an alternative remedy exists,
  • the issue is not suitable for Judicial Review,
  • delay is fatal,
  • the grounds are merits-based,
  • or the case is disproportionate.

Some refusals are about substance.
Many are about jurisdiction and restraint.

Understanding which matters.


The court’s institutional position on stopping JR claims

The High Court is deeply conscious of:

  • finality,
  • judicial economy,
  • and the danger of endless litigation.

This is why:

  • permission is filtered on the papers,
  • oral renewals are tightly controlled,
  • repeated applications are discouraged.

Judicial Review is not designed to be:

  • iterative,
  • escalatory,
  • or relentless.

It is designed to be exceptional.


The three lawful options after permission is refused

After refusal, litigants in person usually face three choices:

  1. Seek an oral renewal
  2. Reframe or abandon the JR
  3. Stop — and redirect energy elsewhere

AI can help you evaluate each — but cannot make the decision for you.


Option 1: Oral renewal — when is it justified?

You may request an oral renewal hearing if permission is refused on the papers.

This is not a second bite at the cherry in the ordinary sense.

The court will only engage if:

  • there is a clear error in the refusal reasoning,
  • something material was misunderstood,
  • or the issue was not adequately addressed on the papers.

Oral renewals are not an opportunity to:

  • restate arguments,
  • add new evidence (without permission),
  • re-argue the merits.

How AI helps evaluate oral renewal prospects

AI can assist by:

  • analysing the refusal reasons,
  • comparing them to your grounds,
  • identifying whether the judge addressed the correct issue,
  • flagging whether the refusal turns on:
    • jurisdiction,
    • alternative remedy,
    • or merits drift.

If the refusal is:

  • clearly jurisdictional,
  • clearly about suitability,
  • or clearly about restraint,

an oral renewal is usually not worth pursuing.

AI helps remove hope-based decision-making.


Option 2: Reframing — when JR was the wrong tool

Sometimes permission is refused because:

  • the legal issue exists,
  • but Judicial Review was the wrong vehicle.

Common examples:

  • the issue belongs in an appeal,
  • a complaint route exists,
  • another statutory remedy is available,
  • the problem is systemic but non-justiciable.

This does not mean:

  • you imagined the problem,
  • or the process was flawless.

It means the High Court is not the forum.


How AI helps here

AI can help you:

  • map refusal reasons against alternative routes,
  • identify whether:
    • an appeal can still be pursued,
    • a renewed application is possible,
    • or a non-litigious route exists.

This is strategic redirection, not surrender.


Option 3: Stopping — why this is often the strongest move

Stopping is not failure.

In fact, one of the marks of legal maturity is knowing when a remedy is exhausted.

Continuing after:

  • a clear jurisdictional refusal,
  • no procedural error in the refusal,
  • and no viable alternative framing

often leads to:

  • wasted resources,
  • escalating stress,
  • and reputational damage.

Courts do notice persistence without discipline.


The ethical dimension: AI should reduce harm, not fuel obsession

This is where Law + AI intersects with ethics.

AI can:

  • generate arguments endlessly,
  • suggest variations,
  • keep litigation alive indefinitely.

That does not mean it should.

Responsible AI use means:

  • stopping when law stops,
  • resisting sunk-cost fallacy,
  • recognising diminishing returns.

You are still responsible for decisions.

AI should support clarity, not compulsion.


Common emotional traps after permission refusal

Litigants in person often fall into predictable patterns:

  • “The judge didn’t understand — I just need to explain again.”
  • “If I phrase it differently, it will work.”
  • “Someone must eventually listen.”

These reactions are human — but legally dangerous.

Judicial Review is not persuasion-by-volume.

AI is most valuable when it interrupts emotional escalation, not amplifies it.


Using AI to perform a “JR exit review”

One of the most powerful uses of AI at this stage is a structured exit review.

Questions AI can help you answer:

  • What exactly was refused?
  • On what basis?
  • Is there any legal error in the refusal itself?
  • Is an oral renewal proportionate?
  • What alternative routes exist?
  • What are the costs (financial and emotional) of continuing?

This turns a painful moment into a controlled conclusion.


The reputational aspect litigants rarely consider

Courts are institutional actors.

Repeated:

  • unmeritorious renewals,
  • disproportionate applications,
  • or refusal to accept finality

can affect how future applications are perceived.

Stopping at the right moment preserves:

  • credibility,
  • energy,
  • and future options.

AI can help you see this before damage occurs.


The role of court administration after refusal

Once permission is refused, court interaction typically returns to:

  • administrative closure,
  • compliance with directions,
  • and finality processes operated under HMCTS.

At this stage, clarity matters more than persistence.


What success looks like at the end of a JR journey

Success is not always:

  • permission granted,
  • or a quashing order.

Sometimes success is:

  • forcing a decision via the PAP stage,
  • clarifying the legal position,
  • stopping an unlawful delay,
  • or confirming that JR is not the route.

That knowledge is not wasted.

It is hard-earned legal clarity.


Key Takeaways (for litigants in person)

  • Permission refusal is a jurisdictional decision, not a moral judgment.
  • Oral renewals are narrow and rarely succeed.
  • Reframing is sometimes appropriate; repeating usually is not.
  • Stopping at the right time is a mark of legal strength.
  • AI should be used to:
    • evaluate realistically,
    • reduce emotional escalation,
    • and support principled decisions.
  • Endless litigation is not access to justice.

Judicial Review is exceptional — and knowing when it ends is part of using it lawfully.


Closing the series: what this resource is for

This eight-part series was designed to:

  • demystify Judicial Review,
  • protect litigants in person from procedural harm,
  • show how AI can be used responsibly and ethically,
  • and restore control in situations that often feel powerless.

AI does not replace law.
Law does not bend to persistence.
But clarity — properly supported — restores agency.


Call to Action

If you are:

  • facing a permission refusal,
  • unsure whether to pursue an oral renewal,
  • or need help deciding whether to stop,

You may wish to seek structured, realistic support before taking any further step.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review is discretionary, time-limited, and subject to strict procedural controls.
Permission refusal often represents the lawful end of the process.

Readers should seek independent legal advice where appropriate before pursuing further litigation.

Managing Deadlines, Bundles, and Compliance with AI – Procedural discipline in Judicial Review (where cases are really lost)

Judicial Review & AI – Part 7


Introduction: most Judicial Review cases fail quietly

When Judicial Review claims fail, it is rarely dramatic.

There is no cross-examination.
No damning judgment.
No public vindication or condemnation.

Instead, the claim simply:

  • times out,
  • breaches a rule,
  • fails to comply with a direction,
  • or collapses under procedural non-compliance.

For litigants in person, this is often devastating — not because the issue lacked merit, but because process defeated substance.

This article explains:

  • why procedural discipline is critical in Judicial Review,
  • how deadlines and compliance operate in practice,
  • how AI can be used to prevent procedural failure,
  • and how to avoid the common traps that quietly end claims.

Judicial Review is procedural law, not just public law

Judicial Review sits at the intersection of:

  • public law principles, and
  • strict civil procedure.

It is governed by:

  • CPR Part 54,
  • the Administrative Court Practice Directions,
  • and specific court directions once proceedings are issued.

The High Court expects near-perfect compliance.

Latitude for litigants in person exists — but it is limited.

Courts will not:

  • extend time automatically,
  • rewrite non-compliant documents,
  • excuse repeated procedural failures.

This is why AI, used properly, can be invaluable — not as a strategist, but as a discipline enforcer.


The three procedural pressure points in Judicial Review

Judicial Review claims typically fail at one of three procedural stages:

  1. Time limits
  2. Bundles
  3. Compliance with directions

Each is unforgiving.
Each is manageable — with the right systems.


1. Time limits: the guillotine that does not move

Judicial Review claims must be brought:

  • promptly, and
  • in any event within three months of the decision or failure challenged.

This is not flexible.

Even a strong claim can be refused solely for delay.

Courts repeatedly emphasise this because:

  • delay undermines legal certainty,
  • public bodies must be able to rely on decisions.

Litigants in person often underestimate how quickly time runs — especially where silence or inaction is involved.


Where AI helps with time limits

AI can assist by:

  • calculating elapsed time from key dates,
  • flagging approaching deadlines,
  • distinguishing between:
    • continuing failures, and
    • single decisions with ongoing effects.

However, AI cannot decide when time starts to run.

You must determine:

  • the operative date,
  • whether there is a continuing duty,
  • whether delay is justifiable.

AI helps you see — it does not excuse lateness.
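
To make the date arithmetic concrete, here is a minimal sketch in Python, assuming you have already identified the operative date yourself. The function names are illustrative, and the three-month calculation is a rough working aid, not a substitute for checking CPR 54.5 or the separate "promptly" requirement.

```python
from datetime import date

def add_three_months(start: date) -> date:
    """Return the date three calendar months after start.

    If the target month is shorter (e.g. adding to 31 August),
    clamp to the last day of that month.
    """
    month = start.month + 3
    year = start.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    day = start.day
    while True:
        try:
            return date(year, month, day)
        except ValueError:
            day -= 1  # step back to the last valid day of the month

def days_remaining(operative_date: date, today: date) -> int:
    """Days left before the outer three-month limit expires.

    Negative means the limit has already passed. "Promptly" may
    require action well before this date.
    """
    return (add_three_months(operative_date) - today).days

# Example: decision dated 12 March 2025, checked on 20 May 2025
print(days_remaining(date(2025, 3, 12), date(2025, 5, 20)))  # 23 days left
```

The design point is deliberate: the code only does arithmetic. Choosing the operative date remains a legal judgment.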


2. Bundles: why presentation equals credibility

Judicial Review is decided largely on the papers.

Judges expect:

  • clean,
  • paginated,
  • indexed bundles,
  • with only relevant material included.

A poor bundle signals:

  • lack of focus,
  • lack of seriousness,
  • lack of procedural understanding.

This affects outcomes — even subconsciously.


What courts expect from JR bundles

A compliant bundle typically includes:

  • the claim form,
  • statement of facts and grounds,
  • evidence (exhibits),
  • relevant correspondence,
  • any court directions.

It must be:

  • logically ordered,
  • consistently paginated,
  • clearly indexed.

Courts will not tolerate:

  • sprawling appendices,
  • duplicated documents,
  • emotional exhibits,
  • unexplained screenshots.

How AI helps with bundles (and where it must stop)

AI is excellent at:

  • ordering documents,
  • checking pagination consistency,
  • generating draft indices,
  • identifying duplicates.

AI must not:

  • decide what is legally relevant,
  • exclude documents without review,
  • alter originals.

Think of AI as your bundle manager, not your legal editor.
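
As a concrete example of the bundle-manager role, here is a minimal Python sketch, assuming your bundle documents sit as PDFs in a single folder (the folder name is hypothetical). It flags byte-identical duplicates for human review and deletes nothing.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_exact_duplicates(folder: str) -> dict[str, list[Path]]:
    """Group files in `folder` by content hash.

    Returns only groups with more than one file: candidate
    duplicates to be reviewed (never removed) by a human.
    """
    groups: defaultdict[str, list[Path]] = defaultdict(list)
    for path in sorted(Path(folder).glob("*.pdf")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

# Hypothetical usage:
for digest, paths in find_exact_duplicates("bundle_documents").items():
    print("Possible duplicates:", ", ".join(p.name for p in paths))
```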


3. Compliance with directions: the silent killer

Once proceedings are issued, the court will issue directions.

These may include:

  • deadlines for acknowledgements of service,
  • limits on evidence,
  • formatting requirements,
  • page limits.

Failure to comply is taken seriously.

Courts expect:

  • directions to be read carefully,
  • complied with precisely,
  • or varied formally if impossible.

“I didn’t understand” is rarely enough.


Where AI adds value here

AI can:

  • summarise court directions,
  • convert them into task lists,
  • flag inconsistencies,
  • track compliance status.

This is one of the safest and most valuable uses of AI (a sketch follows below).

What AI must not do:

  • interpret directions creatively,
  • assume flexibility,
  • replace careful reading.
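
To show what "converting directions into a task list" can look like, here is a minimal Python sketch. The directions and dates are invented examples; real directions must be read in full and copied precisely.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DirectionTask:
    description: str  # the direction, copied verbatim where possible
    deadline: date
    done: bool = False

def outstanding(tasks: list[DirectionTask], today: date) -> list[str]:
    """List incomplete tasks in deadline order, flagging overdue ones."""
    report = []
    for task in sorted(tasks, key=lambda t: t.deadline):
        if not task.done:
            status = "OVERDUE" if task.deadline < today else "due"
            report.append(f"{status} {task.deadline.isoformat()}: {task.description}")
    return report

# Invented example directions:
tasks = [
    DirectionTask("File and serve skeleton argument (max 20 pages)", date(2025, 6, 2)),
    DirectionTask("Lodge paginated, indexed hearing bundle", date(2025, 6, 9)),
]
for line in outstanding(tasks, today=date(2025, 6, 4)):
    print(line)
```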

The role of court administration and the reality of compliance

Judicial Review cases often involve interaction with court systems operated under HMCTS.

This adds complexity:

  • electronic filing systems,
  • automated acknowledgements,
  • varying administrative practices.

AI can help track:

  • what has been submitted,
  • what has been acknowledged,
  • what remains outstanding.

But responsibility remains yours.


Common procedural failures litigants in person make

Judicial Review claims often fail because:

  • documents are filed late,
  • bundles exceed page limits,
  • directions are misunderstood,
  • amendments are made without permission,
  • informal correspondence replaces formal steps.

These failures are rarely cured.

AI helps by enforcing checklists, not by improvising.


Procedural discipline vs flexibility: the court’s view

Courts balance:

  • access to justice,
  • against fairness to public bodies,
  • and efficient use of court resources.

Litigants in person are not expected to be perfect — but they are expected to be organised and serious.

Repeated non-compliance erodes goodwill rapidly.

AI, used properly, helps demonstrate:

  • respect for the process,
  • reliability,
  • proportionality.

Using AI as a procedural “second pair of eyes”

One of the best uses of AI is review, not drafting.

Examples:

  • “Have I complied with every direction?”
  • “Are there any inconsistencies in dates or pagination?”
  • “Is anything missing that the court expects?”

AI excels at spotting patterns and omissions.

It should be used before, not after, filing.


What AI must never be used to do procedurally

AI must not:

  • decide to ignore directions,
  • guess court expectations,
  • file documents autonomously,
  • substitute legal judgment.

Courts expect human responsibility.

AI is invisible to them — your compliance is not.


Key Takeaways (for litigants in person)

  • Judicial Review claims often fail on procedure, not law.
  • Time limits are unforgiving.
  • Bundles signal credibility.
  • Directions must be complied with precisely.
  • AI is most useful as a:
    • deadline tracker,
    • bundle organiser,
    • compliance checker.
  • AI does not excuse lateness or non-compliance.

Procedural discipline is not optional — in Judicial Review, procedure is the case.


Preparing for the final stage

After permission decisions, litigants face:

  • permission refusal,
  • conditional grants,
  • or limited permission.

The final article in this series addresses:

  • how to respond rationally,
  • how to assess next steps,
  • and how AI can help avoid throwing good money after bad.

Call to Action

If you are:

  • struggling to manage Judicial Review deadlines,
  • concerned about bundle compliance,
  • or unsure how to interpret court directions,

You may wish to seek structured support before procedural errors become irreversible.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review proceedings are governed by strict procedural rules and judicial discretion.
Failure to comply with time limits, directions, or bundle requirements may result in refusal of permission or dismissal of the claim.

Readers should obtain independent legal advice where appropriate.

From Pre-Action Protocol to Permission – Structuring Judicial Review grounds with AI — and avoiding merits traps

Judicial Review & AI – Part 6


Introduction: permission is the real battlefield

Most Judicial Review claims never reach a full hearing.

They fail — quietly and decisively — at the permission stage.

For litigants in person, this can feel bewildering. Everything may feel unfair. The process may have stalled. Appeals may have been ignored. And yet the court refuses permission in a few short paragraphs.

The reason is usually not lack of injustice.

It is poor framing.

This article explains:

  • what the permission stage is actually testing,
  • how Judicial Review grounds must be structured,
  • why merits-based arguments are fatal,
  • and how AI can help enforce discipline, not inflate claims.

What the permission stage is for (in reality)

Under CPR Part 54, the Administrative Court must decide whether a claim is:

  1. Arguable, and
  2. Suitable for Judicial Review.

This is not a mini-trial.
It is a filtering exercise.

Judges are asking:

  • Is this a genuine public-law issue?
  • Is there an alternative remedy?
  • Is the claim focused and lawful?
  • Is it proportionate for the High Court?

If the answer to any of these is “no”, permission is refused.


Why litigants in person struggle most at this stage

Litigants in person often:

  • understand the facts deeply,
  • experience the injustice personally,
  • know exactly what feels wrong.

But Judicial Review does not operate on feelings.

It operates on:

  • duties,
  • legality,
  • jurisdiction,
  • restraint.

The hardest shift is moving from:

“This decision was wrong”
to
“This decision-making process was unlawful.”

AI can help enforce that shift — if used correctly.


The structure of Judicial Review grounds (what the court expects)

Judicial Review grounds are not free-form.

They are expected to follow a disciplined structure:

  1. The decision (or failure) challenged
  2. The legal duty or power
  3. The public-law ground
  4. How the duty was breached
  5. Why Judicial Review is appropriate
  6. The remedy sought

If any of these are missing or muddled, permission is at risk.


Ground 1: identifying the correct target

Your grounds must clearly identify:

  • what is being challenged,
  • when it occurred,
  • who is responsible.

This may be:

  • a refusal,
  • a failure to determine,
  • a procedural decision,
  • or a constructive refusal.

Vague formulations (“the court has ignored me”) almost always fail.

AI can assist by:

  • enforcing specificity,
  • flagging ambiguity,
  • aligning grounds with your chronology.

Ground 2: identifying the legal duty

This is where many claims collapse.

Judicial Review requires:

  • a legal duty,
  • not just a power,
  • and not just an expectation.

The question is:

Was the public body required by law to act — and did it fail to do so lawfully?

Without a duty, there is no unlawfulness.

AI can help:

  • check whether you are assuming a duty,
  • flag where a duty needs to be evidenced,
  • prevent overstatement.

But you must verify the law.


Ground 3: choosing the correct public-law ground

Most JR claims rely on one (sometimes two) grounds:

Illegality

The decision-maker:

  • misunderstood the law,
  • failed to exercise a required power,
  • or acted outside jurisdiction.

Procedural unfairness

The process was unfair because:

  • no reasons were given where required,
  • no opportunity to be heard was provided,
  • mandatory procedure was not followed.

Irrationality

A very high threshold — rarely a realistic ground for litigants in person.

AI can help prevent the common mistake of:

  • pleading all grounds “just in case”.

Courts view that as lack of focus.


The single biggest mistake: merits drift

Merits drift occurs when:

  • arguments about fairness,
  • disagreement with reasoning,
  • or dissatisfaction with outcomes

creep into what should be a process challenge.

Examples of merits drift:

  • arguing evidence should have been weighed differently,
  • asserting bias without procedural basis,
  • challenging findings of fact.

These are appeal issues — not Judicial Review issues.

AI is particularly useful here (see the sketch after this list):

  • it can flag evaluative language,
  • identify opinion-based phrasing,
  • and force re-framing into procedural terms.
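
As a deliberately crude illustration of this kind of check, here is a minimal Python sketch. The word list is an assumption for demonstration only; a genuine AI review is more nuanced, and a flagged sentence is a prompt for human re-reading, not a verdict.

```python
import re

# Evaluative words that often signal merits drift in draft grounds.
# Illustrative only: a match flags a sentence for review; it does
# not prove the ground is defective.
EVALUATIVE = [
    "unfair", "wrong", "biased", "unjust", "should have",
    "failed to appreciate", "obviously",
]

def flag_merits_drift(draft: str) -> list[str]:
    """Return sentences containing evaluative phrasing for human review."""
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [s for s in sentences
            if any(word in s.lower() for word in EVALUATIVE)]

draft = ("The respondent failed to determine the appeal within a "
         "reasonable time. The judge was obviously biased and the "
         "decision was unfair.")
for sentence in flag_merits_drift(draft):
    print("Review:", sentence)
```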

Keeping law and fact separate (critical discipline)

Judicial Review requires:

  • facts to be stated neutrally,
  • law to be applied to those facts,
  • not blended together.

A common error is embedding argument into factual narrative.

AI can help by:

  • separating factual chronology from legal analysis,
  • highlighting where language crosses the line,
  • enforcing neutral drafting.

This separation builds judicial trust.


Alternative remedy: the silent killer of JR claims

Even where unlawfulness exists, Judicial Review may still fail if:

  • an appeal route exists,
  • or another adequate remedy is available.

Courts are firm on this.

You must:

  • identify the appeal route,
  • explain whether it exists in reality,
  • and justify why JR is still appropriate.

This is where litigants in person often underestimate the burden.

AI can help:

  • structure this justification,
  • but cannot invent a lack of remedy where one exists.

Remedy: what you can (and cannot) ask for

Judicial Review remedies are limited.

You may ask for:

  • a decision to be quashed,
  • a matter to be reconsidered lawfully,
  • a duty to be performed.

You cannot ask the High Court to:

  • decide the underlying appeal,
  • substitute its own view of the facts,
  • grant compensation (save in rare cases).

AI can help test whether the remedy sought aligns with JR principles.


How AI should be used at the permission stage

AI is best used as a quality-control tool, not a generator.

Proper uses include:

  • checking internal consistency,
  • identifying merits drift,
  • ensuring each ground maps to evidence,
  • testing whether each ground answers the “so what?” question.

AI should not:

  • expand arguments,
  • multiply grounds,
  • add speculative claims,
  • generate case law without verification.

Permission-stage discipline is about less, not more.


The court’s perspective: what judges scan for first

Judges reviewing permission applications often:

  • skim first,
  • assess focus,
  • test plausibility quickly.

They are alert to:

  • scattergun pleading,
  • emotional language,
  • disproportionate claims.

A tight, restrained set of grounds signals seriousness.


Key Takeaways (for litigants in person)

  • The permission stage is the real test in Judicial Review.
  • Grounds must challenge lawfulness, not outcomes.
  • Identify a legal duty — or the claim fails.
  • Merits drift is the most common fatal error.
  • AI is most useful as a:
    • discipline tool,
    • clarity enforcer,
    • consistency checker.
  • Fewer, stronger grounds beat many weak ones.

If you cannot state your grounds in calm, procedural language, Judicial Review is unlikely to succeed.


Preparing for the final stages

If permission is granted, the case moves into:

  • full pleadings,
  • possible disclosure,
  • and substantive hearing.

But many litigants will face:

  • permission refusal,
  • or a conditional grant.

The final article in this series addresses that moment — and how to respond rationally.


Call to Action

If you are:

  • preparing Judicial Review grounds,
  • unsure whether your case has drifted into merits,
  • or worried about permission-stage refusal,

You may wish to seek structured support before issuing proceedings.

Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review claims are subject to strict procedural requirements and judicial discretion.
Improperly framed grounds may result in refusal of permission and adverse costs consequences.

Readers should seek independent legal advice where appropriate.

Drafting a Pre-Action Protocol Letter with AI Support – Applying lawful pressure before Judicial Review proceedings

Judicial Review & AI – Part 5


Introduction: most Judicial Review cases should never be issued

This may sound counterintuitive, but it is true:

A well-drafted Pre-Action Protocol letter is often more powerful than a Judicial Review claim itself.

For litigants in person, the Pre-Action Protocol (PAP) stage is frequently misunderstood. Some see it as a formality. Others treat it as an emotional complaint.

Both approaches are mistakes.

In Judicial Review, the PAP letter is:

  • a legal warning shot,
  • a compliance test,
  • and a credibility filter.

This article explains:

  • what the Pre-Action Protocol is for,
  • what the court expects from it,
  • how AI can assist without undermining trust,
  • and how to draft a PAP letter that actually changes behaviour.

The legal status of the Pre-Action Protocol in Judicial Review

Judicial Review claims are governed by CPR Part 54 and the Pre-Action Protocol for Judicial Review.

Compliance is not optional.

Before issuing proceedings, a claimant is expected to:

  • identify the decision or failure challenged,
  • set out the legal basis of the claim,
  • state the remedy sought,
  • give the proposed defendant a reasonable opportunity to respond.

Failure to comply can result in:

  • refusal of permission,
  • adverse costs consequences,
  • or the court questioning the claimant’s credibility.

For litigants in person, courts will allow some latitude — but not a complete absence of discipline.


What the PAP stage is actually testing

The PAP stage tests four things:

  1. Clarity
    Can you identify the public-law issue precisely?
  2. Legality
    Are you challenging lawfulness, not outcomes?
  3. Proportionality
    Are you seeking a realistic remedy?
  4. Seriousness
    Do you understand the gravity of Judicial Review?

AI can help with all four — if used properly.


What a Judicial Review PAP letter is not

A PAP letter is not:

  • a complaint,
  • a witness statement,
  • a narrative of injustice,
  • a threat-filled ultimatum,
  • a re-argument of the merits.

Letters that read like grievances are often ignored — or responded to defensively.

Judicial Review requires cool precision.


The anatomy of an effective JR Pre-Action Protocol letter

A proper PAP letter has a predictable structure. Courts expect it.

1. Identification of the claimant and proposed defendant

This must be precise.

The letter should clearly identify:

  • who is bringing the claim,
  • which public body is responsible,
  • whether the issue lies with:
    • a court,
    • court administration,
    • or systems operating under HMCTS.

AI can help ensure consistency — but you must choose the correct defendant.


2. The decision or failure being challenged

This is the most important section.

You must state:

  • whether you are challenging:
    • a decision,
    • a refusal,
    • or a failure to act,
  • the date (or period) of that decision or failure,
  • how it arose procedurally.

Vague statements like “my appeal has been ignored” are not sufficient.

AI is useful here to:

  • extract precise dates,
  • strip out emotive language,
  • enforce specificity.

3. The factual background (short and neutral)

This section should:

  • summarise the relevant chronology,
  • refer to documents,
  • avoid argument.

It is not the place for case law or submissions.

AI can help condense longer timelines into a tight factual summary — but it must be reviewed carefully for accuracy.


4. The legal basis of the claim

This is where discipline matters.

You must identify:

  • the public-law ground relied upon:
    • illegality,
    • procedural unfairness,
    • irrationality,
  • and the duty said to have been breached.

You do not need to cite every case.

Over-citation is often counterproductive.

AI can help:

  • ensure the correct ground is identified,
  • prevent drift into merits-based argument,
  • maintain a judicial tone.

5. The remedy sought

This must be realistic and lawful.

Common remedies include:

  • determination of an appeal,
  • reconsideration in accordance with law,
  • provision of reasons,
  • ending an unlawful delay.

You are not asking the court to decide the underlying case.

AI can help test whether the remedy aligns with Judicial Review principles.


6. Timeframe for response

The Protocol suggests 14 days in most cases.

Shorter periods may be justified where:

  • delay is ongoing,
  • rights are being prejudiced.

AI can help flag proportionality risks here.


7. Warning of intended proceedings (without aggression)

The letter should state calmly that:

  • Judicial Review proceedings will be issued if the issue is not resolved,
  • subject to the response received.

Threatening language weakens credibility.
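
To make the seven-part structure testable, here is a minimal Python sketch that checks a draft against the elements above. The heading phrases and the sample draft are invented; adapt them to your own letter.

```python
# The seven structural elements described above, expressed as heading
# phrases we expect (by assumption) to find in a draft PAP letter.
REQUIRED_SECTIONS = [
    "claimant and proposed defendant",
    "decision or failure being challenged",
    "factual background",
    "legal basis of the claim",
    "remedy sought",
    "timeframe for response",
    "intended proceedings",
]

def missing_sections(draft: str) -> list[str]:
    """Return the expected heading phrases not found in the draft."""
    text = draft.lower()
    return [s for s in REQUIRED_SECTIONS if s not in text]

# Invented draft with two sections missing:
draft = """Proposed claim for judicial review
1. Claimant and proposed defendant: ...
2. Decision or failure being challenged: ...
3. Factual background: ...
4. Legal basis of the claim: ...
5. Remedy sought: ..."""

for section in missing_sections(draft):
    print("Missing or unlabelled section:", section)
```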


Tone: why neutrality wins

Judicial Review correspondence is often read by:

  • government lawyers,
  • legal advisers,
  • senior officials.

They are trained to assess risk.

A neutral, legally framed PAP letter signals:

  • seriousness,
  • competence,
  • procedural awareness.

AI can help remove:

  • emotional phrasing,
  • accusatory language,
  • rhetorical flourishes.

This is one of its greatest strengths.


Common PAP mistakes litigants in person make

Judicial Review PAP letters often fail because they:

  • argue the merits,
  • accuse judges of bias,
  • demand apologies or compensation,
  • include excessive attachments,
  • misstate the legal basis,
  • threaten media exposure.

AI can help identify and strip these out — if you let it.


How AI should be used in PAP drafting (properly)

AI should be used to:

  • structure the letter,
  • ensure completeness,
  • check tone consistency,
  • cross-reference facts to evidence,
  • flag missing elements.

AI should not:

  • invent legal duties,
  • escalate tone,
  • add speculative arguments,
  • generate case law without verification.

The final letter must always be human-approved.


What happens after the PAP letter is sent

Three things usually happen:

  1. The issue is resolved
    The appeal is listed, reasons are given, or delay ends.
  2. A reasoned refusal is issued
    This clarifies whether JR is viable.
  3. No adequate response
    This strengthens the JR claim.

AI can assist in analysing the response — but it cannot decide next steps for you.


Why courts care about PAP compliance

At the permission stage, judges often ask:

  • Was the issue raised properly?
  • Was the public body given a chance to respond?
  • Was litigation proportionate?

A good PAP letter answers these questions before they are asked.

A poor one raises doubts immediately.


Key Takeaways (for litigants in person)

  • The Pre-Action Protocol stage is substantive, not a mere formality.
  • Most JR cases should resolve here.
  • A PAP letter must challenge lawfulness, not outcomes.
  • Tone matters as much as content.
  • AI is most valuable for:
    • structure,
    • neutrality,
    • consistency,
    • error prevention.
  • A strong PAP letter often determines the outcome before court.

If you cannot clearly articulate the public-law failure in a PAP letter, Judicial Review is unlikely to succeed.


Preparing for the next step

If the PAP stage does not resolve matters, the next step is:

  • issuing Judicial Review proceedings,
  • drafting Statement of Facts and Grounds,
  • and preparing for the permission stage.

That process is unforgiving.

AI can help — but only if everything so far has been done properly.


Call to Action

If you are considering Judicial Review and want help:

  • drafting a compliant Pre-Action Protocol letter,
  • ensuring your case is framed correctly,
  • or understanding whether proceedings are proportionate,

You may wish to seek structured support before issuing any claim.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review proceedings are governed by strict procedural rules.
Failure to comply with the Pre-Action Protocol may result in refusal of permission or adverse costs consequences.

Readers should obtain independent legal advice where appropriate.

Organising Evidence for Judicial Review with AI – What the Court Expects — and What It Will Not Tolerate

Judicial Review & AI – Part 4


Introduction: evidence is where Judicial Review succeeds or collapses

By the time a Judicial Review claim reaches the court, the law is usually not the problem.

Most claims fail because:

  • evidence is disorganised,
  • assertions are not supported,
  • documents are missing, duplicated, or mislabelled,
  • or the court cannot see — quickly — what matters.

For litigants in person, this stage is often overwhelming. Evidence arrives in dozens (sometimes hundreds) of emails, PDFs, screenshots, portal messages, and letters.

AI can help — dramatically — but only if used with discipline.

This article explains:

  • what evidence the Administrative Court actually expects,
  • how evidence is assessed at the permission stage,
  • how to organise evidence using AI without breaching trust,
  • and the common mistakes that cause otherwise viable claims to fail.

The legal role of evidence in Judicial Review

Judicial Review is decided primarily on:

  • documents, not testimony,
  • procedure, not credibility contests,
  • records, not recollections.

This is reflected in CPR Part 54 and the Practice Directions governing Administrative Court proceedings.

Unlike many other proceedings:

  • witness statements are limited,
  • cross-examination is rare,
  • the court expects evidence to be self-explanatory.

Your evidence bundle must allow the judge to understand the case without detective work.


The permission stage: why evidence clarity matters so much

Most Judicial Review claims fail at the permission stage.

At this point, the judge typically has:

  • limited time,
  • a short bundle,
  • no oral argument.

They are asking:

  1. Is there an arguable public-law case?
  2. Is it properly evidenced?
  3. Is it procedurally clean?

If the evidence is confusing, incomplete, or bloated, permission is often refused — even where issues exist.

AI’s value lies in reducing friction at this stage.


What counts as evidence in Judicial Review

Evidence in Judicial Review usually includes:

  • court orders,
  • appeal notices,
  • acknowledgements,
  • correspondence with the court,
  • procedural emails,
  • automated responses,
  • screenshots of portals,
  • letters before action (if already sent),
  • relevant policy documents (where applicable).

What it does not usually include:

  • opinion,
  • speculation,
  • emotional narrative,
  • extensive witness evidence (unless strictly necessary).

AI must be used to organise, not embellish.


The court’s evidence mindset

The Administrative Court expects evidence to be:

  • Relevant
    Does it prove or disprove a fact that matters?
  • Chronological
    Does it align cleanly with the timeline?
  • Traceable
    Can each assertion be located in a document?
  • Proportionate
    Is unnecessary material excluded?

Courts are particularly alert to over-inclusion, which often signals lack of focus.


Common evidence failures in JR claims (and why they are fatal)

Before looking at AI workflows, it is worth being blunt about recurring problems.

Judicial Review claims often fail because:

  • screenshots are not dated,
  • emails are partial or cropped,
  • documents are duplicated,
  • key letters are missing,
  • evidence is embedded inside narrative statements,
  • bundles are unpaginated or misindexed.

The court will not “piece it together”.

This is not hostility — it is volume and practicality.


Where AI fits into evidence organisation

AI is exceptionally good at:

  • sorting,
  • grouping,
  • deduplicating,
  • indexing,
  • cross-referencing.

It must never:

  • decide relevance for you,
  • remove context without review,
  • alter original documents.

Think of AI as a junior clerk, not a decision-maker.


Step-by-step: organising JR evidence using AI (safely)

Step 1: Evidence ingestion — create a single source of truth

All evidence must be:

  • gathered into one workspace,
  • clearly labelled,
  • preserved in original form.

AI can help detect:

  • duplicates,
  • near-duplicates,
  • inconsistent filenames.

But originals must remain untouched.


Step 2: Categorise evidence by function, not emotion

Evidence should be grouped by role, for example:

  • filing evidence,
  • acknowledgements,
  • responses,
  • non-responses,
  • procedural decisions.

AI can assist by:

  • clustering documents by content,
  • identifying recurring phrases (“acknowledged”, “will be listed”).

This supports clarity — not argument.


Step 3: Anchor every document to the timeline

Each document should be linked to:

  • a specific date,
  • a specific event in the chronology.

AI can cross-check:

  • whether any timeline entry lacks a document,
  • whether any document is unused.

Unused evidence should usually be removed.
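
Here is a minimal Python sketch of that cross-check, assuming each timeline entry records the exhibit it relies on. The data is invented for illustration.

```python
# Each timeline entry names the exhibit that proves it (invented data).
timeline = [
    ("2025-03-12", "Appeal lodged via portal", "EX1"),
    ("2025-03-12", "Acknowledgement email received", "EX2"),
    ("2025-05-01", "Chaser sent, no response", None),  # no document yet
]
exhibits = {"EX1", "EX2", "EX3"}

cited = {ref for _, _, ref in timeline if ref}
for when, event, ref in timeline:
    if ref is None:
        print(f"No supporting document for: {when} {event}")
unused = sorted(exhibits - cited)
if unused:
    print("Unused exhibits (candidates for removal):", ", ".join(unused))
```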


Step 4: Identify what the evidence proves

This is subtle but crucial.

Evidence does not exist to tell a story — it exists to prove facts such as:

  • an appeal was lodged,
  • correspondence was sent,
  • no response was received,
  • time elapsed.

AI can help summarise what each document demonstrates — but the summary must be verified.


Step 5: Create an evidence index the court can scan in minutes

A proper JR evidence index includes:

  • exhibit number,
  • date,
  • short neutral description,
  • page reference.

AI excels here:

  • generating draft indices,
  • checking numbering,
  • ensuring consistency.

The final index, however, must be human-approved.
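
For illustration, a minimal Python sketch that drafts an index from an invented exhibit list and checks that exhibit numbers run consecutively.

```python
# Invented exhibit records: (exhibit number, date, description, page)
exhibits = [
    (1, "2025-03-12", "Appeal notice lodged via portal", 1),
    (2, "2025-03-12", "Court acknowledgement email", 4),
    (4, "2025-04-02", "Chaser letter to listing office", 6),
]

# Draft index, one line per exhibit
for num, when, desc, page in exhibits:
    print(f"EX{num:<3} {when}  {desc:<40} p.{page}")

# Numbering check: flag gaps or disorder in the exhibit sequence
numbers = [num for num, *_ in exhibits]
expected = list(range(numbers[0], numbers[0] + len(numbers)))
if numbers != expected:
    print("Numbering gap or disorder detected:", numbers)
```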


Step 6: Reduce — then reduce again

This is where discipline matters.

Courts prefer:

  • fewer documents,
  • clearly relevant,
  • cleanly indexed.

AI can help flag:

  • repetitive correspondence,
  • documents that add nothing new.

Removing material is often the hardest — and most important — step.


Evidence of silence: how to prove “nothing happened”

Silence is central to many JR claims — and difficult to evidence.

Courts expect:

  • proof of what did happen,
  • followed by demonstrable gaps.

AI helps by:

  • calculating time between events,
  • showing unanswered chasers,
  • mapping inactivity periods.

What you must not do:

  • assert silence without showing the surrounding activity.

Absence must be structurally visible.


Targeting the correct public body through evidence

Evidence should make clear whether:

  • the issue lies with a judge,
  • court administration,
  • listing processes,
  • or systems operated under HMCTS.

This matters because:

  • Judicial Review must be directed at the correct defendant,
  • misidentification leads to refusal.

AI can help trace patterns of response and responsibility.


What judges look for in JR evidence bundles

Judges assessing permission typically ask:

  • Can I see what happened quickly?
  • Are the documents reliable?
  • Is the bundle proportionate?
  • Does the evidence support the alleged failure?

A clean bundle signals seriousness and credibility.

A chaotic one signals risk.


What AI must not be used to do with evidence

AI must not:

  • alter documents,
  • “clean up” screenshots,
  • infer missing content,
  • summarise without verification,
  • replace originals with generated text.

Any hint of document manipulation can destroy trust instantly.


Key Takeaways (for litigants in person)

  • Judicial Review is document-driven.
  • Evidence must be relevant, chronological, and proportionate.
  • Silence is proved through structure, not assertion.
  • AI is best used for:
    • sorting,
    • indexing,
    • consistency checking,
    • gap detection.
  • Every document must earn its place in the bundle.
  • Courts will not fix evidence problems for you.

A strong evidence bundle often determines permission before law is considered.


Preparing for the next stage

Once evidence is organised, you are ready for:

  • formal engagement with the public body,
  • the Pre-Action Protocol stage.

This is where many Judicial Review cases resolve — without issuing proceedings.


Call to Action

If you are:

  • overwhelmed by court correspondence,
  • unsure what evidence matters,
  • or concerned about preparing a JR-ready bundle,

You may wish to seek structured support before taking further steps.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review proceedings are governed by strict procedural rules.
Improperly organised evidence may result in refusal of permission or adverse costs consequences.

Readers should seek independent legal advice where appropriate.

Building a Judicial Review Timeline Using AI – Without losing accuracy, credibility, or the court’s trust

Judicial Review & AI – Part 3


Introduction: why timelines decide Judicial Review cases

In Judicial Review, chronology is not background material.

It is the case.

Before the court considers:

  • grounds,
  • unlawfulness,
  • remedies,

it asks a far more basic question:

What actually happened — and when?

For litigants in person, this is often the hardest part. Court processes generate:

  • fragmented emails,
  • automated notices,
  • partial acknowledgements,
  • long silences,
  • overlapping procedures.

AI can help enormously — but only if used with discipline.

This article explains:

  • why timelines are decisive in Judicial Review,
  • what a JR-ready chronology looks like,
  • how to use AI to build one without introducing error,
  • and how courts assess credibility through structure.

Why Judicial Review timelines are different from ordinary case histories

In most litigation, timelines support argument.

In Judicial Review, timelines establish unlawfulness.

They are used to show:

  • a failure to act,
  • an unreasonable delay,
  • a procedural breach,
  • or a decision taken (or avoided) at a specific moment.

The Administrative Court does not tolerate:

  • vagueness,
  • reconstructed guesswork,
  • emotional narrative.

It expects forensic precision.

That expectation applies equally to litigants in person.


The legal role of chronology in Judicial Review

Under CPR Part 54, claimants must file:

  • a Statement of Facts and Grounds, and
  • evidence supporting those facts.

Facts come first.
Law comes second.

Courts repeatedly emphasise that:

  • arguments cannot float free of dates,
  • unlawfulness must be anchored in time,
  • delay must be measurable, not rhetorical.

A Judicial Review without a clear timeline is usually refused at the permission stage.


Common chronology errors that sink JR claims

Before we look at AI, it is important to understand what not to do.

Courts routinely reject claims where:

  • dates are inconsistent,
  • events are out of sequence,
  • filings are assumed rather than proven,
  • silence is alleged without evidence,
  • timelines mix facts with argument.

A chronology is not:

  • a witness statement,
  • a complaint letter,
  • a narrative of injustice.

It is a neutral factual map.


What a JR-ready timeline actually looks like

A proper Judicial Review timeline has five characteristics:

1. Strict chronology

Events are ordered by date, not importance.

2. Documentary anchoring

Every entry can be traced to evidence.

3. Procedural clarity

Each step is linked to a rule, duty, or process.

4. Neutral language

No argument, no emotion, no speculation.

5. Gap visibility

Silence and delay are shown by absence, not assertion.

AI is excellent at supporting these — if controlled correctly.


Where AI adds real value (and where it doesn’t)

AI is most effective before drafting begins.

At this stage, AI is a:

  • sorting engine,
  • pattern detector,
  • consistency checker.

It is not a fact-creator.


Step-by-step: building a Judicial Review timeline using AI

Step 1: Gather everything (before analysis)

Before using AI at all, you must gather:

  • appeal notices,
  • acknowledgements,
  • emails,
  • court orders,
  • automated responses,
  • postal records,
  • screenshots of portals,
  • chasing correspondence.

If it isn’t documented, it doesn’t exist for JR purposes.

AI cannot rescue missing evidence.


Step 2: Convert documents into machine-readable text

AI works best when documents are:

  • OCR-converted,
  • clearly labelled,
  • date-stamped.

At this stage, AI can assist with:

  • extracting dates,
  • identifying senders,
  • detecting references to procedures.

However, you must manually verify every extracted date.

OCR errors are common — and fatal if unchecked.
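
Here is a minimal Python sketch of regex-based date extraction from OCR text. The pattern is an assumption that catches only one common date format, and the example deliberately shows why manual verification matters.

```python
import re

# Matches dates like "12 March 2025". This catches only one common
# format; OCR output varies, so treat every match as provisional.
DATE_PATTERN = re.compile(
    r"\b(\d{1,2})\s+"
    r"(January|February|March|April|May|June|July|"
    r"August|September|October|November|December)\s+"
    r"(\d{4})\b"
)

ocr_text = ("The appeal was lodged on 12 March 2025 and "
            "acknowledged on 14 Narch 2025.")

for match in DATE_PATTERN.finditer(ocr_text):
    print("Found date (verify against the original):", match.group(0))

# Note: "14 Narch 2025" is an OCR misread and is silently missed,
# which is exactly why every date must be checked by hand.
```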


Step 3: Create a neutral event list (no interpretation)

This is the most important discipline.

Each timeline entry should follow a simple structure:

  • Date
  • Actor (e.g. appellant, court, listing office)
  • Action
  • Document reference

Example (neutral):

12 March 2025 – Appeal lodged by claimant via online portal. Acknowledgement email received same day.

Not:

The court ignored my appeal.

AI can help strip out loaded language and enforce neutrality.
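
The entry structure above maps naturally onto a simple record format. A minimal Python sketch follows, with illustrative field names.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TimelineEntry:
    when: date
    actor: str    # e.g. "claimant", "court", "listing office"
    action: str   # stated neutrally, no argument or emotion
    doc_ref: str  # exhibit reference the entry can be traced to

entry = TimelineEntry(
    when=date(2025, 3, 12),
    actor="claimant",
    action="Appeal lodged via online portal; acknowledgement received same day",
    doc_ref="EX1",
)
print(f"{entry.when.isoformat()} – {entry.actor}: {entry.action} [{entry.doc_ref}]")
```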


Step 4: Separate facts from legal significance

At this stage, do not label anything as unlawful.

AI can help you create two parallel views:

  • a pure factual chronology, and
  • a working analysis layer (for your eyes only).

Courts must see only the first.

This separation is critical.


Step 5: Identify silence and delay structurally

Silence is not a single event.

It is a gap between events.

AI can help calculate:

  • elapsed time between steps,
  • number of chasers sent,
  • periods of complete inactivity.

This is where patterns emerge — and where many litigants realise:

  • delay is shorter than they thought, or
  • longer — and more serious.

Both outcomes are valuable.
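
Here is a minimal Python sketch of structural gap detection over a verified, dated event list. The events are invented, and the 28-day threshold is an arbitrary review trigger, not a legal test.

```python
from datetime import date

# Invented event dates, already verified and sorted chronologically
events = [
    (date(2025, 3, 12), "Appeal lodged"),
    (date(2025, 3, 12), "Acknowledgement received"),
    (date(2025, 5, 1), "Chaser sent"),
    (date(2025, 7, 20), "Second chaser sent"),
]

THRESHOLD_DAYS = 28  # arbitrary review threshold, not a legal rule

for (d1, e1), (d2, e2) in zip(events, events[1:]):
    gap = (d2 - d1).days
    if gap >= THRESHOLD_DAYS:
        print(f"{gap}-day gap between '{e1}' ({d1}) and '{e2}' ({d2})")
```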


Step 6: Link events to procedural expectations

Once the factual timeline exists, AI can assist you in mapping:

  • procedural rules,
  • expected next steps,
  • legal duties.

For example:

  • Was acknowledgement required?
  • Was listing discretionary?
  • Was a decision required within a reasonable time?

This is analysis — not evidence — and should remain separate.


Step 7: Identify the moment of failure

Judicial Review usually crystallises around a specific point:

  • a refusal,
  • a deadline missed,
  • a failure to respond after repeated engagement.

AI can help test different candidates:

  • Is the claim premature?
  • Has the duty actually arisen yet?
  • Has time started to run?

This prevents issuing JR too early or too late.


Who is the timeline for?

Your JR timeline serves three audiences:

  1. You
    To understand whether you actually have a public-law issue.
  2. The court
    To assess permission quickly and confidently.
  3. The defendant public body
    Particularly during the Pre-Action Protocol stage.

AI helps align all three.


Targeting the correct public authority

A frequent JR failure is naming the wrong defendant.

Your timeline should make clear whether the issue lies with:

  • a judge’s decision,
  • court administration,
  • listing systems,
  • or processes operated under HMCTS.

AI can help detect where actions (or inaction) originate — but you must decide the legal target.


The court’s perspective: what judges look for

When judges review JR chronologies, they ask:

  • Are dates consistent?
  • Are events evidenced?
  • Is delay objectively shown?
  • Is the claim focused or sprawling?

A clean timeline:

  • builds trust,
  • shortens hearings,
  • increases permission prospects.

A messy one undermines credibility immediately.


What AI must not be used to do at this stage

AI must not:

  • infer facts not in evidence,
  • assume reasons for silence,
  • compress time inaccurately,
  • replace human verification.

The fastest way to lose the court’s confidence is to present a timeline that collapses under basic scrutiny.


Key Takeaways (for litigants in person)

  • In Judicial Review, chronology is the case.
  • Timelines must be neutral, evidenced, and precise.
  • Silence is shown through gaps, not complaints.
  • AI is best used as:
    • a sorting tool,
    • a gap detector,
    • a consistency checker.
  • Every date must be manually verified.
  • A strong timeline often reveals whether JR is viable before you issue.

If your timeline does not clearly show what duty arose, when, and how it was breached, Judicial Review will fail.


How this prepares you for the next step

Once a Judicial Review-ready timeline exists, you can:

  • organise evidence properly,
  • prepare a Pre-Action Protocol letter,
  • apply pressure without issuing proceedings.

That is where AI’s organisational strengths really come into play.


Call to Action

If you are struggling to:

  • organise complex court correspondence,
  • identify whether delay is legally significant,
  • or build a clean Judicial Review chronology,

You may wish to seek structured assistance before taking further steps.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review is subject to strict procedural rules and time limits.
Chronology errors can be fatal to claims.

Readers should seek independent legal advice where appropriate before issuing proceedings.