This section provides practical tools to support preparation and organisation in family court proceedings. It includes templates, checklists, starter packs, and tracking resources designed for litigants in person.

Content in this category is intended to help users structure their case materials, meet procedural requirements, and reduce errors by using clear, repeatable tools aligned with family court expectations.

Family Court Chronology Templates (UK Guide for Litigants in Person)

In Family Court, clarity often determines credibility. Judges must understand complex histories quickly — patterns of conflict, safeguarding concerns, missed contact, financial movements, and escalation over time. A well-structured chronology transforms scattered documents into a coherent timeline. For litigants in person, mastering chronology drafting is one of the most powerful procedural tools available. This guide explains what a family court chronology is, how it should be structured, the drafting standards expected by the court, and provides practical templates you can use immediately.


Key Takeaways

  • A chronology is not a story — it is a structured, date-ordered record of significant events.
  • Judges rely on chronologies to understand patterns, risk, escalation and context quickly.
  • For court filing, recent events should usually appear first (reverse chronological order).
  • Each entry should contain: Date, Event, and Evidence Reference as a minimum.
  • Chronologies must be factual, concise, and cross-checked against documentary evidence.
  • Different cases require different chronologies: core, issue-based, safeguarding, and financial disclosure.

Introduction: Why Chronologies Matter in Family Court

In Family Court, clarity is power.

Judges read hundreds of pages in limited time. They are required to identify patterns, assess risk, apply statutory tests, and make decisions affecting children and families — often under intense time pressure.

A well-drafted chronology can become the backbone of judicial understanding.

A poorly drafted chronology can undermine credibility, obscure risk, or create confusion.

This guide explains:

  • What a chronology is (and is not)
  • The minimum drafting standards
  • How to structure different types of chronologies
  • Best practice for accuracy and updating
  • Four ready-to-use templates aligned with UK family proceedings

What Is a Family Court Chronology?

A chronology is a succinct, date-ordered record of significant events in a child’s or family’s life. It is an analytical tool — not a narrative statement.

It should:

  • Identify significant dates
  • Describe events factually
  • Cross-reference documentary evidence
  • Enable rapid extraction of key facts
  • Highlight patterns or escalation

It should not:

  • Contain argument
  • Contain emotional commentary
  • Duplicate entire witness statements
  • Include irrelevant minor incidents

Core Drafting Principles

1. Minimum Required Fields

At a minimum, every entry should contain:

  • Date
  • Event Description (concise and factual)
  • Evidence / Bundle Reference

Optional but often useful additions:

  • Issue relevance
  • Impact on child
  • Multi-agency source (police, GP, school, CAFCASS)

2. Ordering

  • For court filing: Most recent events first (reverse chronological order).
  • For running case management: Oldest events first (system chronology).
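The two orderings can be sketched in a few lines of Python, assuming each entry is held as a simple (date, description, reference) record; the sample rows below are invented for illustration:

```python
from datetime import date

# Each chronology entry holds the three minimum fields:
# date, factual event description, and evidence/bundle reference.
entries = [
    (date(2023, 3, 15), "Police attended family home.", "Bundle p.67"),
    (date(2023, 6, 1), "Child commenced counselling.", "Bundle p.112"),
    (date(2022, 2, 12), "Alleged incident witnessed by child.", "Bundle p.145"),
]

# Running case-management chronology: oldest events first.
system_order = sorted(entries, key=lambda e: e[0])

# Court-filing chronology: most recent events first (reverse chronological).
filing_order = sorted(entries, key=lambda e: e[0], reverse=True)

for d, event, ref in filing_order:
    print(f"{d:%d/%m/%Y}  {event}  [{ref}]")
```

A spreadsheet achieves the same result; the point is that one source list can produce both orderings without retyping any entry.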

3. Tone

Use neutral, factual language. For example:

Not: “The father violently attacked me.”
Instead: “Police attended address following alleged assault by father. Crime reference no. XXXX. No charges brought.”

The evidence speaks for itself.


Template 1: Core Chronology (Date / Event / Evidence Reference)

This is the foundational structure suitable for most private law children cases.

| Date | Event Description | Evidence / Bundle Reference | Relevance (Optional) |
| --- | --- | --- | --- |
| 15/03/2023 | Police attended family home following reported verbal altercation. | Police log ref 12345 (Bundle p.67) | Safeguarding concern |
| 01/06/2023 | Child commenced counselling following GP referral. | GP letter dated 28/05/2023 (Bundle p.112) | Emotional impact |

Drafting Note: Keep entries short — ideally one to three lines.


Template 2: Issue-Based Chronology

Where proceedings involve multiple disputed themes (e.g., domestic abuse, non-compliance, relocation, schooling), a grouped chronology can improve clarity.

Structure:

Issue 1: Alleged Domestic Abuse

| Date | Event | Evidence Reference |
| --- | --- | --- |
| 12/02/2022 | Alleged pushing incident witnessed by child. | Witness Statement para 23; School note p.145 |

Issue 2: Missed Contact

| Date | Event | Evidence Reference |
| --- | --- | --- |
| 03/09/2023 | Contact did not take place; father cancelled by text 30 minutes beforehand. | WhatsApp screenshot p.210 |

This structure helps the judge see patterns within specific disputes.


Template 3: Safeguarding-Focused Timeline

This is used where there are allegations of domestic abuse, neglect, coercive control or child risk factors.

| Date | Incident | Child Impact | Agency Involvement | Evidence Ref |
| --- | --- | --- | --- | --- |
| 10/11/2021 | Alleged verbal abuse during exchange. | Child tearful; reported fear. | School informed next day. | Email p.178 |

This template helps align your chronology with safeguarding frameworks and PD12J considerations.


Template 4: Financial Disclosure Timeline

In financial remedy proceedings, a chronology helps identify asset acquisition, disposal, non-disclosure and other significant financial decisions.

| Date | Financial Event | Amount / Asset | Evidence Ref |
| --- | --- | --- | --- |
| 04/05/2020 | Transfer from joint savings account | £18,000 | Bank statement p.302 |

Financial chronologies are particularly useful in contested Form E cases.


Multi-Agency Cross-Referencing

Where appropriate, cross-check chronology entries against:

  • Police logs
  • GP records
  • School reports
  • CAFCASS safeguarding letters
  • Social services assessments

Accuracy builds credibility.


Updating and Maintenance

A chronology should be treated as a running record throughout proceedings.

  • Update after each hearing.
  • Update after significant incidents.
  • Review monthly in ongoing cases.
  • Ensure bundle page references remain accurate after pagination.
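The pagination point can be partly mechanised. A minimal sketch, assuming the re-paginated bundle shifted every page from a known point by one fixed offset (the function name and the offsets are hypothetical; a real re-pagination may need a page-by-page mapping):

```python
import re

def update_page_refs(text: str, shift_from: int, offset: int) -> str:
    """Rewrite 'p.NNN' bundle references at or after shift_from by offset.

    Hypothetical helper: assumes the re-pagination moved every page from
    shift_from onward by a single fixed offset.
    """
    def bump(match: re.Match) -> str:
        page = int(match.group(1))
        if page >= shift_from:
            page += offset
        return f"p.{page}"
    return re.sub(r"p\.(\d+)", bump, text)

entry = "GP letter dated 28/05/2023 (Bundle p.112)"
# Example: everything from p.100 onward shifted forward by 4 pages.
print(update_page_refs(entry, shift_from=100, offset=4))
# prints: GP letter dated 28/05/2023 (Bundle p.116)
```

However references are updated, spot-check a sample of entries against the new bundle before filing.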

Common Mistakes to Avoid

  • Writing essays instead of entries.
  • Failing to reference evidence.
  • Using inflammatory language.
  • Listing trivial disputes.
  • Forgetting to update page references after bundle revisions.

Using Chronologies Strategically

A chronology is not just administrative.

It can:

  • Reveal patterns of escalation.
  • Highlight non-compliance.
  • Demonstrate consistency.
  • Identify gaps in evidence.
  • Support applications for fact-finding hearings.

Used correctly, it sharpens your advocacy.


Conclusion

Chronologies are often the backbone of judicial understanding.

When structured properly — factual, concise, cross-referenced and regularly updated — they crystallise the issues before the court.

Litigants in person who master chronology drafting gain procedural confidence and strategic clarity.


Book a 15-Minute Consultation

If you would like assistance structuring your chronology or preparing it for filing, you can book a short consultation.


Regulatory & Editorial Notice

This article is provided for general information only and does not constitute legal advice. Every case depends on its own facts and procedural history.

JSH Law provides litigation support services to litigants in person. JSH Law is not a firm of solicitors and does not undertake reserved legal activities.

Family Court Tools, Templates & Research Support for Litigants in Person (UK Guide)

Family Court is not won by emotion or volume — it is navigated through structure. For litigants in person, the absence of formal legal representation does not mean the absence of strategy. The right tools, templates and targeted legal research can transform overwhelm into clarity. From chronologies and witness statement frameworks to safeguarding checklists and case law summaries, structured preparation enables you to focus on what the court must actually decide. This guide explains what practical tools are available, how they support compliance with the Family Procedure Rules 2010, and how disciplined preparation strengthens credibility and confidence throughout proceedings.


Key Takeaways for Litigants in Person

  • Structure wins cases — not volume. The right template can transform the clarity of your case.
  • Checklists prevent missed deadlines and procedural mistakes.
  • Targeted legal research strengthens credibility and focus.
  • Understanding leading cases helps you frame arguments correctly.
  • Evidence mapping and chronology tools reduce overwhelm.
  • Professional templates should align with the Family Procedure Rules 2010 and safeguarding guidance.

Introduction: Structure Creates Confidence

Family Court can feel chaotic. Emotions run high. Documents multiply. Deadlines approach quickly. For litigants in person, the greatest disadvantage is rarely intelligence or commitment — it is the lack of structural clarity.

Tools, templates and structured research change that dynamic.

This category is designed to provide practical frameworks: checklists, drafting guides, evidence tools and case summaries that help you approach proceedings methodically rather than reactively.

Templates are not shortcuts. They are scaffolding. They allow you to focus on substance rather than formatting.


Why Tools and Templates Matter in Family Proceedings

Family Court is governed by the Family Procedure Rules 2010. Judges expect compliance, proportionality and clarity.

Common problems for litigants in person include:

  • Overlong witness statements
  • Disorganised evidence
  • Missed directions
  • Emotion-led drafting
  • Failure to align arguments with legal tests

Templates and structured tools reduce these risks.


What We Provide: Practical Tools for Family Court

1. Chronology Templates

  • Date / Event / Evidence Reference structure
  • Issue-based chronologies
  • Safeguarding-focused timelines
  • Financial disclosure timelines

Chronologies are often the backbone of judicial understanding.

2. Witness Statement Frameworks

  • Clear heading structure
  • Issue-by-issue response format
  • Exhibit referencing guidance
  • PD12J safeguarding alignment (where relevant)

3. Position Statement Templates

  • Orders sought
  • Issues in dispute
  • Key evidence references
  • Welfare checklist alignment

4. Evidence Mapping Tools

  • Allegation → Evidence → Legal relevance table
  • Bundle page reference trackers
  • Cross-examination preparation sheets
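The allegation-to-evidence table above can be kept as structured data rather than free text; a minimal sketch with invented rows, assuming each row links one allegation to one evidence reference and its legal relevance:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRow:
    allegation: str
    evidence: str         # bundle page or paragraph reference
    legal_relevance: str  # which legal test the evidence goes to

rows = [
    EvidenceRow("Contact missed on 03/09/2023",
                "WhatsApp screenshot p.210",
                "Non-compliance with child arrangements order"),
    EvidenceRow("Alleged pushing incident 12/02/2022",
                "Witness statement para 23; school note p.145",
                "PD12J harm analysis"),
]

# Flag any allegation left without an evidence reference.
unsupported = [r.allegation for r in rows if not r.evidence.strip()]
print(f"{len(rows)} allegations mapped; {len(unsupported)} unsupported")
```

A table like this makes gaps visible early: an allegation with no evidence reference is a gap to address before a hearing, not during it.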

5. Hearing Preparation Checklists

  • FHDRA checklist
  • Fact-finding preparation sheet
  • Final hearing readiness audit
  • Remote hearing technical checklist

6. Disclosure & Financial Remedy Tools

  • Form E preparation checklist
  • Section 25 factor analysis sheet
  • Asset tracking template
  • Schedule of assets summary format

7. Safeguarding & Domestic Abuse Templates

  • Scott Schedule drafting guide
  • PD12J compliance checklist
  • Child impact analysis worksheet
  • Contact risk assessment structure

Research Support: Understanding the Law Behind Your Case

Templates provide structure. Research provides authority.

We assist litigants in understanding:

  • The Children Act 1989
  • Welfare checklist application
  • Practice Direction 12J (Domestic Abuse)
  • Practice Direction 27A (Bundles)
  • Case management principles
  • Financial remedy factors under s.25 MCA 1973

Research should answer one question: how does this authority support or limit your argument?


Understanding Key Case Law

Many litigants refer to “case law” without understanding what is binding and what is persuasive.

We help interpret leading authorities relevant to:

  • Parental alienation claims
  • Domestic abuse fact-finding
  • Relocation applications
  • Enforcement of child arrangements
  • Financial non-disclosure

Understanding precedent ensures arguments are framed correctly.


AI-Assisted Organisation Tools

Modern litigation benefits from technology.

  • Document indexing automation
  • Timeline extraction from message logs
  • Pattern analysis in communications
  • Bundle structuring guidance

Technology does not replace judgment — it enhances organisation.
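Timeline extraction from message logs can be illustrated in a few lines; a sketch assuming the common WhatsApp export layout of `DD/MM/YYYY, HH:MM - Sender: message` (the messages below are invented, and real exports vary by device):

```python
import re
from datetime import datetime

# Assumed export layout: "DD/MM/YYYY, HH:MM - Sender: message"
LINE = re.compile(r"^(\d{2}/\d{2}/\d{4}), (\d{2}:\d{2}) - ([^:]+): (.*)$")

def extract_timeline(raw: str):
    """Return (timestamp, sender, message) tuples, oldest first."""
    events = []
    for line in raw.splitlines():
        m = LINE.match(line.strip())
        if m:
            stamp = datetime.strptime(f"{m.group(1)} {m.group(2)}",
                                      "%d/%m/%Y %H:%M")
            events.append((stamp, m.group(3), m.group(4)))
    return sorted(events)

sample = """03/09/2023, 08:30 - Father: Cannot make contact today.
03/09/2023, 09:00 - Mother: Noted. Child was ready at 8am."""

for stamp, sender, msg in extract_timeline(sample):
    print(f"{stamp:%d/%m/%Y %H:%M}  {sender}: {msg}")
```

The extracted tuples can then feed a chronology table directly, with each message verified against the original export before anything is filed.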


Templates We Commonly Draft

  • Pre-hearing email to court
  • Application covering letters
  • Chronology summaries
  • Position statements
  • Fact-finding issue schedules
  • Costs schedules (where applicable)
  • Appeal notice guidance (procedural support)

Common Mistakes Templates Help Prevent

  • Repetition instead of relevance
  • Emotional narrative without evidence
  • Failure to link evidence to legal test
  • Procedural non-compliance
  • Overloading bundles

Templates enforce discipline.


How Research Strengthens Credibility

Judges respond to structured argument anchored in authority.

For example:

  • Aligning submissions with the welfare checklist
  • Identifying risk analysis principles in safeguarding cases
  • Understanding proportionality in contact disputes

Legal authority is not decoration — it is foundation.


Checklists That Reduce Anxiety

Many litigants experience procedural anxiety. Checklists reduce uncertainty:

  • What must I file?
  • By when?
  • In what format?
  • With what attachments?

Preparedness creates confidence.


Case Understanding Support

We help litigants understand:

  • What type of hearing they are attending
  • What the judge is deciding
  • What evidence is relevant
  • What realistic outcomes look like

Clarity prevents unrealistic expectations.


Why This Category Exists

Access to justice depends on practical empowerment.

Legal information alone is insufficient.

Litigants need tools — not just explanations.


How JSH Law Approaches Tools & Templates

  • Aligned to current procedural rules
  • Safeguarding aware
  • Proportionate and focused
  • Structured for clarity
  • Designed for litigants in person

Templates should not inflame conflict. They should improve precision.


Book a 15-Minute Consultation

If you need structured tools or research support tailored to your case, you can book a short consultation.




Regulatory & Editorial Notice

This article is provided for general information only and does not constitute legal advice. Every case depends on its own facts and procedural history.

JSH Law provides litigation support services to litigants in person. JSH Law is not a firm of solicitors and does not undertake reserved legal activities.

The “Vibe Lawyer” Moment: AI, Litigants in Person, and the Coming Shockwave for the Family Courts

Litigants in person are being called “vibe lawyers” for using AI to draft complaints and court documents. But behind the headlines lies a harder truth: people are turning to artificial intelligence because they cannot afford representation in an increasingly complex and overstretched justice system. Judges are right to be concerned about fake citations and procedural errors. Yet dismissing AI use outright misses the deeper issue — access to justice has been under strain for years, and technology is now filling the gap.


By Jessica Susan Hill | JSH Law

Key Takeaways (Read This First)

  • AI is already changing litigation behaviour — the judiciary is explicitly preparing for a surge in AI-generated claims across civil, family and tribunals.
  • The risk isn’t “AI” — it’s unverified AI: fabricated authorities and confidently wrong submissions waste court time and damage credibility.
  • LiPs are not “wreaking havoc” for fun. Many are doing what they must to participate in a system they cannot afford to navigate with representation.
  • The solution is guardrails, not barriers: verification standards, procedural literacy, and responsible workflows that help the court as well as the litigant.
  • Family proceedings are high-stakes. Used properly, AI can improve clarity and evidence organisation; used badly, it can derail safeguarding analysis and case management.

1. Why this matters now

“Vibe lawyers” is a catchy label, but it risks obscuring a far more serious reality: litigants in person are using AI tools to draft complaints, defences, witness statements and skeleton arguments at scale — and the courts are already feeling the impact. The phenomenon is now so visible that Sir Geoffrey Vos (Master of the Rolls, Head of Civil Justice) has explicitly warned that the judiciary must prepare for an “AI revolution” that may vastly increase the number of civil, family and tribunal claims the justice system must manage. His speech is worth reading in full.

Let’s be direct: the justice system in England and Wales is already stretched. Many court users already experience the process as opaque, intimidating and unaffordable. That is not a personal failing of litigants — it is a structural reality. AI is entering a pressure-cooker and magnifying what was already there: information asymmetry, procedural complexity, delay and the gulf between a represented party and an unrepresented one.

So, yes — judges and practitioners are right to be concerned about inaccurate AI-generated material clogging lists and adding burden to judges who are already firefighting. But it is also true that, in the medium term, AI could become one of the most significant access-to-justice tools we have ever seen. Both truths can exist at once.

2. The judiciary is not guessing — it is responding to lived reality

We are past the point of theoretical debate. The judiciary has been issuing speeches and guidance precisely because AI use is now operationally relevant. Beyond speeches, the Judicial Office has published updated guidance addressing risks including confidentiality, bias and “hallucinations” — where AI produces plausible but incorrect information. The October 2025 judicial guidance explicitly flags the danger of fictitious citations and misleading legal content.

Sir Geoffrey Vos has also repeatedly articulated a simple “core rules” approach: understand what the tool is doing, do not upload private/confidential data into public tools, and check the output before using it for any purpose. He set that out again in October 2025.

This is not anti-technology. It is the judiciary doing what it should do: protecting the integrity of the process while acknowledging that new tools are changing behaviour.

3. The real problem: “confidently wrong” submissions

Generative AI tools can draft impressive text quickly. But they do not “know” the law. They predict language. That difference matters profoundly in litigation. A well-written paragraph that contains an invented case, a misquoted statute or an inaccurate procedural route is not merely unhelpful — it can actively undermine a party’s credibility and force the court to spend additional time cleaning up the mess.

The legal profession has already seen what happens when verification fails. In June 2025, the Divisional Court (Dame Victoria Sharp P and Johnson J) dealt with the now widely reported “fake authorities” problem in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank, where false citations and inaccurate quotations were placed before the court, with suspected or admitted use of AI tools without proper checks. The judgment is publicly available and is required reading for anyone tempted to treat AI output as “good enough”.

Importantly, that judgment is aimed at lawyers — because professionals are held to professional standards. But the underlying point applies to everyone: accuracy is non-negotiable in court work. You can be passionate, traumatised, exhausted, and still required to file documents that are factually and legally sound.

4. Why litigants in person are using AI (and why the “money pit” narrative is wrong)

Many litigants in person feel they are treated as an administrative inconvenience — or worse, as a “cost centre” rather than a rights-holder. I understand why that perception forms. The system can be brutal: forms, deadlines, practice directions, directions hearings, orders you must interpret and comply with under stress. In private law children proceedings, you may be trying to protect a child, manage safeguarding concerns, and preserve your own mental stability while preparing documents that lawyers train for years to produce.

For a growing number of people, AI has become the first accessible “translator” of legal language. It can explain terminology, propose a structure for a statement, generate headings for a skeleton argument, and help a person who feels overwhelmed take a first step. That is why it feels like a shake-up. It is not because LiPs are trying to harm the system. It is because they are trying to participate in it.

And here is the hard truth: if access to representation continues to shrink in practice — whether by cost, availability, or scope — more people will use AI. That is not something a press headline can reverse. It is a reality the system must incorporate.

5. Family court is the pressure point

Family proceedings are where AI misuse can become most dangerous, because the stakes are often immediate and human: the child’s living arrangements, contact, safeguarding, allegations of domestic abuse, coercive control, substance misuse, mental health, relocation, schooling — the list is endless.

Private law children cases are ultimately governed by the welfare principle in the Children Act 1989, section 1. The court’s job is not to reward the best writer. It is to determine what best meets the child’s welfare needs. But poor drafting can still distort the court’s understanding of what matters.

And family procedure is its own ecosystem. The Family Procedure Rules and associated Practice Directions are not optional reading; they are the architecture of how your case moves through the system. PD12J (domestic abuse and harm) is particularly critical where abuse is alleged, because it shapes fact-finding decisions, safeguarding analysis and protective measures.

Where AI is used badly in family court, I commonly see the same patterns (and judges see them too):

  • Misstating legal tests (e.g., confusing civil and criminal standards, or quoting the wrong threshold framework).
  • Over-inclusion: 30-page narratives where only a small percentage is evidentially relevant.
  • Inflammatory language that escalates conflict rather than centring the child.
  • Procedural fantasy: “applications” and “orders” that do not exist or are not procedurally available.
  • Fake authority: citations that sound real but are not verifiable.

Those problems do not just “waste time”. They can change outcomes. They can harden judicial perceptions. They can reduce a litigant’s credibility. And in safeguarding contexts, credibility matters.

6. But here is the opportunity: structured AI use can help the court

Now for the other side of the ledger, which the “vibe lawyer” framing often ignores.

Used properly, AI can reduce noise and increase clarity. It can help an overwhelmed litigant present their case in a way that judges can actually work with. It can support:

  • Chronology building (dates, events, orders, and key turning points).
  • Document organisation (indexes, exhibit lists, consistent naming).
  • Issue framing (what is the dispute actually about?).
  • Drafting clarity (headings, structure, neutral tone).
  • Summarising communications (WhatsApp/SMS/email) into court-usable bundles.

Those are not cosmetic benefits. They are directly aligned with what the court needs: efficient case management, focused evidence, and parties who can articulate relevant issues.

In other words: the best version of AI in litigation is not “AI replaces lawyers.” It is “AI helps people present usable material so the court can do its job.” That is the access-to-justice promise.

7. The non-negotiable: verification

The line between empowerment and chaos is verification.

Professional regulators have been clear that AI cannot be trusted to judge its own accuracy. The SRA has warned about hallucinations and the risk of plausible but incorrect outputs, including non-existent cases.

For court users, this translates into a simple operating standard:

  • If you cite it, you must be able to prove it exists (case name, neutral citation, and a reliable source).
  • If you quote a statute, check it on legislation.gov.uk (not in an AI chat box).
  • If you refer to rules or practice directions, check the official source (FPR/CPR/PD pages).
  • If it sounds “too perfect”, slow down — AI is very good at confidence, not always good at truth.
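As a first-pass screen — not a substitute for opening the judgment itself — you can flag cited authorities that lack a recognisable England and Wales neutral citation (the pattern `[YYYY] COURT number`). A minimal sketch with an illustrative citation list; older authorities cited only by law report will also be flagged, which is acceptable for a manual-check workflow:

```python
import re

# England and Wales neutral citations follow "[YYYY] COURT number",
# e.g. [2021] EWCA Civ 448 or [2025] EWHC 1234 (Fam).
NEUTRAL = re.compile(r"\[\d{4}\]\s+(UKSC|UKPC|EWCA|EWHC|EWFC|EWCOP)\b.*?\d+")

def flag_unverifiable(citations):
    """Return citations with no neutral-citation pattern to look up."""
    return [c for c in citations if not NEUTRAL.search(c)]

cites = [
    "Re H-N and Others [2021] EWCA Civ 448",
    "Smith v Jones (a plausible-sounding but unverified authority)",
]
print(flag_unverifiable(cites))
# Only the second, patternless entry is flagged for manual checking
```

A flagged citation is not necessarily fake — but every citation, flagged or not, should be opened and read on a reputable source before it is relied on.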

After the June 2025 “fake authorities” judgment, the direction of travel is obvious: courts will increasingly treat fabricated or careless citations as serious misconduct where professionals are involved, and as a significant credibility issue where litigants are involved.

8. A real-world cautionary tale: Mata v Avianca

Even outside the UK, courts have reacted strongly when lawyers filed AI-generated fake authorities. The widely cited US case Mata v Avianca resulted in sanctions after fabricated case citations were submitted. It is not “UK law”, but it is a stark illustration of what happens when verification collapses.

Why mention it here? Because the underlying professional lesson travels: courts do not have time for invented law, and they should not have to spend scarce judicial time correcting avoidable errors.

9. What this means for litigants in person (practical guidance)

1) Use AI to organise, not to “source” law. AI is excellent for structure, headings, summaries, chronologies and drafting tone. It is unreliable as a sole source of legal authority.

2) Keep it child-focused (family cases). Remove insult, speculation and “character assassination”. Judges need facts, evidence, and impact on the child.

3) Treat every AI output as a draft. You are responsible for what you file. Read it. Edit it. Make sure it matches your evidence.

4) Verify every citation. If you cannot open the case or locate it on a reputable database, do not rely on it.

5) Don’t upload confidential material into public AI tools. Safeguarding details and private communications should be handled carefully. Follow the Judicial Office warnings on confidentiality.

6) Aim for shorter, clearer documents. Judges do not reward length. They reward relevance. A focused 6–10 pages often lands better than a sprawling 30.

7) If you’re stuck, get human oversight. A short consultation to sanity-check structure, compliance with directions, and relevance can prevent months of damage.

10. What this means for the justice system: guardrails, not barriers

If the system responds to AI by “closing ranks” and shaming litigants, it will fail. People will still use AI — but they will do so in worse, more chaotic ways. A better approach is to develop common standards that increase quality and reduce burden.

In practice, that means three things.

A) Judicial clarity

Courts and judiciary leadership can help by setting clear expectations about what is acceptable in written submissions — particularly around citation verification and disclosure of AI use where relevant. The Judicial Office guidance is already laying the foundation here.

B) Procedural literacy for court users

Most problems I see are not “bad people”. They are overwhelmed people. The system needs short, accessible, official pathways explaining (for example) what a directions hearing is, how to comply with an order, how to prepare a bundle, and how to draft a witness statement that is relevant rather than reactive.

C) Responsible support models

This is where the best “shake up” lies: hybrid support that uses AI to accelerate organisation and drafting, with human oversight to ensure compliance, accuracy, relevance and tone. That model benefits everyone: the litigant, the other party, and the court.

11. A note on professional standards (and why it still matters to LiPs)

When professionals file inaccurate material, the consequences can be severe, including regulatory referral. That was made explicit in the June 2025 judgment dealing with false citations.

LiPs are not held to the same professional code — but the practical consequences can still be harsh: credibility erosion, judicial impatience, adverse costs risks in some contexts, and (most importantly) a judge simply not trusting what they are reading. In family court, loss of credibility can be profoundly damaging.

This is why “AI literacy” is not an academic luxury. It is a procedural survival skill.

12. Conclusion: the future is responsible AI, not no AI

AI is in the courtroom ecosystem now. The judiciary is preparing for it. Regulators are warning about it. The profession is adapting to it. The question is not whether litigants in person will use AI — they already are.

The question is whether we will build a culture of responsible use.

Used recklessly, AI produces noise: invented authorities, misunderstood legal tests, and sprawling submissions that burden the court. Used properly, it can produce clarity: structured chronologies, coherent statements, and focused issues that help the court get to the real substance of the case.

If we care about access to justice, we cannot treat litigants in person as an administrative irritation. We should treat them as court users with rights and responsibilities — and we should equip them with tools and guardrails that allow them to participate meaningfully.

That is the “AI revolution” that matters: not chaos, but capability.



If you want structured, responsible help using AI to prepare court documents (without risking accuracy or credibility), you can book a short consultation.


Regulatory & Editorial Notice (JSH Law): This article is published for general information and public-interest commentary only. It does not constitute legal advice and should not be relied upon as such. Where this article refers to third-party sources (including court judgments, guidance, regulator publications, media reporting, or external organisations), those references are provided for context and convenience; JSH Law does not control or endorse third-party content and cannot guarantee its accuracy, completeness, or continued availability. Court users should always consult the original primary sources (including the Family Procedure Rules, Practice Directions, and judgments) and obtain appropriate professional advice for their specific circumstances.

When Court Data Disappears: Why Transparency in Family Courts Matters More Than Ever

In February 2026, the Ministry of Justice ordered the removal of a major archive of court listing data, citing data protection concerns and alleged misuse involving AI. On the surface, it looked like a dispute about compliance. In reality, it raises a far more serious question: what happens when the justice system becomes less visible? For families navigating private law disputes, safeguarding allegations and prolonged delay, transparency is not a political slogan — it is the difference between understanding how the system works and feeling powerless within it.

Key points (read this first)

  • “Open justice” is not a vibe. It is a constitutional principle: the public must be able to see justice being done — in practice, not just in theory.
  • The Courtsdesk database mattered because it made magistrates’ court activity discoverable at scale — across regions, trends and time — in a way ordinary listings often do not.
  • The MoJ/HMCTS position has centred on data protection and alleged unauthorised sharing with an AI third party (including potentially sensitive identifiers). That is a serious issue — but it doesn’t automatically justify a “delete the archive” outcome.
  • There is now a live policy tension: privacy compliance vs public scrutiny. The correct answer is not to pick one. It is to design lawful access with safeguards.
  • AI changes the stakes. It can expose systemic court failures (delays, inconsistency, outcomes), but it can also amplify privacy harm if governance is weak.
  • What to watch next: licensing frameworks, official listing portals, retention/archiving rules, and whether any independent oversight is built into the “new” regime.

If you only have 60 seconds: the question isn’t “should court data exist?” — it’s “who controls access, under what rules, with what accountability?”

When Court Data Disappears: Courtsdesk, the MoJ Deletion Order, and What “Open Justice” Means in the AI Age

By Jessica Susan Hill | Legal Consultant & McKenzie Friend | JSH Law Ltd

In February 2026, a story surfaced that should make every lawyer, journalist and court-user sit up: the Ministry of Justice (via HMCTS) instructed a private platform, Courtsdesk, to delete what was widely described as the UK’s largest archive of court reporting data. The dispute was framed as a data protection breach involving AI. Critics called it a major blow to open justice.

This isn’t a niche media row. It’s a governance problem with a constitutional wrapper. Because once court information becomes searchable at scale, it becomes auditable. And once the system becomes auditable, it becomes accountable.

1) What happened — and why the link you saw may have “stopped working”

If you clicked a share link to a paywalled newspaper, you’ll often get a broken experience (or a login wall). But the underlying issue is very real: in early-to-mid February 2026, multiple sources reported that the MoJ/HMCTS instructed Courtsdesk to remove court listing/archival data from its platform. The matter was then debated in Parliament, with ministers stating that action was taken because of data protection concerns and alleged unauthorised sharing with an AI company.

In the House of Commons debate on 10 February 2026, the government position was put bluntly: HMCTS stopped sharing data and instructed the company to remove data from its digital platform because the government considered personal data had been put at risk and/or shared in breach of agreement. (Hansard: “Court Reporting Data”). Read the Commons debate (Hansard).

The House of Lords revisited similar themes on 11 February 2026, referencing alleged sharing of “private, personal and legally sensitive information” with a third-party AI company, including potentially addresses and dates of birth of defendants and victims. Read the Lords debate (Hansard).

Meanwhile, journalist bodies and open justice advocates argued that the deletion demand would reduce practical visibility of magistrates’ courts — the engine room of criminal justice — and undermine reporting capacity nationwide. NUJ response (11 Feb 2026).

Subsequent coverage indicated that the government later paused the deletion/purge approach and explored alternative licensing or arrangements, following significant public pressure and campaigning (including within national media). One example: The Times: MoJ halts purge of court archive (published Feb 2026). (Paywalled, but relevant for context and sequence.)

2) What is Courtsdesk — and why journalists cared

Courtsdesk is typically described as a platform that made it easier for journalists to discover and track magistrates’ court hearings — and to keep a searchable archive of what had been listed. The word “archive” matters. Without it, reporting becomes a daily scramble: you can see “today’s” list (sometimes), but you cannot easily analyse what happened across a month, a year, or a decade, and you cannot robustly check what patterns repeat across courts.

That changes the reporting model. Instead of “we got a tip and attended a hearing”, journalists can ask structured questions like:

  • Which courts are repeatedly listing the same offence type and outcome?
  • Are there geographical disparities in sentencing outcomes (controlling for offence and prior record)?
  • Is a particular safeguarding issue rising (domestic abuse, coercive control, breaches, stalking)?
  • Are certain hearings routinely not listed, listed late, or listed inaccurately?
  • Are “open” hearings being effectively closed by practical invisibility?

In short: a discoverable, searchable dataset turns open justice into something measurable. That is precisely why both open justice advocates and public interest reporters reacted so strongly.

For a short overview of the controversy as reported at the time: Legal Cheek (11 Feb 2026). For a more analytical legal-media perspective: Wiggin LLP commentary (16 Feb 2026).

3) The MoJ/HMCTS case: “data protection” and alleged sharing with AI

The government’s public position, as reflected in parliamentary statements, has been that data protection responsibilities were engaged. The allegation was not merely that the data existed, but that data was used or shared in a way that was not authorised by the relevant agreement — and that the information at issue could include sensitive personal identifiers.

In the Commons debate, MPs referenced the passing of information to an AI company, including addresses and dates of birth. You can read the relevant passages directly in Hansard: Court Reporting Data (Commons, 10 Feb 2026). The Lords debate similarly framed the core concern as sharing private/personal legally sensitive information with a third-party AI company: Court Reporting Data (Lords, 11 Feb 2026).

Let’s be clear: if victim or defendant identifiers were exposed or processed without a lawful basis, proper security, or appropriate contractual control, that is not a minor technicality. UK GDPR compliance is not optional — particularly where data could create direct risk (victim location, stalking risk, retaliation, intimidation, vigilante harm).

But there is a second question — and this is where policy and constitutional principles collide: even if a breach occurred, does the proportionate remedy have to be “delete the archive”? Or is the correct remedy:

  • Stop the unauthorised processing,
  • Investigate,
  • Implement governance, redaction, licensing and audit controls,
  • And preserve the public-interest value of the dataset?

In other regulated sectors, “burn the library” is rarely considered an intelligent response to a governance failure. You fix governance. You don’t erase institutional memory.

4) What “open justice” actually requires (and what it doesn’t)

“Open justice” is often described as a constitutional principle in common law: justice must be administered in public, with reporting permitted, because scrutiny is a safeguard against arbitrariness and abuse. It supports legitimacy and public confidence.

But open justice is not absolute. Courts can restrict reporting, anonymise parties, hold parts of hearings in private, or impose reporting restrictions where necessary and proportionate — especially to protect children, victims, national security, or the integrity of proceedings.

Here’s the practical point: open justice collapses when information is technically “available” but realistically undiscoverable. If court lists are incomplete, delayed, inaccurate, scattered, or accessible only through relationships and workarounds, then public scrutiny becomes selective and fragile.

A searchable archive changes the baseline. It doesn’t guarantee perfect scrutiny, but it makes scrutiny possible at scale.

The NUJ response captures the concern in direct terms: the state must take data protection seriously, but journalists are worried about the effect on their ability to do their job. NUJ: deletion order response.

5) The real issue: discoverability, not secrecy

Most people misunderstand how court reporting works. They think journalists can simply “look up” what is happening in court.

In practice, magistrates’ courts are high-volume. Hearings move. Lists change. Data may be published late, inconsistently, or in formats that are difficult to search. Court staff are under pressure. Press offices (where they exist) are stretched. The result is that what is formally “public” can become practically opaque.

So when people say “this undermines open justice,” they may not mean “the government is hiding a single case.” They mean: remove the infrastructure of discoverability and you reduce systemic scrutiny.

The wider concern is that once the system is not audited at scale, dysfunctional patterns persist:

  • Overlisting and adjournment churn;
  • Chronic delay;
  • Inconsistent listing practices;
  • Variable use of reporting restrictions;
  • Localised cultures that drift without challenge.

This is where AI becomes relevant — not as hype, but as a tool. AI is exceptionally good at extracting patterns from messy, fragmented data. And patterns are exactly what the justice system needs to be forced to confront.

6) AI: the uncomfortable accelerator of accountability

Here is the uncomfortable truth: AI makes “open justice” more powerful, because it can transform raw listings and outcomes into insight:

  • Where are outcomes diverging without explanation?
  • Which courts are systematically underperforming on timeliness?
  • Which offence types are rising or falling?
  • Do bail decisions correlate with geography in ways that look unjustified?
  • Are certain safeguarding concerns being deprioritised?

For the public, this can mean better scrutiny and informed reform. For institutions, it can feel like a loss of narrative control.

But AI also increases privacy risk. Aggregation is a form of power: data that is safe in one context can become dangerous in another when combined, enriched, or made searchable. That is why governance matters.

The question is not “AI or no AI.” It is: who is allowed to process court data with AI, under what licence, with what redaction, with what audit trail, and with what sanctions for misuse?

7) Data protection and open justice can coexist — if you design for both

If there was an unauthorised transfer of personal data to a third-party AI provider, that needs to be addressed. Strongly. But the correct fix is not necessarily deletion. The correct fix is a governance framework that takes seriously both:

  1. Lawful processing and security (UK GDPR; DPA 2018; contractual controls; access logs; DPIAs); and
  2. Open justice functions (discoverability; auditability; press access; public interest research).

A mature framework would include:

(A) Role-based access

Not everyone needs the same level of detail. A press-accredited journalist may need more than the general public. An academic researcher may need a structured dataset but not identifiers. A safe model is tiered access with clear rules.

(B) Default minimisation and redaction

Listings can be published in a way that is still meaningful but reduces harm: names may be necessary for open justice in many cases, but addresses and dates of birth generally aren’t. A “privacy by design” listing format is possible.
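To make the idea concrete, here is a minimal sketch of what field-level minimisation could look like in practice. The field names and the choice of "public" fields are purely illustrative assumptions; no real HMCTS listing schema is implied.

```python
# Illustrative sketch of "privacy by design" listing minimisation.
# Field names and the choice of public fields are hypothetical;
# no real HMCTS listing schema is implied.

PUBLIC_FIELDS = {"court", "hearing_date", "case_number",
                 "defendant_name", "offence", "outcome"}

def minimise_listing(record: dict) -> dict:
    """Keep only fields judged necessary for open justice,
    dropping direct identifiers such as address and date of birth."""
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

raw = {
    "court": "Manchester Magistrates' Court",
    "hearing_date": "2026-02-10",
    "case_number": "01AB1234567",
    "defendant_name": "J. Smith",
    "address": "1 Example Street",    # dropped on publication
    "date_of_birth": "1990-01-01",    # dropped on publication
    "offence": "Theft",
    "outcome": "Adjourned",
}

public = minimise_listing(raw)
assert "address" not in public and "date_of_birth" not in public
```

The point of the sketch is that minimisation can be a default applied at the point of publication, rather than a manual redaction exercise after the fact.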

(C) Contractual control over processors

If AI tools are used, the relationship between controller and processor must be contractually controlled, audited, and limited. “Testing” is still processing. “Internal development” is still processing.

(D) Audit logs and sanctions

If a platform is given access to sensitive data, there must be a reliable audit trail and enforceable consequences for misuse.
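One well-established pattern for a reliable audit trail is a hash-chained (tamper-evident) log, where each entry commits cryptographically to the entry before it. The sketch below is illustrative only; a production system would also need secure storage, trusted timestamps, and digital signatures.

```python
# Minimal sketch of a tamper-evident (hash-chained) access log.
# Illustrative only: real audit systems also need secure storage,
# trusted clocks, and signatures.
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an entry whose hash covers both the entry and the previous hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev = "0" * 64
    for item in log:
        payload = json.dumps(item["entry"], sort_keys=True)
        if item["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != item["hash"]:
            return False
        prev = item["hash"]
    return True

log = []
append_entry(log, {"user": "press-001", "action": "search", "when": "2026-02-10T09:00Z"})
append_entry(log, {"user": "press-001", "action": "export", "when": "2026-02-10T09:05Z"})
assert verify_chain(log)
```

Because each hash depends on everything before it, silently editing or deleting an entry is detectable, which is exactly the property an enforceable sanctions regime needs.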

This is the kind of approach the state should model. It’s what we demand of the private sector. The justice system should not be a governance laggard.

8) “Just use official channels” is not a sufficient answer

One argument raised in public discussion is that journalists can still access listings through official HMCTS channels, so the deletion of a private archive is not fatal.

Here’s the hard reality: official availability does not necessarily equal practical usability. The difference between:

  • a fragmented set of daily lists, and
  • a searchable, longitudinal archive

is the difference between “seeing a hearing” and “auditing a system”.

It’s the audit function that scares people — and it’s the audit function that reform needs.

For contemporaneous legal-sector analysis and a timeline-style overview, see: Wiggin LLP commentary.

9) The proportionality question: why “delete it” feels extreme

When government acts, it must act proportionately — especially when its actions collide with constitutional principles.

If the problem was a specific breach, a proportionate response normally looks like:

  • Stop the unlawful processing immediately;
  • Preserve evidence;
  • Investigate scope and impact;
  • Notify where legally required;
  • Fix governance;
  • Implement redaction and access controls;
  • Resume service under a compliant licence.

Deleting a historic archive can be justified in certain cases — for example, if the archive itself is irredeemably unsafe and cannot be lawfully held. But that is a high threshold. And if that threshold is met, the next question is: why was the data shared in that form in the first place, and why was it not already governed appropriately?

Open justice is a public asset. When you destroy an archive that underpins scrutiny, you don’t merely “solve” a compliance problem — you erase a public accountability mechanism.

10) What this means for litigants, victims and the public

This is not only about journalists. It touches:

Victims and vulnerable witnesses

Privacy matters. Safety matters. If addresses/DoBs are handled recklessly, it can cause real-world harm. A governance regime must centre safeguarding and risk. The state is right to be strict about that.

Defendants

Defendants have rights too. Public identification can be lawful and appropriate in open court, but bulk data aggregation can create long-tail harm (employment, housing, vigilantism), particularly where cases end in acquittal or discontinuance. This is why minimisation and careful retention rules matter.

The public

The public interest in open justice is not abstract. It includes the ability to scrutinise how domestic abuse is treated, how repeat offenders are sentenced, how grooming cases are prosecuted, and whether systemic failures are being ignored.

The debate is often framed as “privacy vs transparency.” A better framing is: “privacy and transparency with engineering-grade governance.”

11) A practical blueprint for a lawful court data ecosystem

If we want open justice that survives the AI era, we need to stop improvising and start designing. Here is a blueprint that would satisfy most of the legitimate concerns on all sides:

  1. Define a canonical “public listing dataset” with minimised fields (no addresses; no full DoB; protect victims by default where appropriate).
  2. Publish in a consistent, machine-readable format so that “discoverability” is not dependent on private scraping or informal relationships.
  3. Implement a press and research licence with tiered access, clear contractual controls, audit logs, and enforcement.
  4. Create a secure research environment (think “data safe haven”) where higher-sensitivity data can be used for public-interest research under supervision.
  5. Mandate DPIAs for any new processing at scale, including any AI model training or automated analytics.
  6. Independent oversight: an external advisory panel including press, victims’ advocates, privacy experts and court users.

If you work in legal ops, you’ll recognise this: it is the same control architecture we use for health data, financial data, and regulated client data. The justice system deserves no less.

12) What you can do if you care about this

  • Read the parliamentary record and compare the stated rationale with the real-world impact: Commons Hansard (10 Feb 2026) and Lords Hansard (11 Feb 2026).
  • Track journalist-body positions (NUJ is a good start): NUJ statement.
  • Ask the right question of policymakers: “What is the new lawful access model — and who is responsible for ensuring discoverability in practice?”
  • Watch for licensing/market engagement notices and consultation opportunities. (Legal commentary sites often summarise these quickly.)
  • If you are a court user or practitioner, keep records. Transparency is partly built from bottom-up documentation — hearing notices, listings, orders, reasons, and procedural history.

Because here is the punchline: if the system cannot be seen, it cannot be improved. And if it cannot be improved, it cannot be trusted.

Regulatory & Editorial Notice (JSH Law Ltd)

This article is published for general information and public-interest commentary only. It does not constitute legal advice and should not be relied upon as such. JSH Law Ltd is not a firm of solicitors and does not provide regulated legal services. If you require legal advice, you should consult a suitably qualified and regulated legal professional.

Where this article refers to third-party reporting, parliamentary materials, organisations, or public cases, it does so for journalistic, educational, and research purposes. External links are provided for reader convenience; JSH Law Ltd is not responsible for the content of external sites.

© JSH Law Ltd | Company No. 16870438 | Manchester (UK) & Kansas (USA)

The Use of AI in Preparing Court Documents: Why the Civil Justice Council Consultation Matters

The Civil Justice Council has launched an eight-week consultation examining whether new rules are needed to regulate the use of artificial intelligence in preparing court documents. Chaired by Lord Justice Birss, the Working Group is considering whether safeguards or formal declarations should apply when legal representatives use AI to draft pleadings, witness statements and expert reports. The consultation recognises both the efficiency benefits of AI and the risks of hallucinated case citations, fabricated authorities and evidential integrity concerns. Particular focus is placed on witness statements and expert evidence, where authenticity is central to the administration of justice. The consultation closes on 14 April 2026. This article explains what is being proposed, why it matters for litigants in person and legal professionals, and how responsible AI use can strengthen — rather than undermine — credibility in court proceedings. PDF here.

The Use of AI in Preparing Court Documents: Why the Civil Justice Council Consultation Matters

Category: AI & Law / Procedural Updates  |  Audience: Litigants in Person & Legal Professionals (England & Wales)

Key takeaways for litigants in person

  • The Civil Justice Council (CJC) is consulting on whether rules should govern the use of AI in preparing court documents.
  • The consultation closes on 14 April 2026.
  • Proposals include possible declarations where AI has been used to generate substantive content.
  • Administrative uses (spell-check, transcription, formatting) are unlikely to require disclosure.
  • Witness statements and expert reports are likely to face stricter safeguards.

What Is This Consultation About?

The Civil Justice Council (CJC) has published an Interim Report and opened an eight-week consultation examining whether procedural rules are needed to regulate the use of artificial intelligence in preparing court documents.

The Working Group is chaired by Lord Justice Birss and includes members of the judiciary, the Bar Council, the Law Society and academic representatives.

The core question is simple but significant:

Should formal rules govern how legal representatives use AI when preparing pleadings, witness statements, skeleton arguments and expert reports?

The consultation paper explains that AI has enormous potential benefits — but also significant risks, particularly around hallucinated case citations, fabricated material and evidential integrity.

Why This Matters

AI is already being used across the legal sector for:

  • Legal research
  • Drafting pleadings
  • Preparing skeleton arguments
  • Summarising disclosure
  • Drafting witness statements
  • Generating expert reports

The consultation recognises that while AI improves efficiency and access to justice, it also introduces risks including:

  • Hallucinated case citations
  • Invented legal authorities
  • Embedded bias in generated content
  • Deepfake or manipulated evidence
  • Hidden metadata (“white text”) manipulation

The administration of justice depends on reliability. If courts cannot trust documents filed before them, confidence in the system erodes.

What the Working Group Proposes

The consultation distinguishes between:

  • Administrative uses (spell-check, formatting, transcription, accessibility tools)
  • Substantive generative uses (AI drafting legal argument, evidence, or expert analysis)

The Working Group’s emerging position suggests:

  • No additional rule required for statements of case or skeleton arguments, provided a legal professional takes responsibility.
  • Stricter controls for witness statements, particularly trial statements.
  • Possible declarations confirming AI has not generated witness evidence.
  • Amendments to expert report statements of truth to require disclosure of AI use.

Witness Statements: The Most Sensitive Area

The report strongly indicates that generative AI should not be used to create or alter substantive witness evidence.

The concern is straightforward:

  • Witness statements must be in the witness’s own words.
  • AI “improving” phrasing may alter tone, emphasis or meaning.
  • Courts rely heavily on authenticity.

The Working Group proposes a declaration that AI has not been used to generate, embellish or rephrase evidence in trial witness statements.

That is significant. It signals that evidential integrity is where regulation will likely concentrate.

Expert Reports: Transparency Rather Than Prohibition

Unlike witness statements, expert reports may legitimately use AI tools for:

  • Data analysis
  • Document extraction
  • Technical modelling

However, the consultation proposes that experts should disclose and explain any AI use beyond administrative functions.

The aim is transparency — not prohibition.

What About Litigants in Person?

Notably, this consultation does not focus on regulating litigants in person.

The paper recognises that many unrepresented parties may rely on AI as their only accessible form of legal assistance.

That presents a policy tension:

  • AI can improve access to justice.
  • But AI can generate inaccuracies.
  • Litigants may lack the expertise to verify output.

Any regulation must therefore balance fairness with accessibility.

Should There Be Mandatory AI Declarations?

International approaches vary. Some US courts require certification of AI use. Others do not.

The Working Group is cautious. It recognises that:

  • AI is rapidly integrating into legal software.
  • It may soon be impossible to distinguish “AI use” from ordinary software functionality.
  • Over-regulation may increase delay and satellite litigation.

The likely direction appears to be:

  • No blanket declaration for routine drafting.
  • Targeted safeguards for evidence.
  • Clear professional responsibility.

Why This Consultation Is Forward-Looking

AI is not going away. The question is not whether it will be used — but how responsibly.

The consultation reflects a mature approach:

  • Encourage innovation.
  • Protect evidential integrity.
  • Preserve public confidence.
  • Avoid stifling access to justice.

That balance is critical.

How to Respond to the Consultation

The consultation closes on 14 April 2026.

Responses can be submitted by completing the consultation cover sheet and sending it to:

CJC.AI.consultation@judiciary.uk

Questions about the process can be directed to:

CJC@judiciary.uk

Responses may be submitted in Word or PDF format.

What This Means Practically

If you are preparing court documents using AI:

  • Verify all case citations manually.
  • Check statutory references independently.
  • Do not use AI to generate witness evidence.
  • Retain responsibility for every word filed.
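As one small illustration of the verification point, a tool can sanity-check the *format* of a neutral citation, but a pattern match can never confirm that a case actually exists: every citation must still be checked against the primary source. The regex below covers only a few common court abbreviations and is an assumption for illustration, not a complete list.

```python
# First-pass sanity check on neutral citation FORMAT only.
# A regex match cannot confirm a case exists - every citation must
# still be verified against the published judgment itself.
import re

# Covers a handful of common courts; deliberately incomplete.
NEUTRAL_CITATION = re.compile(
    r"^\[\d{4}\]\s+(UKSC|UKHL|EWCA (Civ|Crim)|EWHC|EWFC)\s+\d+"
)

def plausible_format(citation: str) -> bool:
    """Return True if the string looks like a neutral citation."""
    return bool(NEUTRAL_CITATION.match(citation.strip()))

assert plausible_format("[2023] EWCA Civ 123")
assert not plausible_format("Smith v Jones (2023)")
```

A failed format check is a red flag worth investigating; a passed check proves nothing about whether the authority is real.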

AI is a tool. It is not a shield.

A Realistic Perspective

Used responsibly, AI enhances efficiency. Used carelessly, it damages credibility.

The Civil Justice Council is not proposing a ban. It is seeking proportionate governance.

That distinction matters.


Book a 15-minute consultation (phone)

If you are navigating litigation and considering using AI tools, or if you are concerned about AI-generated material in your case, you can book a 15-minute consultation below:

Technology should strengthen your case — not undermine it.


Regulatory & Editorial Notice

This article provides general commentary only and does not constitute legal advice. JSH Law provides litigation support services to litigants in person and does not conduct reserved legal activities. References to consultation materials are for informational purposes only.

You can download the PDF here: Interim-Report-and-Consultation-Use-of-AI-for-Preparing-Court-Documents-2.pdf

Access to Justice Will Not Improve Until Litigants in Person Are Treated as First-Class Legal Tech Users

Why courts, regulators, and legal-tech designers must stop building only for lawyers

“Access to justice” is one of the most repeated phrases in modern legal reform — and one of the least honestly examined in day-to-day court reality.

Across England and Wales, litigants in person (LiPs) now make up a significant proportion of users in family proceedings, civil disputes, tribunals and administrative processes. Yet much of the system — and much of legal tech — still assumes that a lawyer is the default user, and the unrepresented party is the exception.

They are not.

LiPs are a structural feature of the justice landscape. Until courts, regulators, and legal-tech providers explicitly recognise LiPs as first-class stakeholders, “access to justice” will remain aspirational rather than operational.

Key takeaways

  • Litigants in person are not marginal — they are central to how courts now function.
  • Legal tech designed only for lawyers often creates disadvantage for LiPs.
  • Courts can reduce chaos by setting clearer procedural standards and roadmaps.
  • Regulators can unlock innovation by clarifying the line between navigation support and legal advice.
  • Human-centred tools can improve compliance, fairness and efficiency without replacing lawyers.

1. The post-LASPO reality: LiPs are the system, not a problem within it

In a post-LASPO environment, it is common for one or both parties to be unrepresented. That reality increases pressure on judges, listing, court staff, and the opposing party (who may be represented). It also increases the risk of:

  • missed deadlines and procedural missteps
  • overlong or irrelevant bundles
  • adjournments and delay
  • hearings spent explaining process rather than determining issues
  • avoidable unfairness

These are not personal failings. They are predictable outcomes when systems are built around assumptions that no longer match real users.

2. Why most legal-tech tools fail litigants in person

Many tools that work well for professionals become actively unhelpful when applied to LiPs without redesign. Legal platforms typically assume users can:

  • interpret procedural stages and sequencing
  • identify which evidence is relevant (and why)
  • understand directions, service rules, and deadlines
  • use legal terminology accurately
  • separate emotion from issues and evidence

LiPs often cannot do those things consistently — not because they lack intelligence, but because the system is never taught to them, and the learning curve is steep under stress.

What this looks like in practice

When LiPs are unsupported, courts see repeat patterns: missed deadlines, misfiled documents, sprawling narratives, under-evidenced allegations, and confusion about what the court is deciding at each stage. These patterns are not random — they are design signals.

3. What courts must do: procedural clarity (not paternalism)

Courts are not powerless. A high-LiP environment requires courts to treat process design as part of justice delivery.

At minimum, courts should publish LiP-aware standards that clearly define:

  • core document types (e.g., chronology, statement, position statement, schedule of allegations/concerns where relevant)
  • what is needed at each stage (first hearing, directions, fact-finding, final hearing)
  • proportionality expectations for evidence and bundles
  • how to comply with directions and what happens if parties do not

Judges often explain process in court. The problem is inconsistency, stress, and the lack of a repeatable structure. Written roadmaps and standardised expectations reduce friction for everyone.

4. The regulator’s role: legitimising navigation tools without fear

One of the biggest barriers to LiP-focused legal tech is regulatory uncertainty. Developers and support services are often risk-averse because they fear crossing into “legal advice”.

Regulators can unlock responsible innovation by drawing a clearer line between:

  • procedural navigation (what the process is, what documents are, how to organise information, how to comply with directions), and
  • legal advice (what someone should do legally, the merits of their case, or how the court is likely to decide).

Navigation support vs legal advice (simple framework)

Usually safe procedural support:

  • Process: explaining stages (e.g., directions → fact-finding → final hearing)
  • Compliance: helping track deadlines and service requirements
  • Organisation: structuring a chronology, index, exhibits, bundle sections
  • Plain English: translating court orders into clear tasks

Usually crosses into legal advice:

  • Merits: advising whether someone should apply/oppose
  • Strategy: recommending what to plead or concede
  • Outcomes: predicting likely judicial findings/results
  • Representation: acting as if solicitor-client duties exist

5. What “LiP-first” legal tech actually looks like

LiP-centred legal tech does not have to be “AI giving legal advice”. The biggest gains come from tools that help people:

  • understand where they are in the process
  • know what is expected next
  • organise information coherently
  • comply with directions and deadlines
  • present evidence in proportionate, readable form
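As a concrete illustration of the "deadlines" point, the sketch below shows the kind of deadline view such a tool might generate. The direction names and dates are hypothetical examples, not real court directions.

```python
# Illustrative sketch of the deadline tracking a LiP-first tool might offer.
# Direction names and dates are hypothetical examples.
from datetime import date

directions = [
    ("File witness statement", date(2026, 3, 2)),
    ("Serve bundle on other party", date(2026, 3, 16)),
]

def upcoming(directions, today):
    """Sort outstanding directions by due date and flag days remaining."""
    out = []
    for task, due in sorted(directions, key=lambda d: d[1]):
        days_left = (due - today).days
        status = "OVERDUE" if days_left < 0 else f"{days_left} days left"
        out.append((task, due.isoformat(), status))
    return out

for task, due, status in upcoming(directions, date(2026, 2, 20)):
    print(f"{due}  {task}: {status}")
```

Nothing here is legal advice; it is procedural navigation of exactly the kind the framework above treats as usually safe.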

Simple flow diagram: How LiP-first tools reduce friction

  1. Courts publish clear standards: document types, stage-by-stage roadmaps, proportionality, bundle structure.
  2. Regulators clarify boundaries: navigation/compliance tools are legitimised; the “legal advice” line is explicit.
  3. Legal tech designs to the standard: guided workflows with timelines, bundles, checklists, deadlines, plain-English orders.
  4. LiPs comply more easily: better documents, fewer adjournments, clearer issues, fairer hearings.

This is not about replacing lawyers. It’s about reducing avoidable failure points and making procedure intelligible.

6. Why co-design matters: building with, not for, litigants

The most credible way to improve tools for LiPs is co-design: courts, regulators, practitioners, support services, and litigants all informing the build. Without LiPs at the table, products will keep optimising for the wrong user — and courts will keep absorbing the cost.

7. The cost of doing nothing

When systems ignore their dominant user group, the impact is predictable:

  • longer hearings and heavier judicial case management
  • more procedural unfairness and inconsistent outcomes
  • greater emotional and financial harm (especially in family cases)
  • higher public cost through delay and repeat applications

LiP-first design is not only a fairness issue — it is a system efficiency issue.

8. A realistic path forward

Access to justice improves when:

  1. Courts set clear procedural standards and publish roadmaps designed for LiP reality.
  2. Regulators legitimise navigation and compliance tools, and make boundaries explicit.
  3. Legal-tech teams design for human understanding, not just professional efficiency.
  4. LiPs are treated as stakeholders in system design, not problems to be managed.

Call to action

If you are a litigant in person struggling with process — or you work in legal tech, policy, or court-facing innovation — this is a space where practical collaboration matters.

JSH Law works at the intersection of family justice, legal process, and responsible AI-assisted navigation, with a focus on making systems intelligible for real people (not just professionals).

  • Need help structuring a chronology, bundle, or evidence set?
  • Building LiP-centred tools and want practitioner input?
  • Want a repeatable workflow that improves compliance and reduces stress?

Get in touch via the contact page

Regulatory & Editorial Notice (JSH Law)
This article is published for general information and public legal education. It is not legal advice and should not be relied upon as such. Laws, procedural rules, guidance and practice may change. Where this article refers to third-party materials, organisations, or public-interest issues, those references are informational and do not imply endorsement. If you need advice on your specific circumstances, you should obtain independent legal advice from a regulated professional or appropriate support service.

Permission Refused? Using AI to decide what to do next — and when to stop

Judicial Review & AI – Part 8 (Final)


Introduction: the hardest moment in Judicial Review

For many litigants in person, this is the moment that hurts the most.

You have:

  • identified a procedural failure,
  • organised your evidence,
  • complied with the Pre-Action Protocol,
  • issued proceedings,
  • met deadlines,
  • followed the rules.

And then the letter arrives.

Permission refused.

Often with:

  • short reasons,
  • no hearing,
  • and no sense of closure.

At this point, the most important skill is judgment — not persistence.

This final article explains:

  • what a refusal of permission actually means,
  • what realistic options exist next,
  • how AI can help you make rational decisions, not emotional ones,
  • and how to recognise when stopping is the strongest legal move.

What a refusal of permission really means (legally)

At the permission stage, the Administrative Court is not saying:

“You are wrong.”

It is saying:

“This is not a case the High Court should hear.”

That distinction matters.

Permission may be refused because:

  • the claim is not arguable,
  • an alternative remedy exists,
  • the issue is not suitable for Judicial Review,
  • delay is fatal,
  • the grounds are merits-based,
  • or the case is disproportionate.

Some refusals are about substance.
Many are about jurisdiction and restraint.

Understanding which of these applies to your refusal matters.


The court’s institutional position on stopping JR claims

The High Court is deeply conscious of:

  • finality,
  • judicial economy,
  • and the danger of endless litigation.

This is why:

  • permission is filtered on the papers,
  • oral renewals are tightly controlled,
  • repeated applications are discouraged.

Judicial Review is not designed to be:

  • iterative,
  • escalatory,
  • or relentless.

It is designed to be exceptional.


The three lawful options after permission is refused

After refusal, litigants in person usually face three choices:

  1. Seek an oral renewal
  2. Reframe or abandon the JR
  3. Stop — and redirect energy elsewhere

AI can help you evaluate each — but cannot make the decision for you.


Option 1: Oral renewal — when is it justified?

You may request an oral renewal hearing if permission is refused on the papers.

This is not a second bite at the cherry in the ordinary sense.

The court will only engage if:

  • there is a clear error in the refusal reasoning,
  • something material was misunderstood,
  • or the issue was not adequately addressed on the papers.

Oral renewals are not an opportunity to:

  • restate arguments,
  • add new evidence (without permission),
  • re-argue the merits.

How AI helps evaluate oral renewal prospects

AI can assist by:

  • analysing the refusal reasons,
  • comparing them to your grounds,
  • identifying whether the judge addressed the correct issue,
  • flagging whether the refusal turns on:
    • jurisdiction,
    • alternative remedy,
    • or merits drift.

If the refusal is:

  • clearly jurisdictional,
  • clearly about suitability,
  • or clearly about restraint,

an oral renewal is usually not worth pursuing.

AI helps remove hope-based decision-making.


Option 2: Reframing — when JR was the wrong tool

Sometimes permission is refused because:

  • the legal issue exists,
  • but Judicial Review was the wrong vehicle.

Common examples:

  • the issue belongs in an appeal,
  • a complaint route exists,
  • another statutory remedy is available,
  • the problem is systemic but non-justiciable.

This does not mean:

  • you imagined the problem,
  • or the process was flawless.

It means the High Court is not the forum.


How AI helps here

AI can help you:

  • map refusal reasons against alternative routes,
  • identify whether:
    • an appeal can still be pursued,
    • a renewed application is possible,
    • or a non-litigious route exists.

This is strategic redirection, not surrender.


Option 3: Stopping — why this is often the strongest move

Stopping is not failure.

In fact, one of the marks of legal maturity is knowing when a remedy is exhausted.

Continuing after:

  • a clear jurisdictional refusal,
  • no procedural error in the refusal,
  • and no viable alternative framing

often leads to:

  • wasted resources,
  • escalating stress,
  • and reputational damage.

Courts do notice persistence without discipline.


The ethical dimension: AI should reduce harm, not fuel obsession

This is where Law + AI intersects with ethics.

AI can:

  • generate arguments endlessly,
  • suggest variations,
  • keep litigation alive indefinitely.

That does not mean it should.

Responsible AI use means:

  • stopping when law stops,
  • resisting sunk-cost fallacy,
  • recognising diminishing returns.

You are still responsible for decisions.

AI should support clarity, not compulsion.


Common emotional traps after permission refusal

Litigants in person often fall into predictable patterns:

  • “The judge didn’t understand — I just need to explain again.”
  • “If I phrase it differently, it will work.”
  • “Someone must eventually listen.”

These reactions are human — but legally dangerous.

Judicial Review is not persuasion-by-volume.

AI is most valuable when it interrupts emotional escalation, not amplifies it.


Using AI to perform a “JR exit review”

One of the most powerful uses of AI at this stage is a structured exit review.

Questions AI can help you answer:

  • What exactly was refused?
  • On what basis?
  • Is there any legal error in the refusal itself?
  • Is an oral renewal proportionate?
  • What alternative routes exist?
  • What are the costs (financial and emotional) of continuing?

This turns a painful moment into a controlled conclusion.


The reputational aspect litigants rarely consider

Courts are institutional actors.

Repeated:

  • unmeritorious renewals,
  • disproportionate applications,
  • or refusal to accept finality

can affect how future applications are perceived.

Stopping at the right moment preserves:

  • credibility,
  • energy,
  • and future options.

AI can help you see this before damage occurs.


The role of court administration after refusal

Once permission is refused, court interaction typically returns to:

  • administrative closure,
  • compliance with directions,
  • and finality processes operated under HMCTS.

At this stage, clarity matters more than persistence.


What success looks like at the end of a JR journey

Success is not always:

  • permission granted,
  • or a quashing order.

Sometimes success is:

  • forcing a decision via the PAP stage,
  • clarifying the legal position,
  • stopping an unlawful delay,
  • or confirming that JR is not the route.

That knowledge is not wasted.

It is hard-earned legal clarity.


Key Takeaways (for litigants in person)

  • Permission refusal is a jurisdictional decision, not a moral judgment.
  • Oral renewals are narrow and rarely succeed.
  • Reframing is sometimes appropriate; repeating usually is not.
  • Stopping at the right time is a mark of legal strength.
  • AI should be used to:
    • evaluate realistically,
    • reduce emotional escalation,
    • and support principled decisions.
  • Endless litigation is not access to justice.

Judicial Review is exceptional — and knowing when it ends is part of using it lawfully.


Closing the series: what this resource is for

This eight-part series was designed to:

  • demystify Judicial Review,
  • protect litigants in person from procedural harm,
  • show how AI can be used responsibly and ethically,
  • and restore control in situations that often feel powerless.

AI does not replace law.
Law does not bend to persistence.
But clarity — properly supported — restores agency.


Call to Action

If you are:

  • facing a permission refusal,
  • unsure whether to pursue an oral renewal,
  • or need help deciding whether to stop,

you may wish to seek structured, realistic support before taking any further step.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review is discretionary, time-limited, and subject to strict procedural controls.
Permission refusal often represents the lawful end of the process.

Readers should seek independent legal advice where appropriate before pursuing further litigation.

Managing Deadlines, Bundles, and Compliance with AI – Procedural discipline in Judicial Review (where cases are really lost)

Judicial Review & AI – Part 7


Introduction: most Judicial Review cases fail quietly

When Judicial Review claims fail, it is rarely dramatic.

There is no cross-examination.
No damning judgment.
No public vindication or condemnation.

Instead, the claim simply:

  • times out,
  • breaches a rule,
  • fails to comply with a direction,
  • or collapses under procedural non-compliance.

For litigants in person, this is often devastating — not because the issue lacked merit, but because process defeated substance.

This article explains:

  • why procedural discipline is critical in Judicial Review,
  • how deadlines and compliance operate in practice,
  • how AI can be used to prevent procedural failure,
  • and how to avoid the common traps that quietly end claims.

Judicial Review is procedural law, not just public law

Judicial Review sits at the intersection of:

  • public law principles, and
  • strict civil procedure.

It is governed by:

  • CPR Part 54,
  • the Administrative Court Practice Directions,
  • and specific court directions once proceedings are issued.

The High Court expects near-perfect compliance.

Latitude for litigants in person exists — but it is limited.

Courts will not:

  • extend time automatically,
  • rewrite non-compliant documents,
  • excuse repeated procedural failures.

This is why AI, used properly, can be invaluable — not as a strategist, but as a discipline enforcer.


The three procedural pressure points in Judicial Review

Judicial Review claims typically fail at one of three procedural stages:

  1. Time limits
  2. Bundles
  3. Compliance with directions

Each is unforgiving.
Each is manageable — with the right systems.


1. Time limits: the guillotine that does not move

Judicial Review claims must be brought:

  • promptly, and
  • in any event within three months of the decision or failure challenged.

This is not flexible.

Even a strong claim can be refused solely for delay.

Courts repeatedly emphasise this because:

  • delay undermines legal certainty,
  • public bodies must be able to rely on decisions.

Litigants in person often underestimate how quickly time runs — especially where silence or inaction is involved.


Where AI helps with time limits

AI can assist by:

  • calculating elapsed time from key dates,
  • flagging approaching deadlines,
  • distinguishing between:
    • continuing failures, and
    • single decisions with ongoing effects.

However, AI cannot decide when time starts to run.

You must determine:

  • the operative date,
  • whether there is a continuing duty,
  • whether delay is justifiable.

AI helps you see — it does not excuse lateness.
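As an illustration of the mechanical part of this task (and only that part), the three-month backstop can be computed automatically once you have decided on the operative date. This is a minimal sketch using Python's standard library; the function names (`add_months`, `jr_deadline`) are illustrative, and it assumes a simple calendar-month calculation — it does not decide when time starts to run, and "promptly" can mean well before the three-month point.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Same day-of-month `months` later, clamped to the month's last day."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def jr_deadline(operative_date: date) -> date:
    """Three-month backstop from the operative date (CPR 54.5).
    Choosing the operative date remains a human legal judgment."""
    return add_months(operative_date, 3)

def days_remaining(operative_date: date, today: date) -> int:
    """Days left before the backstop; negative means out of time."""
    return (jr_deadline(operative_date) - today).days

# Example: decision dated 10 January 2025, checked on 1 March 2025
print(jr_deadline(date(2025, 1, 10)))                        # 2025-04-10
print(days_remaining(date(2025, 1, 10), date(2025, 3, 1)))   # 40
```

A tracker like this removes arithmetic error, not legal risk: it cannot tell you whether a failure is continuing, or whether delay is justifiable.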


2. Bundles: why presentation equals credibility

Judicial Review is decided largely on the papers.

Judges expect:

  • clean,
  • paginated,
  • indexed bundles,
  • with only relevant material included.

A poor bundle signals:

  • lack of focus,
  • lack of seriousness,
  • lack of procedural understanding.

This affects outcomes — even subconsciously.


What courts expect from JR bundles

A compliant bundle typically includes:

  • the claim form,
  • statement of facts and grounds,
  • evidence (exhibits),
  • relevant correspondence,
  • any court directions.

It must be:

  • logically ordered,
  • consistently paginated,
  • clearly indexed.

Courts will not tolerate:

  • sprawling appendices,
  • duplicated documents,
  • emotional exhibits,
  • unexplained screenshots.

How AI helps with bundles (and where it must stop)

AI is excellent at:

  • ordering documents,
  • checking pagination consistency,
  • generating draft indices,
  • identifying duplicates.

AI must not:

  • decide what is legally relevant,
  • exclude documents without review,
  • alter originals.

Think of AI as your bundle manager, not your legal editor.
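Two of the "bundle manager" tasks above — spotting byte-identical duplicates and producing a draft index — are purely mechanical and can be sketched in a few lines. This is a minimal illustration (function names are hypothetical, and it only catches exact duplicates, not rescanned or renamed copies of the same document); deciding what is legally relevant stays with you.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of file contents, used to spot byte-identical duplicates."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_duplicates(folder: Path) -> dict[str, list[Path]]:
    """Group files in `folder` by content hash; return only duplicated groups."""
    seen: dict[str, list[Path]] = {}
    for p in sorted(folder.iterdir()):
        if p.is_file():
            seen.setdefault(file_digest(p), []).append(p)
    return {h: paths for h, paths in seen.items() if len(paths) > 1}

def draft_index(filenames: list[str]) -> str:
    """Numbered draft index; a human still decides relevance and order."""
    return "\n".join(f"{i:>3}. {name}" for i, name in enumerate(filenames, start=1))
```

Used before filing, a check like this flags duplicated exhibits for human review — it should never silently delete anything.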


3. Compliance with directions: the silent killer

Once proceedings are issued, the court will issue directions.

These may include:

  • deadlines for acknowledgements of service,
  • limits on evidence,
  • formatting requirements,
  • page limits.

Failure to comply is taken seriously.

Courts expect:

  • directions to be read carefully,
  • complied with precisely,
  • or varied formally if impossible.

“I didn’t understand” is rarely enough.


Where AI adds value here

AI can:

  • summarise court directions,
  • convert them into task lists,
  • flag inconsistencies,
  • track compliance status.

This is one of the safest and most valuable uses of AI.

What AI must not do:

  • interpret directions creatively,
  • assume flexibility,
  • replace careful reading.
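The safe half of this workflow — turning directions into a task list and tracking status — can be sketched very simply. This is an illustrative example only (the class and field names are hypothetical): it tracks dates and completion, and deliberately does nothing resembling interpretation of what a direction requires.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DirectionTask:
    """One obligation extracted (by a human) from a court direction."""
    description: str
    due: date
    done: bool = False

def compliance_report(tasks: list[DirectionTask], today: date) -> list[str]:
    """Plain status lines, earliest deadline first: done, due, or OVERDUE.
    Tracking only — reading the direction carefully is still your job."""
    report = []
    for t in sorted(tasks, key=lambda t: t.due):
        if t.done:
            status = "done"
        elif t.due < today:
            status = "OVERDUE"
        else:
            status = f"due in {(t.due - today).days} day(s)"
        report.append(f"{t.due.isoformat()}  {t.description}: {status}")
    return report
```

Run daily, a report like this surfaces an approaching or missed deadline before it becomes a ground for refusal — but an OVERDUE flag is a prompt to act formally (e.g., seek a variation), not an excuse.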

The role of court administration and compliance reality

Judicial Review cases often involve interaction with court systems operated under HMCTS.

This adds complexity:

  • electronic filing systems,
  • automated acknowledgements,
  • varying administrative practices.

AI can help track:

  • what has been submitted,
  • what has been acknowledged,
  • what remains outstanding.

But responsibility remains yours.


Common procedural failures litigants in person make

Judicial Review claims often fail because:

  • documents are filed late,
  • bundles exceed page limits,
  • directions are misunderstood,
  • amendments are made without permission,
  • informal correspondence replaces formal steps.

These failures are rarely cured.

AI helps by enforcing checklists, not by improvising.


Procedural discipline vs flexibility: the court’s view

Courts balance:

  • access to justice,
  • against fairness to public bodies,
  • and efficient use of court resources.

Litigants in person are not expected to be perfect — but they are expected to be organised and serious.

Repeated non-compliance erodes goodwill rapidly.

AI, used properly, helps demonstrate:

  • respect for the process,
  • reliability,
  • proportionality.

Using AI as a procedural “second pair of eyes”

One of the best uses of AI is review, not drafting.

Examples:

  • “Have I complied with every direction?”
  • “Are there any inconsistencies in dates or pagination?”
  • “Is anything missing that the court expects?”

AI excels at spotting patterns and omissions.

It should be used before, not after, filing.


What AI must never be used to do procedurally

AI must not:

  • decide to ignore directions,
  • guess court expectations,
  • file documents autonomously,
  • substitute legal judgment.

Courts expect human responsibility.

AI is invisible to them — your compliance is not.


Key Takeaways (for litigants in person)

  • Judicial Review claims often fail on procedure, not law.
  • Time limits are unforgiving.
  • Bundles signal credibility.
  • Directions must be complied with precisely.
  • AI is most useful as a:
    • deadline tracker,
    • bundle organiser,
    • compliance checker.
  • AI does not excuse lateness or non-compliance.

Procedural discipline is not optional — in Judicial Review, the procedure is the case.


Preparing for the final stage

After permission decisions, litigants face:

  • permission refusal,
  • conditional grants,
  • or limited permission.

The final article in this series addresses:

  • how to respond rationally,
  • how to assess next steps,
  • and how AI can help avoid throwing good money after bad.

Call to Action

If you are:

  • struggling to manage Judicial Review deadlines,
  • concerned about bundle compliance,
  • or unsure how to interpret court directions,

you may wish to seek structured support before procedural errors become irreversible.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review proceedings are governed by strict procedural rules and judicial discretion.
Failure to comply with time limits, directions, or bundle requirements may result in refusal of permission or dismissal of the claim.

Readers should obtain independent legal advice where appropriate.

From Pre-Action Protocol to Permission – Structuring Judicial Review grounds with AI — and avoiding merits traps

Judicial Review & AI – Part 6


Introduction: permission is the real battlefield

Most Judicial Review claims never reach a full hearing.

They fail — quietly and decisively — at the permission stage.

For litigants in person, this can feel bewildering. Everything may feel unfair. The process may have stalled. Appeals may have been ignored. And yet the court refuses permission in a few short paragraphs.

The reason is usually not lack of injustice.

It is poor framing.

This article explains:

  • what the permission stage is actually testing,
  • how Judicial Review grounds must be structured,
  • why merits-based arguments are fatal,
  • and how AI can help enforce discipline, not inflate claims.

What the permission stage is for (in reality)

Under CPR Part 54, the Administrative Court must decide whether a claim is:

  1. Arguable, and
  2. Suitable for Judicial Review.

This is not a mini-trial.
It is a filtering exercise.

Judges are asking:

  • Is this a genuine public-law issue?
  • Is there an alternative remedy?
  • Is the claim focused and lawful?
  • Is it proportionate for the High Court?

If the answer to any of these is “no”, permission is refused.


Why litigants in person struggle most at this stage

Litigants in person often:

  • understand the facts deeply,
  • experience the injustice personally,
  • know exactly what feels wrong.

But Judicial Review does not operate on feelings.

It operates on:

  • duties,
  • legality,
  • jurisdiction,
  • restraint.

The hardest shift is moving from:

“This decision was wrong”
to
“This decision-making process was unlawful.”

AI can help enforce that shift — if used correctly.


The structure of Judicial Review grounds (what the court expects)

Judicial Review grounds are not free-form.

They are expected to follow a disciplined structure:

  1. The decision (or failure) challenged
  2. The legal duty or power
  3. The public-law ground
  4. How the duty was breached
  5. Why Judicial Review is appropriate
  6. The remedy sought

If any of these are missing or muddled, permission is at risk.


Ground 1: identifying the correct target

Your grounds must clearly identify:

  • what is being challenged,
  • when it occurred,
  • who is responsible.

This may be:

  • a refusal,
  • a failure to determine,
  • a procedural decision,
  • or a constructive refusal.

Vague formulations (“the court has ignored me”) almost always fail.

AI can assist by:

  • enforcing specificity,
  • flagging ambiguity,
  • aligning grounds with your chronology.

Ground 2: identifying the legal duty

This is where many claims collapse.

Judicial Review requires:

  • a legal duty,
  • not just a power,
  • and not just an expectation.

The question is:

Was the public body required by law to act — and did it fail to do so lawfully?

Without a duty, there is no unlawfulness.

AI can help:

  • check whether you are assuming a duty,
  • flag where a duty needs to be evidenced,
  • prevent overstatement.

But you must verify the law.


Ground 3: choosing the correct public-law ground

Most JR claims rely on one (sometimes two) grounds:

Illegality

The decision-maker:

  • misunderstood the law,
  • failed to exercise a required power,
  • or acted outside jurisdiction.

Procedural unfairness

The process was unfair because:

  • no reasons were given where required,
  • no opportunity to be heard was provided,
  • mandatory procedure was not followed.

Irrationality

A very high threshold — rarely appropriate for litigants in person.

AI can help prevent the common mistake of:

  • pleading all grounds “just in case”.

Courts view that as lack of focus.


The single biggest mistake: merits drift

Merits drift occurs when:

  • arguments about fairness,
  • disagreement with reasoning,
  • or dissatisfaction with outcomes

creep into what should be a process challenge.

Examples of merits drift:

  • arguing evidence should have been weighed differently,
  • asserting bias without procedural basis,
  • challenging findings of fact.

These are appeal issues — not Judicial Review issues.

AI is particularly useful here:

  • it can flag evaluative language,
  • identify opinion-based phrasing,
  • and force re-framing into procedural terms.

Keeping law and fact separate (critical discipline)

Judicial Review requires:

  • facts to be stated neutrally,
  • law to be applied to those facts,
  • not blended together.

A common error is embedding argument into factual narrative.

AI can help by:

  • separating factual chronology from legal analysis,
  • highlighting where language crosses the line,
  • enforcing neutral drafting.

This separation builds judicial trust.


Alternative remedy: the silent killer of JR claims

Even where unlawfulness exists, Judicial Review may still fail if:

  • an appeal route exists,
  • or another adequate remedy is available.

Courts are firm on this.

You must:

  • identify the appeal route,
  • explain whether it exists in reality,
  • and justify why JR is still appropriate.

This is where litigants in person often underestimate the burden.

AI can help:

  • structure this justification,
  • but cannot invent a lack of remedy where one exists.

Remedy: what you can (and cannot) ask for

Judicial Review remedies are limited.

You may ask for:

  • a decision to be quashed,
  • a matter to be reconsidered lawfully,
  • a duty to be performed.

You cannot ask the High Court to:

  • decide the underlying appeal,
  • substitute its own view of the facts,
  • grant compensation (save in rare cases).

AI can help test whether the remedy sought aligns with JR principles.


How AI should be used at the permission stage

AI is best used as a quality-control tool, not a generator.

Proper uses include:

  • checking internal consistency,
  • identifying merits drift,
  • ensuring each ground maps to evidence,
  • testing whether each ground answers the “so what?” question.

AI should not:

  • expand arguments,
  • multiply grounds,
  • add speculative claims,
  • generate case law without verification.

Permission-stage discipline is about less, not more.


The court’s perspective: what judges scan for first

Judges reviewing permission applications often:

  • skim first,
  • assess focus,
  • test plausibility quickly.

They are alert to:

  • scattergun pleading,
  • emotional language,
  • disproportionate claims.

A tight, restrained set of grounds signals seriousness.


Key Takeaways (for litigants in person)

  • The permission stage is the real test in Judicial Review.
  • Grounds must challenge lawfulness, not outcomes.
  • Identify a legal duty — or the claim fails.
  • Merits drift is the most common fatal error.
  • AI is most useful as a:
    • discipline tool,
    • clarity enforcer,
    • consistency checker.
  • Fewer, stronger grounds beat many weak ones.

If you cannot state your grounds in calm, procedural language, Judicial Review is unlikely to succeed.


Preparing for the final stages

If permission is granted, the case moves into:

  • full pleadings,
  • possible disclosure,
  • and substantive hearing.

But many litigants will face:

  • permission refusal,
  • or a conditional grant.

The final article in this series addresses that moment — and how to respond rationally.


Call to Action

If you are:

  • preparing Judicial Review grounds,
  • unsure whether your case has drifted into merits,
  • or worried about permission-stage refusal,

you may wish to seek structured support before issuing proceedings.

Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review claims are subject to strict procedural requirements and judicial discretion.
Improperly framed grounds may result in refusal of permission and adverse costs consequences.

Readers should seek independent legal advice where appropriate.

Drafting a Pre-Action Protocol Letter with AI Support – Applying lawful pressure before Judicial Review proceedings

Judicial Review & AI – Part 5


Introduction: most Judicial Review cases should never be issued

This may sound counterintuitive, but it is true:

A well-drafted Pre-Action Protocol letter is often more powerful than a Judicial Review claim itself.

For litigants in person, the Pre-Action Protocol (PAP) stage is frequently misunderstood. Some see it as a formality. Others treat it as an emotional complaint.

Both approaches are mistakes.

In Judicial Review, the PAP letter is:

  • a legal warning shot,
  • a compliance test,
  • and a credibility filter.

This article explains:

  • what the Pre-Action Protocol is for,
  • what the court expects from it,
  • how AI can assist without undermining trust,
  • and how to draft a PAP letter that actually changes behaviour.

The legal status of the Pre-Action Protocol in Judicial Review

Judicial Review claims are governed by CPR Part 54 and the Judicial Review Pre-Action Protocol.

Compliance is not optional.

Before issuing proceedings, a claimant is expected to:

  • identify the decision or failure challenged,
  • set out the legal basis of the claim,
  • state the remedy sought,
  • give the proposed defendant a reasonable opportunity to respond.

Failure to comply can result in:

  • refusal of permission,
  • adverse costs consequences,
  • or the court questioning the claimant’s credibility.

For litigants in person, courts will allow some latitude — but not a complete absence of discipline.


What the PAP stage is actually testing

The PAP stage tests four things:

  1. Clarity
    Can you identify the public-law issue precisely?
  2. Legality
    Are you challenging lawfulness, not outcomes?
  3. Proportionality
    Are you seeking a realistic remedy?
  4. Seriousness
    Do you understand the gravity of Judicial Review?

AI can help with all four — if used properly.


What a Judicial Review PAP letter is not

A PAP letter is not:

  • a complaint,
  • a witness statement,
  • a narrative of injustice,
  • a threat-filled ultimatum,
  • a re-argument of the merits.

Letters that read like grievances are often ignored — or responded to defensively.

Judicial Review requires cool precision.


The anatomy of an effective JR Pre-Action Protocol letter

A proper PAP letter has a predictable structure. Courts expect it.

1. Identification of the claimant and proposed defendant

This must be precise.

The letter should clearly identify:

  • who is bringing the claim,
  • which public body is responsible,
  • whether the issue lies with:
    • a court,
    • court administration,
    • or systems operating under HMCTS.

AI can help ensure consistency — but you must choose the correct defendant.


2. The decision or failure being challenged

This is the most important section.

You must state:

  • whether you are challenging:
    • a decision,
    • a refusal,
    • or a failure to act,
  • the date (or period) of that decision or failure,
  • how it arose procedurally.

Vague statements like “my appeal has been ignored” are not sufficient.

AI is useful here to:

  • extract precise dates,
  • strip out emotive language,
  • enforce specificity.

3. The factual background (short and neutral)

This section should:

  • summarise the relevant chronology,
  • refer to documents,
  • avoid argument.

It is not the place for case law or submissions.

AI can help condense longer timelines into a tight factual summary — but it must be reviewed carefully for accuracy.


4. The legal basis of the claim

This is where discipline matters.

You must identify:

  • the public-law ground relied upon:
    • illegality,
    • procedural unfairness,
    • irrationality,
  • and the duty said to have been breached.

You do not need to cite every case.

Over-citation is often counterproductive.

AI can help:

  • ensure the correct ground is identified,
  • prevent drift into merits-based argument,
  • maintain a judicial tone.

5. The remedy sought

This must be realistic and lawful.

Common remedies include:

  • determination of an appeal,
  • reconsideration in accordance with law,
  • provision of reasons,
  • ending an unlawful delay.

You are not asking the court to decide the underlying case.

AI can help test whether the remedy aligns with Judicial Review principles.


6. Timeframe for response

The Protocol suggests 14 days in most cases.

Shorter periods may be justified where:

  • delay is ongoing,
  • rights are being prejudiced.

AI can help flag proportionality risks here.


7. Warning of intended proceedings (without aggression)

The letter should state calmly that:

  • Judicial Review proceedings will be issued if the issue is not resolved,
  • subject to the response received.

Threatening language weakens credibility.
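The seven-part anatomy above is a fixed skeleton, which makes it a good candidate for a mechanical completeness check. This is a minimal sketch (the section titles and function name are illustrative paraphrases of the structure described above, not an official template): it assembles whatever you have drafted into the expected order and loudly flags any section you have not yet written, rather than inventing content for it.

```python
# Section order follows the PAP letter anatomy described above.
PAP_SECTIONS = [
    ("Claimant and proposed defendant", "identify both parties precisely"),
    ("Decision or failure challenged", "what, when, and how it arose"),
    ("Factual background", "short, neutral chronology"),
    ("Legal basis", "the public-law ground and the duty breached"),
    ("Remedy sought", "realistic and lawful"),
    ("Timeframe for response", "usually 14 days"),
    ("Intended proceedings", "calm statement, no threats"),
]

def pap_skeleton(entries: dict[str, str]) -> str:
    """Assemble a plain-text skeleton from human-drafted sections.
    A missing section is flagged, never silently omitted or invented."""
    parts = []
    for i, (title, hint) in enumerate(PAP_SECTIONS, start=1):
        body = entries.get(title, f"[MISSING — {hint}]")
        parts.append(f"{i}. {title}\n{body}")
    return "\n\n".join(parts)
```

A check like this enforces completeness and order; the substance of each section, and the final letter, must always be human-written and human-approved.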


Tone: why neutrality wins

Judicial Review correspondence is often read by:

  • government lawyers,
  • legal advisers,
  • senior officials.

They are trained to assess risk.

A neutral, legally framed PAP letter signals:

  • seriousness,
  • competence,
  • procedural awareness.

AI can help remove:

  • emotional phrasing,
  • accusatory language,
  • rhetorical flourishes.

This is one of its greatest strengths.


Common PAP mistakes litigants in person make

Judicial Review PAP letters often fail because they:

  • argue the merits,
  • accuse judges of bias,
  • demand apologies or compensation,
  • include excessive attachments,
  • misstate the legal basis,
  • threaten media exposure.

AI can help identify and strip these out — if you let it.


How AI should be used in PAP drafting (properly)

AI should be used to:

  • structure the letter,
  • ensure completeness,
  • check tone consistency,
  • cross-reference facts to evidence,
  • flag missing elements.

AI should not:

  • invent legal duties,
  • escalate tone,
  • add speculative arguments,
  • generate case law without verification.

The final letter must always be human-approved.


What happens after the PAP letter is sent

One of three things usually happens:

  1. The issue is resolved
    The appeal is listed, reasons are given, or delay ends.
  2. A reasoned refusal is issued
    This clarifies whether JR is viable.
  3. No adequate response
    This strengthens the JR claim.

AI can assist in analysing the response — but it cannot decide next steps for you.


Why courts care about PAP compliance

At the permission stage, judges often ask:

  • Was the issue raised properly?
  • Was the public body given a chance to respond?
  • Was litigation proportionate?

A good PAP letter answers these questions before they are asked.

A poor one raises doubts immediately.


Key Takeaways (for litigants in person)

  • The Pre-Action Protocol stage is substantive, not procedural.
  • Most JR cases should resolve here.
  • A PAP letter must challenge lawfulness, not the merits of the decision.
  • Tone matters as much as content.
  • AI is most valuable for:
    • structure,
    • neutrality,
    • consistency,
    • error prevention.
  • A strong PAP letter often determines the outcome before court.

If you cannot clearly articulate the public-law failure in a PAP letter, Judicial Review is unlikely to succeed.


Preparing for the next step

If the PAP stage does not resolve matters, the next step is:

  • issuing Judicial Review proceedings,
  • drafting Statement of Facts and Grounds,
  • and preparing for the permission stage.

That process is unforgiving.

AI can help — but only if everything so far has been done properly.


Call to Action

If you are considering Judicial Review and want help with:

  • drafting a compliant Pre-Action Protocol letter,
  • ensuring your case is framed correctly,
  • or understanding whether proceedings are proportionate,

you may wish to seek structured support before issuing any claim.


Regulatory & Editorial Notice (JSH Law)

This article is provided for general information only and does not constitute legal advice.

Judicial Review proceedings are governed by strict procedural rules.
Failure to comply with the Pre-Action Protocol may result in refusal of permission or adverse costs consequences.

Readers should obtain independent legal advice where appropriate.