The “Vibe Lawyer” Moment: AI, Litigants in Person, and the Coming Shockwave for the Family Courts

Litigants in person are being called “vibe lawyers” for using AI to draft complaints and court documents. But behind the headlines lies a harder truth: people are turning to artificial intelligence because they cannot afford representation in an increasingly complex and overstretched justice system. Judges are right to be concerned about fake citations and procedural errors. Yet dismissing AI use outright misses the deeper issue — access to justice has been under strain for years, and technology is now filling the gap.

By Jessica Susan Hill | JSH Law

Key Takeaways (Read This First)

  • AI is already changing litigation behaviour — the judiciary is explicitly preparing for a surge in AI-generated claims across civil, family and tribunals.
  • The risk isn’t “AI” — it’s unverified AI: fabricated authorities and confidently wrong submissions waste court time and damage credibility.
  • LiPs are not “wreaking havoc” for fun. Many are doing what they must to participate in a system they cannot afford to navigate with representation.
  • The solution is guardrails, not barriers: verification standards, procedural literacy, and responsible workflows that help the court as well as the litigant.
  • Family proceedings are high-stakes. Used properly, AI can improve clarity and evidence organisation; used badly, it can derail safeguarding analysis and case management.

1. Why this matters now

“Vibe lawyers” is a catchy label, but it risks obscuring a far more serious reality: litigants in person are using AI tools to draft complaints, defences, witness statements and skeleton arguments at scale — and the courts are already feeling the impact. The phenomenon is now so visible that Sir Geoffrey Vos (Master of the Rolls, Head of Civil Justice) has explicitly warned that the judiciary must prepare for an “AI revolution” that may vastly increase the number of civil, family and tribunal claims the justice system must manage. His speech is worth reading in full.

Let’s be direct: the justice system in England and Wales is already stretched. Many court users already experience the process as opaque, intimidating and unaffordable. That is not a personal failing of litigants — it is a structural reality. AI is entering a pressure-cooker and magnifying what was already there: information asymmetry, procedural complexity, delay and the gulf between a represented party and an unrepresented one.

So, yes — judges and practitioners are right to be concerned about inaccurate AI-generated material clogging lists and adding burden to judges who are already firefighting. But it is also true that, in the medium term, AI could become one of the most significant access-to-justice tools we have ever seen. Both truths can exist at once.

2. The judiciary is not guessing — it is responding to lived reality

We are past the point of theoretical debate. The judiciary has been issuing speeches and guidance precisely because AI use is now operationally relevant. Beyond speeches, the Judicial Office has published updated guidance addressing risks including confidentiality, bias and “hallucinations” — where AI produces plausible but incorrect information. The October 2025 judicial guidance explicitly flags the danger of fictitious citations and misleading legal content.

Sir Geoffrey Vos has also repeatedly articulated a simple “core rules” approach: understand what the tool is doing, do not upload private/confidential data into public tools, and check the output before using it for any purpose. He set that out again in October 2025.

This is not anti-technology. It is the judiciary doing what it should do: protecting the integrity of the process while acknowledging that new tools are changing behaviour.

3. The real problem: “confidently wrong” submissions

Generative AI tools can draft impressive text quickly. But they do not “know” the law. They predict language. That difference matters profoundly in litigation. A well-written paragraph that contains an invented case, a misquoted statute or an inaccurate procedural route is not merely unhelpful — it can actively undermine a party’s credibility and force the court to spend additional time cleaning up the mess.

The legal profession has already seen what happens when verification fails. In June 2025, the Divisional Court (Dame Victoria Sharp P and Johnson J) dealt with the now widely-reported “fake authorities” problem in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank, where false citations and inaccurate quotations were placed before the court, with suspected or admitted use of AI tools without proper checks. The judgment is publicly available and is required reading for anyone tempted to treat AI output as “good enough”.

Importantly, that judgment is aimed at lawyers — because professionals are held to professional standards. But the underlying point applies to everyone: accuracy is non-negotiable in court work. You can be passionate, traumatised, exhausted, and still required to file documents that are factually and legally sound.

4. Why litigants in person are using AI (and why the “money pit” narrative is wrong)

Many litigants in person feel they are treated as an administrative inconvenience — or worse, as a “cost centre” rather than a rights-holder. I understand why that perception forms. The system can be brutal: forms, deadlines, practice directions, directions hearings, orders you must interpret and comply with under stress. In private law children proceedings, you may be trying to protect a child, manage safeguarding concerns, and preserve your own mental stability while preparing documents that lawyers train for years to produce.

For a growing number of people, AI has become the first accessible “translator” of legal language. It can explain terminology, propose a structure for a statement, generate headings for a skeleton argument, and help a person who feels overwhelmed take a first step. That is why it feels like a shake-up. It is not because LiPs are trying to harm the system. It is because they are trying to participate in it.

And here is the hard truth: if access to representation continues to shrink in practice — whether by cost, availability, or scope — more people will use AI. That is not something a press headline can reverse. It is a reality the system must incorporate.

5. Family court is the pressure point

Family proceedings are where AI misuse can become most dangerous, because the stakes are often immediate and human: the child’s living arrangements, contact, safeguarding, allegations of domestic abuse, coercive control, substance misuse, mental health, relocation, schooling — the list is endless.

Private law children cases are ultimately governed by the welfare principle in the Children Act 1989, section 1. The court’s job is not to reward the best writer. It is to determine what best meets the child’s welfare needs. But poor drafting can still distort the court’s understanding of what matters.

And family procedure is its own ecosystem. The Family Procedure Rules and associated Practice Directions are not optional reading; they are the architecture of how your case moves through the system. PD12J (domestic abuse and harm) is particularly critical where abuse is alleged, because it shapes fact-finding decisions, safeguarding analysis and protective measures.

Where AI is used badly in family court, I commonly see the same patterns (and judges see them too):

  • Misstating legal tests (e.g., confusing civil and criminal standards, or quoting the wrong threshold framework).
  • Over-inclusion: 30-page narratives where only a fraction of the material is evidentially relevant.
  • Inflammatory language that escalates conflict rather than centring the child.
  • Procedural fantasy: “applications” and “orders” that do not exist or are not procedurally available.
  • Fake authority: citations that sound real but are not verifiable.

Those problems do not just “waste time”. They can change outcomes. They can harden judicial perceptions. They can reduce a litigant’s credibility. And in safeguarding contexts, credibility matters.

6. But here is the opportunity: structured AI use can help the court

Now for the other side of the ledger, which the “vibe lawyer” framing often ignores.

Used properly, AI can reduce noise and increase clarity. It can help an overwhelmed litigant present their case in a way that judges can actually work with. It can support:

  • Chronology building (dates, events, orders, and key turning points).
  • Document organisation (indexes, exhibit lists, consistent naming).
  • Issue framing (what is the dispute actually about?).
  • Drafting clarity (headings, structure, neutral tone).
  • Summarising communications (WhatsApp/SMS/email) into court-usable bundles.

Those are not cosmetic benefits. They are directly aligned with what the court needs: efficient case management, focused evidence, and parties who can articulate relevant issues.

In other words: the best version of AI in litigation is not “AI replaces lawyers.” It is “AI helps people present usable material so the court can do its job.” That is the access-to-justice promise.

7. The non-negotiable: verification

The line between empowerment and chaos is verification.

Professional regulators have been clear that AI cannot be trusted to judge its own accuracy. The SRA has warned about hallucinations and the risk of plausible but incorrect outputs, including non-existent cases.

For court users, this translates into a simple operating standard:

  • If you cite it, you must be able to prove it exists (case name, neutral citation, and a reliable source).
  • If you quote a statute, check it on legislation.gov.uk (not in an AI chat box).
  • If you refer to rules or practice directions, check the official source (FPR/CPR/PD pages).
  • If it sounds “too perfect”, slow down — AI is very good at confidence, not always good at truth.

After the June 2025 “fake authorities” judgment, the direction of travel is obvious: courts will increasingly treat fabricated or careless citations as serious misconduct where professionals are involved, and as a significant credibility issue where litigants are involved.

8. A real-world cautionary tale: Mata v Avianca

Even outside the UK, courts have reacted strongly when lawyers filed AI-generated fake authorities. The widely-cited US case Mata v Avianca resulted in sanctions after fabricated case citations were submitted. It is not “UK law”, but it is a stark illustration of what happens when verification collapses.

Why mention it here? Because the underlying professional lesson travels: courts do not have time for invented law, and they should not have to spend scarce judicial time correcting avoidable errors.

9. What this means for litigants in person

1) Use AI to organise, not to “source” law. AI is excellent for structure, headings, summaries, chronologies and drafting tone. It is unreliable as a sole source of legal authority.

2) Keep it child-focused (family cases). Remove insult, speculation and “character assassination”. Judges need facts, evidence, and impact on the child.

3) Treat every AI output as a draft. You are responsible for what you file. Read it. Edit it. Make sure it matches your evidence.

4) Verify every citation. If you cannot open the case or locate it on a reputable database, do not rely on it.

5) Don’t upload confidential material into public AI tools. Safeguarding details and private communications should be handled carefully. Follow the Judicial Office warnings on confidentiality.

6) Aim for shorter, clearer documents. Judges do not reward length. They reward relevance. A focused 6–10 pages often lands better than a sprawling 30.

7) If you’re stuck, get human oversight. A short consultation to sanity-check structure, compliance with directions, and relevance can prevent months of damage.

10. What this means for the justice system: guardrails, not barriers

If the system responds to AI by “closing ranks” and shaming litigants, it will fail. People will still use AI — but they will do so in worse, more chaotic ways. A better approach is to develop common standards that increase quality and reduce burden.

In practice, that means three things.

A) Judicial clarity

Courts and judiciary leadership can help by setting clear expectations about what is acceptable in written submissions — particularly around citation verification and disclosure of AI use where relevant. The Judicial Office guidance is already laying the foundation.

B) Procedural literacy for court users

Most problems I see are not “bad people”. They are overwhelmed people. The system needs short, accessible, official pathways explaining (for example) what a directions hearing is, how to comply with an order, how to prepare a bundle, and how to draft a witness statement that is relevant rather than reactive.

C) Responsible support models

This is where the best “shake up” lies: hybrid support that uses AI to accelerate organisation and drafting, with human oversight to ensure compliance, accuracy, relevance and tone. That model benefits everyone: the litigant, the other party, and the court.

11. A note on professional standards (and why it still matters to LiPs)

When professionals file inaccurate material, the consequences can be severe, including regulatory referral. That was made explicit in the June 2025 judgment dealing with false citations.

LiPs are not held to the same professional code — but the practical consequences can still be harsh: credibility erosion, judicial impatience, adverse costs risks in some contexts, and (most importantly) a judge simply not trusting what they are reading. In family court, loss of credibility can be profoundly damaging.

This is why “AI literacy” is not an academic luxury. It is a procedural survival skill.

12. Conclusion: the future is responsible AI, not no AI

AI is in the courtroom ecosystem now. The judiciary is preparing for it. Regulators are warning about it. The profession is adapting to it. The question is not whether litigants in person will use AI — they already are.

The question is whether we will build a culture of responsible use.

Used recklessly, AI produces noise: invented authorities, misunderstood legal tests, and sprawling submissions that burden the court. Used properly, it can produce clarity: structured chronologies, coherent statements, and focused issues that help the court get to the real substance of the case.

If we care about access to justice, we cannot treat litigants in person as an administrative irritation. We should treat them as court users with rights and responsibilities — and we should equip them with tools and guardrails that allow them to participate meaningfully.

That is the “AI revolution” that matters: not chaos, but capability.


If you want structured, responsible help using AI to prepare court documents (without risking accuracy or credibility), you can book a short consultation below:


Regulatory & Editorial Notice (JSH Law): This article is published for general information and public-interest commentary only. It does not constitute legal advice and should not be relied upon as such. Where this article refers to third-party sources (including court judgments, guidance, regulator publications, media reporting, or external organisations), those references are provided for context and convenience; JSH Law does not control or endorse third-party content and cannot guarantee its accuracy, completeness, or continued availability. Court users should always consult the original primary sources (including the Family Procedure Rules, Practice Directions, and judgments) and obtain appropriate professional advice for their specific circumstances.

Remote Hearings in Family Court (UK): What to Expect and How to Prepare

Remote hearings have become a permanent feature of the Family Court in England and Wales, not merely a temporary fix from the pandemic. Cases are now routinely listed by telephone or video link using secure platforms such as the Cloud Video Platform (CVP) or newer services introduced by HMCTS, and decisions about the mode of hearing are made by the judge based on fairness and access to justice. Remote hearings follow many of the same rules as in-person hearings, but require additional preparation, technology readiness and courtroom etiquette. Understanding how they work and how to prepare is essential for litigants in person.

Key Takeaways for Litigants in Person

  • Remote hearings are now a permanent feature of Family Court in England and Wales.
  • They follow the same legal rules as in-person hearings — but require additional technical preparation.
  • You must treat a remote hearing with the same formality and respect as attending court physically.
  • Preparation includes technology checks, privacy safeguards, document readiness and clear communication structure.
  • Poor technical preparation can undermine credibility — evidence readiness still matters.
  • Structure, calm presentation and procedural awareness remain critical in a remote setting.

Introduction: Remote Hearings Are Here to Stay

Remote hearings were accelerated by the COVID-19 pandemic — but they are no longer a temporary measure. The Family Court now routinely lists hearings by telephone or video link where appropriate. Judges determine the mode of hearing based on fairness, practicality and the interests of justice.

For litigants in person, remote hearings can feel both convenient and disorientating. You may be attending from your home, yet participating in a formal judicial process. The setting may feel informal — but the legal consequences are not.

This guide explains how remote hearings work in Family Court, what technology is used, what is expected of you, and how to prepare strategically and professionally.


Official Overview: What Remote Hearings Look Like

HMCTS and the judiciary have published overview videos showing how remote court hearings operate in practice. Watching one before your hearing gives useful visual context for what to expect when joining by video.


What Platform Is Used?

Most Family Court remote hearings use:

  • Cloud Video Platform (CVP)
  • Microsoft Teams (in some courts)
  • Telephone conferencing systems

The joining link is usually sent by email in advance. It is your responsibility to check it works.

Guidance from HMCTS is available on GOV.UK: What to Expect When Joining a Telephone or Video Hearing.


Are Remote Hearings Legally Different?

No.

The same legal framework applies:

  • Family Procedure Rules 2010
  • Practice Directions (including PD12J and PD27A where relevant)
  • The Children Act 1989 welfare principle (in children cases)

The judge’s powers and expectations remain unchanged.

The only difference is the format of attendance.


When Are Remote Hearings Typically Used?

  • Case Management Hearings
  • Directions Hearings
  • FHDRA hearings
  • Short interim applications
  • Procedural reviews

Fact-finding hearings and final hearings may sometimes still take place remotely, but judges consider complexity, evidence type, and fairness.


Advantages of Remote Hearings

  • No travel costs
  • Reduced time off work
  • Increased listing flexibility
  • Potentially less intimidating environment

Risks of Remote Hearings

  • Technical failures
  • Connectivity interruptions
  • Reduced ability to read courtroom dynamics
  • Distractions in home environments
  • Risk of informal tone creeping in

Preparation neutralises these risks.


Technical Preparation Checklist

Before the Hearing:

  • Test your internet connection.
  • Use a laptop where possible (not just a phone).
  • Charge devices fully.
  • Have a backup device ready.
  • Ensure camera and microphone function.
  • Download required apps in advance.
  • Join the hearing 10–15 minutes early.

Environment Preparation:

  • Quiet room.
  • Neutral background.
  • No interruptions.
  • Phones on silent.
  • Children supervised elsewhere.

Remote Hearing Etiquette

Even though you are at home, you are in court.

  • Dress professionally.
  • Address the judge correctly (District Judges as “Judge”, Circuit Judges as “Your Honour”, magistrates as “Sir” or “Madam”).
  • Mute when not speaking.
  • Do not interrupt.
  • Do not record the hearing without permission.

Recording without permission may amount to contempt.


Document Readiness in a Remote Setting

Remote hearings require heightened document awareness.

  • Have the bundle open on screen or printed.
  • Know page numbers in advance.
  • Use bookmarks in PDFs where possible.
  • Prepare a short position statement.
  • Prepare a list of key page references.

In remote hearings, clarity replaces physical presence.


Communication Strategy

When speaking remotely:

  • Speak slowly.
  • Pause before responding.
  • Use page references clearly (“Bundle page 124, paragraph 6”).
  • Avoid talking over others.
  • Keep submissions structured.

Remote platforms amplify confusion. Structure prevents it.


Safeguarding and Privacy

Remote hearings remain confidential.

  • No one else should be in the room unless permitted.
  • No recording or streaming.
  • Ensure no background conversations.

Family proceedings are private.


If Technology Fails

  • Rejoin immediately.
  • Email the court promptly.
  • Have a backup phone number ready.

Judges understand occasional technical issues — but preparation reduces disruption.


Remote Hearings and Credibility

Judges assess credibility even remotely.

  • Eye contact with the camera.
  • Composed tone.
  • Professional setting.
  • Structured responses.

Remote does not mean relaxed standards.


Working With a McKenzie Friend in a Remote Hearing

If you are assisted by a McKenzie friend:

  • Clarify how you will communicate privately (e.g., WhatsApp messages during the hearing).
  • Agree speaking boundaries in advance.
  • Ensure the court knows they are present.

Remote coordination requires planning.


After the Hearing

  • Write down key points immediately.
  • Review the order carefully once received.
  • Put all deadlines in your calendar.
  • Prepare next steps promptly.

Common Mistakes to Avoid

  • Joining late.
  • Unstable internet.
  • Interrupting.
  • Appearing casual.
  • Being unprepared with documents.
  • Emotional over-speaking.

Remote hearings reward disciplined preparation.


Is a Remote Hearing Fair?

The court must ensure fairness. If you believe remote format prejudices your ability to present your case (e.g., complex evidence or vulnerability concerns), you may raise this with the court in advance.

The judge decides.


Why Remote Hearing Competence Matters

Remote hearings compress time. Judges expect focused submissions.

Disorganisation becomes more visible in digital format.

Technical fluency is now part of courtroom competence.


How JSH Law Supports Remote Hearing Preparation

  • Pre-hearing checklist review.
  • Technology readiness planning.
  • Structured speaking notes.
  • Bundle navigation strategy.
  • Safeguarding awareness integration.

Preparation reduces anxiety.


Book a 15-Minute Consultation



Regulatory & Editorial Notice

This article is provided for general information only and does not constitute legal advice. Each case depends on its own facts and procedural context.

JSH Law provides litigation support services to litigants in person. JSH Law is not a firm of solicitors and does not undertake reserved legal activities.