The “Vibe Lawyer” Moment: AI, Litigants in Person, and the Coming Shockwave for the Family Courts

Litigants in person are being called “vibe lawyers” for using AI to draft complaints and court documents. But behind the headlines lies a harder truth: people are turning to artificial intelligence because they cannot afford representation in an increasingly complex and overstretched justice system. Judges are right to be concerned about fake citations and procedural errors. Yet dismissing AI use outright misses the deeper issue — access to justice has been under strain for years, and technology is now filling the gap.

By Jessica Susan Hill | JSH Law

Key Takeaways (Read This First)

  • AI is already changing litigation behaviour — the judiciary is explicitly preparing for a surge in AI-generated claims across the civil courts, family courts and tribunals.
  • The risk isn’t “AI” — it’s unverified AI: fabricated authorities and confidently wrong submissions waste court time and damage credibility.
  • LiPs are not “wreaking havoc” for fun. Many are doing what they must to participate in a system they cannot afford to navigate with representation.
  • The solution is guardrails, not barriers: verification standards, procedural literacy, and responsible workflows that help the court as well as the litigant.
  • Family proceedings are high-stakes. Used properly, AI can improve clarity and evidence organisation; used badly, it can derail safeguarding analysis and case management.

1. Why this matters now

“Vibe lawyers” is a catchy label, but it risks obscuring a far more serious reality: litigants in person are using AI tools to draft complaints, defences, witness statements and skeleton arguments at scale — and the courts are already feeling the impact. The phenomenon is now so visible that Sir Geoffrey Vos (Master of the Rolls, Head of Civil Justice) has explicitly warned that the judiciary must prepare for an “AI revolution” that may vastly increase the number of civil, family and tribunal claims the justice system must manage. His speech is worth reading in full.

Let’s be direct: the justice system in England and Wales is already stretched. Many court users already experience the process as opaque, intimidating and unaffordable. That is not a personal failing of litigants — it is a structural reality. AI is entering a pressure-cooker and magnifying what was already there: information asymmetry, procedural complexity, delay and the gulf between a represented party and an unrepresented one.

So, yes — judges and practitioners are right to be concerned about inaccurate AI-generated material clogging lists and adding burden to judges who are already firefighting. But it is also true that, in the medium term, AI could become one of the most significant access-to-justice tools we have ever seen. Both truths can exist at once.

2. The judiciary is not guessing — it is responding to lived reality

We are past the point of theoretical debate. The judiciary has been issuing speeches and guidance precisely because AI use is now operationally relevant. Beyond speeches, the Judicial Office has published updated guidance addressing risks including confidentiality, bias and “hallucinations” — where AI produces plausible but incorrect information. The October 2025 judicial guidance explicitly flags the danger of fictitious citations and misleading legal content.

Sir Geoffrey Vos has also repeatedly articulated a simple “core rules” approach: understand what the tool is doing, do not upload private/confidential data into public tools, and check the output before using it for any purpose. He set that out again in October 2025.

This is not anti-technology. It is the judiciary doing what it should do: protecting the integrity of the process while acknowledging that new tools are changing behaviour.

3. The real problem: “confidently wrong” submissions

Generative AI tools can draft impressive text quickly. But they do not “know” the law. They predict language. That difference matters profoundly in litigation. A well-written paragraph that contains an invented case, a misquoted statute or an inaccurate procedural route is not merely unhelpful — it can actively undermine a party’s credibility and force the court to spend additional time cleaning up the mess.

The legal profession has already seen what happens when verification fails. In June 2025, the Divisional Court (Dame Victoria Sharp P and Johnson J) dealt with the now widely reported “fake authorities” problem in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank, where false citations and inaccurate quotations were placed before the court, with suspected or admitted use of AI tools without proper checks. The judgment is publicly available and is required reading for anyone tempted to treat AI output as “good enough”.

Importantly, that judgment is aimed at lawyers — because professionals are held to professional standards. But the underlying point applies to everyone: accuracy is non-negotiable in court work. You can be passionate, traumatised, exhausted, and still required to file documents that are factually and legally sound.

4. Why litigants in person are using AI (and why the “money pit” narrative is wrong)

Many litigants in person feel they are treated as an administrative inconvenience — or worse, as a “cost centre” rather than a rights-holder. I understand why that perception forms. The system can be brutal: forms, deadlines, practice directions, directions hearings, orders you must interpret and comply with under stress. In private law children proceedings, you may be trying to protect a child, manage safeguarding concerns, and preserve your own mental stability while preparing documents that lawyers train for years to produce.

For a growing number of people, AI has become the first accessible “translator” of legal language. It can explain terminology, propose a structure for a statement, generate headings for a skeleton argument, and help a person who feels overwhelmed take a first step. That is why it feels like a shake-up. It is not because LiPs are trying to harm the system. It is because they are trying to participate in it.

And here is the hard truth: if access to representation continues to shrink in practice — whether by cost, availability, or scope — more people will use AI. That is not something a press headline can reverse. It is a reality the system must incorporate.

5. Family court is the pressure point

Family proceedings are where AI misuse can become most dangerous, because the stakes are often immediate and human: the child’s living arrangements, contact, safeguarding, allegations of domestic abuse, coercive control, substance misuse, mental health, relocation, schooling — the list is endless.

Private law children cases are ultimately governed by the welfare principle in the Children Act 1989, section 1. The court’s job is not to reward the best writer. It is to determine what best meets the child’s welfare needs. But poor drafting can still distort the court’s understanding of what matters.

And family procedure is its own ecosystem. The Family Procedure Rules and associated Practice Directions are not optional reading; they are the architecture of how your case moves through the system. PD12J (domestic abuse and harm) is particularly critical where abuse is alleged, because it shapes fact-finding decisions, safeguarding analysis and protective measures.

Where AI is used badly in family court, I commonly see the same patterns (and judges see them too):

  • Misstating legal tests (e.g., confusing civil and criminal standards, or quoting the wrong threshold framework).
  • Over-inclusion: 30-page narratives where only a small percentage is evidentially relevant.
  • Inflammatory language that escalates conflict rather than centring the child.
  • Procedural fantasy: “applications” and “orders” that do not exist or are not procedurally available.
  • Fake authority: citations that sound real but are not verifiable.

Those problems do not just “waste time”. They can change outcomes. They can harden judicial perceptions. They can reduce a litigant’s credibility. And in safeguarding contexts, credibility matters.

6. But here is the opportunity: structured AI use can help the court

Now for the other side of the ledger, which the “vibe lawyer” framing often ignores.

Used properly, AI can reduce noise and increase clarity. It can help an overwhelmed litigant present their case in a way that judges can actually work with. It can support:

  • Chronology building (dates, events, orders, and key turning points).
  • Document organisation (indexes, exhibit lists, consistent naming).
  • Issue framing (what is the dispute actually about?).
  • Drafting clarity (headings, structure, neutral tone).
  • Summarising communications (WhatsApp/SMS/email) into court-usable bundles.

Those are not cosmetic benefits. They are directly aligned with what the court needs: efficient case management, focused evidence, and parties who can articulate relevant issues.

In other words: the best version of AI in litigation is not “AI replaces lawyers.” It is “AI helps people present usable material so the court can do its job.” That is the access-to-justice promise.

7. The non-negotiable: verification

The line between empowerment and chaos is verification.

Professional regulators have been clear that AI cannot be trusted to judge its own accuracy. The SRA has warned about hallucinations and the risk of plausible but incorrect outputs, including non-existent cases.

For court users, this translates into a simple operating standard:

  • If you cite it, you must be able to prove it exists (case name, neutral citation, and a reliable source).
  • If you quote a statute, check it on legislation.gov.uk (not in an AI chat box).
  • If you refer to rules or practice directions, check the official source (FPR/CPR/PD pages).
  • If it sounds “too perfect”, slow down — AI is very good at confidence, not always good at truth.

After the June 2025 “fake authorities” judgment, the direction of travel is obvious: courts will increasingly treat fabricated or careless citations as serious misconduct where professionals are involved, and as a significant credibility issue where litigants are involved.

8. A real-world cautionary tale: Mata v Avianca

Even outside the UK, courts have reacted strongly when lawyers filed AI-generated fake authorities. The widely-cited US case Mata v Avianca resulted in sanctions after fabricated case citations were submitted. It is not “UK law”, but it is a stark illustration of what happens when verification collapses.

Why mention it here? Because the underlying professional lesson travels: courts do not have time for invented law, and they should not have to spend scarce judicial time correcting avoidable errors.

9. What this means for litigants in person

1) Use AI to organise, not to “source” law. AI is excellent for structure, headings, summaries, chronologies and drafting tone. It is unreliable as a sole source of legal authority.

2) Keep it child-focused (family cases). Remove insult, speculation and “character assassination”. Judges need facts, evidence, and impact on the child.

3) Treat every AI output as a draft. You are responsible for what you file. Read it. Edit it. Make sure it matches your evidence.

4) Verify every citation. If you cannot open the case or locate it on a reputable database, do not rely on it.

5) Don’t upload confidential material into public AI tools. Safeguarding details and private communications should be handled carefully. Follow the Judicial Office warnings on confidentiality.

6) Aim for shorter, clearer documents. Judges do not reward length. They reward relevance. A focused 6–10 pages often lands better than a sprawling 30.

7) If you’re stuck, get human oversight. A short consultation to sanity-check structure, compliance with directions, and relevance can prevent months of damage.

10. What this means for the justice system: guardrails, not barriers

If the system responds to AI by “closing ranks” and shaming litigants, it will fail. People will still use AI — but they will do so in worse, more chaotic ways. A better approach is to develop common standards that increase quality and reduce burden.

In practice, that means three things.

A) Judicial clarity

Courts and judiciary leadership can help by setting clear expectations about what is acceptable in written submissions — particularly around citation verification and disclosure of AI use where relevant. The Judicial Office guidance is already laying the foundation here.

B) Procedural literacy for court users

Most of the problems I see are not caused by “bad people”; they are caused by overwhelmed people. The system needs short, accessible, official pathways explaining (for example) what a directions hearing is, how to comply with an order, how to prepare a bundle, and how to draft a witness statement that is relevant rather than reactive.

C) Responsible support models

This is where the best “shake up” lies: hybrid support that uses AI to accelerate organisation and drafting, with human oversight to ensure compliance, accuracy, relevance and tone. That model benefits everyone: the litigant, the other party, and the court.

11. A note on professional standards (and why it still matters to LiPs)

When professionals file inaccurate material, the consequences can be severe, including regulatory referral. That was made explicit in the June 2025 judgment dealing with false citations.

LiPs are not held to the same professional code — but the practical consequences can still be harsh: credibility erosion, judicial impatience, adverse costs risks in some contexts, and (most importantly) a judge simply not trusting what they are reading. In family court, loss of credibility can be profoundly damaging.

This is why “AI literacy” is not an academic luxury. It is a procedural survival skill.

12. Conclusion: the future is responsible AI, not no AI

AI is in the courtroom ecosystem now. The judiciary is preparing for it. Regulators are warning about it. The profession is adapting to it. The question is not whether litigants in person will use AI — they already are.

The question is whether we will build a culture of responsible use.

Used recklessly, AI produces noise: invented authorities, misunderstood legal tests, and sprawling submissions that burden the court. Used properly, it can produce clarity: structured chronologies, coherent statements, and focused issues that help the court get to the real substance of the case.

If we care about access to justice, we cannot treat litigants in person as an administrative irritation. We should treat them as court users with rights and responsibilities — and we should equip them with tools and guardrails that allow them to participate meaningfully.

That is the “AI revolution” that matters: not chaos, but capability.


Useful Official Resources

If you want structured, responsible help using AI to prepare court documents (without risking accuracy or credibility), you can book a short consultation below:


Regulatory & Editorial Notice (JSH Law): This article is published for general information and public-interest commentary only. It does not constitute legal advice and should not be relied upon as such. Where this article refers to third-party sources (including court judgments, guidance, regulator publications, media reporting, or external organisations), those references are provided for context and convenience; JSH Law does not control or endorse third-party content and cannot guarantee its accuracy, completeness, or continued availability. Court users should always consult the original primary sources (including the Family Procedure Rules, Practice Directions, and judgments) and obtain appropriate professional advice for their specific circumstances.

When Court Data Disappears: Why Transparency in Family Courts Matters More Than Ever

In February 2026, the Ministry of Justice ordered the removal of a major archive of court listing data, citing data protection concerns and alleged misuse involving AI. On the surface, it looked like a dispute about compliance. In reality, it raises a far more serious question: what happens when the justice system becomes less visible? For families navigating private law disputes, safeguarding allegations and prolonged delay, transparency is not a political slogan — it is the difference between understanding how the system works and feeling powerless within it.

Key points (read this first)

  • “Open justice” is not a vibe. It is a constitutional principle: the public must be able to see justice being done — in practice, not just in theory.
  • The Courtsdesk database mattered because it made magistrates’ court activity discoverable at scale — across regions, trends and time — in a way ordinary listings often do not.
  • The MoJ/HMCTS position has centred on data protection and alleged unauthorised sharing with an AI third party (including potentially sensitive identifiers). That is a serious issue — but it doesn’t automatically justify a “delete the archive” outcome.
  • There is now a live policy tension: privacy compliance vs public scrutiny. The correct answer is not to pick one. It is to design lawful access with safeguards.
  • AI changes the stakes. It can expose systemic court failures (delays, inconsistency, outcomes), but it can also amplify privacy harm if governance is weak.
  • What to watch next: licensing frameworks, official listing portals, retention/archiving rules, and whether any independent oversight is built into the “new” regime.

If you only have 60 seconds: the question isn’t “should court data exist?” — it’s “who controls access, under what rules, with what accountability?”

When Court Data Disappears: Courtsdesk, the MoJ Deletion Order, and What “Open Justice” Means in the AI Age

By Jessica Susan Hill | Legal Consultant & McKenzie Friend | JSH Law Ltd

In February 2026, a story surfaced that should make every lawyer, journalist and court-user sit up: the Ministry of Justice (via HMCTS) instructed a private platform, Courtsdesk, to delete what was widely described as the UK’s largest archive of court reporting data. The dispute was framed as a data protection breach involving AI. Critics called it a major blow to open justice.

This isn’t a niche media row. It’s a governance problem with a constitutional wrapper. Because once court information becomes searchable at scale, it becomes auditable. And once the system becomes auditable, it becomes accountable.

1) What happened — and why the link you saw may have “stopped working”

If you clicked a share link to a paywalled newspaper, you’ll often get a broken experience (or a login wall). But the underlying issue is very real: in early-to-mid February 2026, multiple sources reported that the MoJ/HMCTS instructed Courtsdesk to remove court listing/archival data from its platform. The matter was then debated in Parliament, with ministers stating that action was taken because of data protection concerns and alleged unauthorised sharing with an AI company.

In the House of Commons debate on 10 February 2026, the government position was put bluntly: HMCTS stopped sharing data and instructed the company to remove data from its digital platform because the government considered personal data had been put at risk and/or shared in breach of agreement. (Hansard: “Court Reporting Data”). Read the Commons debate (Hansard).

The House of Lords revisited similar themes on 11 February 2026, referencing alleged sharing of “private, personal and legally sensitive information” with a third-party AI company, including potentially addresses and dates of birth of defendants and victims. Read the Lords debate (Hansard).

Meanwhile, journalist bodies and open justice advocates argued that the deletion demand would reduce practical visibility of magistrates’ courts — the engine room of criminal justice — and undermine reporting capacity nationwide. NUJ response (11 Feb 2026).

Subsequent coverage indicated that the government later paused the deletion/purge approach and explored alternative licensing or arrangements, following significant public pressure and campaigning (including within national media). One example: The Times: MoJ halts purge of court archive (published Feb 2026). (Paywalled, but relevant for context and sequence.)

2) What is Courtsdesk — and why journalists cared

Courtsdesk is typically described as a platform that made it easier for journalists to discover and track magistrates’ court hearings — and to keep a searchable archive of what had been listed. The word “archive” matters. Without it, reporting becomes a daily scramble: you can see “today’s” list (sometimes), but you cannot easily analyse what happened across a month, a year, or a decade, and you cannot robustly check what patterns repeat across courts.

That changes the reporting model. Instead of “we got a tip and attended a hearing”, journalists can ask structured questions like:

  • Which courts are repeatedly listing the same offence type and outcome?
  • Are there geographical disparities in sentencing outcomes (controlling for offence and prior record)?
  • Is a particular safeguarding issue rising (domestic abuse, coercive control, breaches, stalking)?
  • Are certain hearings routinely not listed, listed late, or listed inaccurately?
  • Are “open” hearings being effectively closed by practical invisibility?

In short: a discoverable, searchable dataset turns open justice into something measurable. That is precisely why both open justice advocates and public interest reporters reacted so strongly.

For a short overview of the controversy as reported at the time: Legal Cheek (11 Feb 2026). For a more analytical legal-media perspective: Wiggin LLP commentary (16 Feb 2026).

3) The MoJ/HMCTS case: “data protection” and alleged sharing with AI

The government’s public position, as reflected in parliamentary statements, has been that data protection responsibilities were engaged. The allegation was not merely that the data existed, but that data was used or shared in a way that was not authorised by the relevant agreement — and that the information at issue could include sensitive personal identifiers.

In the Commons debate, MPs referenced the passing of information to an AI company, including addresses and dates of birth. You can read the relevant passages directly in Hansard: Court Reporting Data (Commons, 10 Feb 2026). The Lords debate similarly framed the core concern as sharing private/personal legally sensitive information with a third-party AI company: Court Reporting Data (Lords, 11 Feb 2026).

Let’s be clear: if victim or defendant identifiers were exposed or processed without a lawful basis, proper security, or appropriate contractual control, that is not a minor technicality. UK GDPR compliance is not optional — particularly where data could create direct risk (victim location, stalking risk, retaliation, intimidation, vigilante harm).

But there is a second question — and this is where policy and constitutional principles collide: even if a breach occurred, does the proportionate remedy have to be “delete the archive”? Or is the correct remedy:

  • Stop the unauthorised processing,
  • Investigate,
  • Implement governance, redaction, licensing and audit controls,
  • And preserve the public-interest value of the dataset?

In other regulated sectors, “burn the library” is rarely considered an intelligent response to a governance failure. You fix governance. You don’t erase institutional memory.

4) What “open justice” actually requires (and what it doesn’t)

“Open justice” is often described as a constitutional principle in common law: justice must be administered in public, with reporting permitted, because scrutiny is a safeguard against arbitrariness and abuse. It supports legitimacy and public confidence.

But open justice is not absolute. Courts can restrict reporting, anonymise parties, hold parts of hearings in private, or impose reporting restrictions where necessary and proportionate — especially to protect children, victims, national security, or the integrity of proceedings.

Here’s the practical point: open justice collapses when information is technically “available” but realistically undiscoverable. If court lists are incomplete, delayed, inaccurate, scattered, or accessible only through relationships and workarounds, then public scrutiny becomes selective and fragile.

A searchable archive changes the baseline. It doesn’t guarantee perfect scrutiny, but it makes scrutiny possible at scale.

The NUJ response captures the concern in direct terms: the state must take data protection seriously, but journalists are worried about the effect on their ability to do their job. NUJ: deletion order response.

5) The real issue: discoverability, not secrecy

Most people misunderstand how court reporting works. They think journalists can simply “look up” what is happening in court.

In practice, magistrates’ courts are high-volume. Hearings move. Lists change. Data may be published late, inconsistently, or in formats that are difficult to search. Court staff are under pressure. Press offices (where they exist) are stretched. The result is that what is formally “public” can become practically opaque.

So when people say “this undermines open justice,” they may not mean “the government is hiding a single case.” They mean: remove the infrastructure of discoverability and you reduce systemic scrutiny.

The wider concern is that once the system is not audited at scale, dysfunctional patterns persist:

  • Overlisting and adjournment churn;
  • Chronic delay;
  • Inconsistent listing practices;
  • Variable use of reporting restrictions;
  • Localised cultures that drift without challenge.

This is where AI becomes relevant — not as hype, but as a tool. AI is exceptionally good at extracting patterns from messy, fragmented data. And patterns are exactly what the justice system needs to be forced to confront.

6) AI: the uncomfortable accelerator of accountability

Here is the uncomfortable truth: AI makes “open justice” more powerful, because it can transform raw listings and outcomes into insight:

  • Where are outcomes diverging without explanation?
  • Which courts are systematically underperforming on timeliness?
  • Which offence types are rising or falling?
  • Do bail decisions correlate with geography in ways that look unjustified?
  • Are certain safeguarding concerns being deprioritised?

For the public, this can mean better scrutiny and informed reform. For institutions, it can feel like a loss of narrative control.

But AI also increases privacy risk. Aggregation is a form of power: data that is safe in one context can become dangerous in another when combined, enriched, or made searchable. That is why governance matters.

The question is not “AI or no AI.” It is: who is allowed to process court data with AI, under what licence, with what redaction, with what audit trail, and with what sanctions for misuse?

7) Data protection and open justice can coexist — if you design for both

If there was an unauthorised transfer of personal data to a third-party AI provider, that needs to be addressed. Strongly. But the correct fix is not necessarily deletion. The correct fix is a governance framework that takes seriously both:

  1. Lawful processing and security (UK GDPR; DPA 2018; contractual controls; access logs; DPIAs); and
  2. Open justice functions (discoverability; auditability; press access; public interest research).

A mature framework would include:

(A) Role-based access

Not everyone needs the same level of detail. A press-accredited journalist may need more than the general public. An academic researcher may need a structured dataset but not identifiers. A safer model is tiered access with clear rules.

(B) Default minimisation and redaction

Listings can be published in a way that is still meaningful but reduces harm: names may be necessary for open justice in many cases, but addresses and dates of birth generally aren’t. A “privacy by design” listing format is possible.

(C) Contractual control over processors

If AI tools are used, the relationship between controller and processor must be contractually controlled, audited, and limited. “Testing” is still processing. “Internal development” is still processing.

(D) Audit logs and sanctions

If a platform is given access to sensitive data, there must be a reliable audit trail and enforceable consequences for misuse.

This is the kind of approach the state should model. It’s what we demand of the private sector. The justice system should not be a governance laggard.

8) “Just use official channels” is not a sufficient answer

One argument raised in public discussion is that journalists can still access listings through official HMCTS channels, so the deletion of a private archive is not fatal.

Here’s the hard reality: official availability does not necessarily equal practical usability. The difference between:

  • a fragmented set of daily lists, and
  • a searchable, longitudinal archive

is the difference between “seeing a hearing” and “auditing a system”.

It’s the audit function that scares people — and it’s the audit function that reform needs.

For contemporaneous legal-sector analysis and a timeline-style overview, see: Wiggin LLP commentary.

9) The proportionality question: why “delete it” feels extreme

When government acts, it must act proportionately — especially when its actions collide with constitutional principles.

If the problem was a specific breach, a proportionate response normally looks like:

  • Stop the unlawful processing immediately;
  • Preserve evidence;
  • Investigate scope and impact;
  • Notify where legally required;
  • Fix governance;
  • Implement redaction and access controls;
  • Resume service under a compliant licence.

Deleting a historic archive can be justified in certain cases — for example, if the archive itself is irredeemably unsafe and cannot be lawfully held. But that is a high threshold. And if that threshold is met, the next question is: why was the data shared in that form in the first place, and why was it not already governed appropriately?

Open justice is a public asset. When you destroy an archive that underpins scrutiny, you don’t merely “solve” a compliance problem — you erase a public accountability mechanism.

10) What this means for litigants, victims and the public

This is not only about journalists. It touches:

Victims and vulnerable witnesses

Privacy matters. Safety matters. If addresses/DoBs are handled recklessly, it can cause real-world harm. A governance regime must centre safeguarding and risk. The state is right to be strict about that.

Defendants

Defendants have rights too. Public identification can be lawful and appropriate in open court, but bulk data aggregation can create long-tail harm (employment, housing, vigilantism), particularly where cases end in acquittal or discontinuance. This is why minimisation and careful retention rules matter.

The public

The public interest in open justice is not abstract. It includes the ability to scrutinise how domestic abuse is treated, how repeat offenders are sentenced, how grooming cases are prosecuted, and whether systemic failures are being ignored.

The debate is often framed as “privacy vs transparency.” A better framing is: “privacy and transparency with engineering-grade governance.”

11) A practical blueprint for a lawful court data ecosystem

If we want open justice that survives the AI era, we need to stop improvising and start designing. Here is a blueprint that would satisfy most of the legitimate concerns on all sides:

  1. Define a canonical “public listing dataset” with minimised fields (no addresses; no full DoB; protect victims by default where appropriate).
  2. Publish in a consistent, machine-readable format so that “discoverability” is not dependent on private scraping or informal relationships.
  3. Implement a press and research licence with tiered access, clear contractual controls, audit logs, and enforcement.
  4. Create a secure research environment (think “data safe haven”) where higher-sensitivity data can be used for public-interest research under supervision.
  5. Mandate DPIAs for any new processing at scale, including any AI model training or automated analytics.
  6. Establish independent oversight: an external advisory panel including press, victims’ advocates, privacy experts and court users.

If you work in legal ops, you’ll recognise this: it is the same control architecture we use for health data, financial data, and regulated client data. The justice system deserves no less.
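The minimisation principle in step 1 can be sketched as a simple allow-list transform. This is a hypothetical illustration only: the field names, schema and function are invented for the purpose of showing the "deny by default" pattern, and are not drawn from any real HMCTS or court dataset.

```python
def minimise_listing(raw: dict) -> dict:
    """Produce a public-safe listing record from a raw internal record.

    Illustrative sketch of the 'canonical public listing dataset' idea:
    - drops addresses and full dates of birth entirely;
    - anonymises any party flagged as a victim by default;
    - keeps only an allow-listed set of fields, so newly added internal
      fields are never accidentally published ('deny by default').
    """
    allowed = {"case_number", "court", "hearing_date", "case_type"}
    public = {k: v for k, v in raw.items() if k in allowed}

    parties = []
    for p in raw.get("parties", []):
        # Victims are anonymised by default; everyone else keeps their name.
        name = "ANONYMISED" if p.get("is_victim") else p.get("name")
        parties.append({"role": p.get("role"), "name": name})
    public["parties"] = parties
    return public
```

The design point is the allow-list: a deny-by-default transform fails safe, because a field the publisher forgot to consider (an address, a full DoB, internal case notes) is suppressed rather than leaked.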

12) What you can do if you care about this

  • Read the parliamentary record and compare the stated rationale with the real-world impact: Commons Hansard (10 Feb 2026) and Lords Hansard (11 Feb 2026).
  • Track journalist-body positions (NUJ is a good start): NUJ statement.
  • Ask the right question of policymakers: “What is the new lawful access model — and who is responsible for ensuring discoverability in practice?”
  • Watch for licensing/market engagement notices and consultation opportunities. (Legal commentary sites often summarise these quickly.)
  • If you are a court user or practitioner, keep records. Transparency is partly built from bottom-up documentation — hearing notices, listings, orders, reasons, and procedural history.

Because here is the punchline: if the system cannot be seen, it cannot be improved. And if it cannot be improved, it cannot be trusted.


Regulatory & Editorial Notice (JSH Law Ltd)

This article is published for general information and public-interest commentary only. It does not constitute legal advice and should not be relied upon as such. JSH Law Ltd is not a firm of solicitors and does not provide regulated legal services. If you require legal advice, you should consult a suitably qualified and regulated legal professional.

Where this article refers to third-party reporting, parliamentary materials, organisations, or public cases, it does so for journalistic, educational, and research purposes. External links are provided for reader convenience; JSH Law Ltd is not responsible for the content of external sites.

© JSH Law Ltd | Company No. 16870438 | Manchester (UK) & Kansas (USA)

Why Family Court Transparency Matters: What the 30 January 2023 Reporting Pilot Meant for Parents and Litigants in Person

For decades, the family courts have operated in a space that is both necessary and uncomfortable: decisions of the highest consequence, made largely out of public view. On 30 January 2023, that began to shift. As reported by BBC News, a new transparency pilot allowed journalists to report on family court proceedings in a way that had not previously been possible. It was presented as a step towards accountability. But for parents and litigants in person, the real significance runs deeper—because scrutiny is not just about visibility, it is about whether the system can be properly trusted.

Why Family Court Transparency Matters: What the 30 January 2023 Reporting Pilot Meant for Parents and Litigants in Person

For years, one of the deepest frustrations for families caught up in the family justice system has been this: life-changing decisions are made behind closed doors, yet the people most affected often come away feeling unheard, disoriented, and unable to explain what has happened to them. That is why the 30 January 2023 transparency pilot in the family courts mattered so much. It was not simply a procedural reform for journalists. It was a recognition that secrecy, however well-intentioned, can also shield poor process, weak accountability, and profound injustice. For parents and litigants in person, that moment marked something important: the beginning of a more serious public conversation about what really happens in family court.

Key takeaways for litigants in person

1. Greater transparency in family court is not about sensationalism. It is about accountability, scrutiny and public confidence.

2. The family court deals with some of the most serious and intimate decisions the state can make about children and families.

3. For too long, many parents have felt unable to challenge what happened because the system has been too closed for meaningful scrutiny.

4. Journalists being allowed to report from family court was an important step, but it was never a complete answer on its own.

5. Litigants in person still need to be organised, informed and strategically prepared. Transparency helps, but it does not remove the need to present your case properly.

If you need strategic support with your family court case, chronology, statement, position statement, bundle preparation or hearing preparation, you can book a short initial call below.

What changed on 30 January 2023?

On 30 January 2023, a reporting pilot began in family courts in Leeds, Carlisle and Cardiff. Accredited journalists were allowed to report on proceedings in a way that had not previously been possible in any meaningful sense. The intention was to enable closer scrutiny of the family courts, the conduct of local authorities, and the broader decision-making machinery operating in cases involving children.

That may sound modest. In reality, it was significant.

The family courts decide some of the most sensitive issues the law can ever touch: whether a child should be removed from their family, whether parents should be restricted in seeing their children, whether allegations of neglect, abuse, coercion or risk are made out, and whether the state should intervene permanently in family life. These are not minor procedural questions. They are fundamental decisions with lifelong consequences.

Yet despite the seriousness of those decisions, family proceedings have long existed in a space where privacy and secrecy have become difficult to disentangle. Privacy for children is essential. That is not in dispute. But privacy for children is not the same thing as insulation of institutions from scrutiny.

Why this mattered so much

The strongest part of the reporting around the pilot was not simply that a rule was changing. It was the explanation of why scrutiny mattered in the first place.

One of the families referenced in the coverage was that of Liz Anstey, who described the family court process as surreal, traumatic and deeply confusing. She spoke of not knowing who was who, of hearings being adjourned, and of struggling to understand what was going on. That description will resonate with far too many parents.

It should not be normal for people to come out of proceedings affecting their children feeling as though they have fallen into a procedural rabbit hole. Yet many do.

For litigants in person especially, family court can feel like a system with its own language, its own hidden rules, and its own hierarchy of professionals speaking over the lives of ordinary people. Even where the legal process is attempting to do justice, the lived experience can still be one of disempowerment.

That is why scrutiny matters. Not because every complaint made by every parent will be justified. Not because every judicial decision is wrong. But because a justice system that cannot be properly observed will always struggle to command confidence.

The long road to transparency

The 2023 pilot did not appear out of nowhere. It followed decades of pressure, criticism and frustration.

There have been repeated calls over many years for family courts to be opened up to greater scrutiny. Those calls grew louder after cases in which serious errors or alleged miscarriages of justice became publicly known. The concern was never simply that family proceedings were private. The concern was that a private system can become a system in which accountability is too weak, patterns are too difficult to identify, and public understanding is distorted by the absence of real information.

As the article explains, there were previous attempts to increase transparency. In 2009, journalists were allowed into family court hearings, but the practical effect was limited. The rules were too unclear. Reporting remained heavily constrained. Journalists could attend, but not in a way that made meaningful public reporting realistic in most cases.

That distinction is important.

There is a world of difference between being nominally allowed into a courtroom and being able to report in a way that actually informs the public. If a journalist cannot identify the local authority, cannot speak to the family, cannot explain the core facts, and cannot describe the decision in a coherent way, then what exists is not real open justice. It is a carefully managed appearance of it.

Why “private” should never mean “beyond scrutiny”

Family cases are heard in private for good reason. Children must be protected. Their identities, welfare and futures must not be exposed to public harm. That principle is sound and necessary.

But there has always been a dangerous slippage in public debate: the assumption that because proceedings are private, detailed scrutiny is somehow inappropriate or impossible.

That is wrong.

The justice system should be capable of doing two things at once: protecting children’s anonymity while also allowing the conduct of professionals and institutions to be examined. Those aims are not contradictory. In fact, they should sit together. If anything, a system making decisions about vulnerable children should attract more careful scrutiny, not less.

The transparency debate has never really been about whether children should be named. They should not. It has been about whether the operation of the system itself should remain largely shielded from view.

That is where the reporting pilot mattered. It accepted, at least in principle, that anonymity for the child can coexist with proper public-interest reporting.

Why this issue matters to litigants in person

For litigants in person, the transparency issue is not abstract. It affects confidence, fairness and the perceived legitimacy of the whole process.

Parents representing themselves often feel that professionals enter the room with authority already attached to them. Cafcass officers, local authority social workers, experts, guardians, counsel and judges all operate within a system they understand. The parent may be the only person in the room trying to navigate it in real time.

When that process is then almost entirely shielded from outside scrutiny, the parent’s sense of powerlessness can intensify. Even where there are legal remedies, appeals or complaint routes, those mechanisms can be difficult, expensive, slow and procedurally complex. Many families do not have the resources to pursue them.

Transparency does not solve that problem entirely. But it changes the climate. It creates at least the possibility that poor practice, inconsistency, or systemic patterns may be seen and discussed.

And that matters, because courts and agencies behave differently when they know their conduct may be observed and reported.

The limits of transparency

It is also important to be realistic. Transparency is not a cure-all.

Allowing journalists to report on cases does not automatically prevent bad decisions. It does not guarantee that all families will be treated fairly. It does not eliminate the structural disadvantages faced by litigants in person. And it does not remove the emotional and procedural pressure of family proceedings.

In some respects, transparency may even expose a further uncomfortable truth: that the problem was never only secrecy. It was also resources, culture, delay, evidential inconsistency, and the enormous discretionary power exercised within a stressed and overburdened system.

But transparency still matters because without it, those deeper problems are easier to ignore.

A closed system can always reassure itself that it is functioning well. A scrutinised system has to show its workings.

The human cost of family court decisions

One of the most powerful features of the earlier article was its reminder that family court reporting is not simply about legal principle. It is about human consequence.

There is a tendency in legal systems to become desensitised to process. Adjournments become routine. Bundles become routine. Directions become routine. Expert reports become routine. But for the family living through the case, none of it is routine.

When a child is removed, when contact is suspended, when allegations are made, when a case drags on, when a hearing ends in tears outside court, those events are not procedural footnotes. They are pivotal moments in people’s lives.

That is one of the reasons meaningful reporting matters. It restores some human visibility to a system that can otherwise become dominated by anonymised process and professional shorthand.

It forces a wider public to confront what family justice actually does.

The issue of confidence in the system

Sir Andrew McFarlane’s observation at the time that there was “an absence of confidence” in the family courts due to a “vacuum of information” was, in my view, a strikingly honest one.

Confidence in family justice cannot be manufactured by insisting that the public should simply trust it. Trust has to be earned. And in any justice system, trust depends in part on visibility.

Where information is too scarce, rumour fills the gap. Where reporting is too constrained, suspicion hardens. Where people are told that everything is being done properly but cannot see how, confidence erodes.

That does not mean every criticism is well-founded. It means opacity is a poor foundation for legitimacy.

What parents should take from this

If you are a parent or grandparent involved in family proceedings, this issue should matter to you even if no journalist ever attends your hearing.

It matters because it signals a broader recognition that the family justice system cannot remain culturally closed if it wants public trust.

It matters because it validates something many families have been saying for years: that the system can feel inaccessible, confusing and unaccountable.

And it matters because it underlines the importance of presenting your case in a way that is clear, disciplined and evidence-led. In a more transparent system, the quality of process becomes more visible. That means your own preparation matters too.

If you are self-representing, ask yourself:

  • Can I explain my case clearly?
  • Do I have a proper chronology?
  • Have I distinguished fact from allegation?
  • Have I focused on the child’s welfare rather than only my own grievances?
  • Do I understand what order I am asking the court to make and why?

Transparency may shine more light on the system, but you still need to be ready to stand in that light with a properly prepared case.

My own view

I have long taken the view that privacy for children must be preserved, but that this should never be used as a reason to avoid examination of how the family courts actually operate.

The stakes are simply too high.

When the state intervenes in family life, when children are removed, when contact is curtailed, when professional opinions shape outcomes, and when judicial discretion carries lifelong consequences, accountability is not optional. It is essential.

The 30 January 2023 pilot was important because it represented a serious move away from the idea that family justice can rely on closed-room legitimacy. It accepted that if the public is to have confidence in the system, the system must be prepared to be seen.

That does not weaken justice. It strengthens it.

Final thoughts

The family courts deal with some of the most painful and consequential decisions in the legal system. They will never be easy places. Nor should they become spectacles.

But neither should they remain so closed that only fragments of truth emerge, and only after years of campaigning, appeals, or extraordinary effort.

The 2023 transparency pilot mattered because it recognised that accountability and child protection can coexist. It recognised that secrecy is not the same as safety. And it offered, at least in part, a route towards a family justice system that could be better understood, better scrutinised and, perhaps in time, better trusted.

For litigants in person, that was and remains a development worth paying close attention to.


Need help preparing for family court?

If you are facing private children proceedings and need clear, strategic support, book a 15-minute initial consultation to discuss your case, your next steps, and how to approach proceedings with greater confidence.

Practical litigation support. Clear strategy. Confidence before your next hearing.




Regulatory & Editorial Notice: This article is published by JSH Law Ltd for general information, commentary and public legal education only. JSH Law Ltd is not a firm of solicitors and does not provide reserved legal activities or regulated legal services. Nothing in this article constitutes legal advice, representation, or the formation of a solicitor-client relationship. Family court cases turn on their own facts, evidence, judicial evaluation and procedural history. Readers should obtain advice tailored to their own circumstances before taking or refraining from any step in litigation. Commentary on public reporting, court reform, institutions or third-party materials is editorial in nature and is presented in good faith on the basis of sources believed to be reliable at the time of publication.