NewsNight: Three Stories and a Cuppa

Episode: Legal Aid Leaks and Light-Touch Laws
Saturday, June 7th, 2025
Tech | Security | Neurodivergence | Policy

INTRO

Right then, welcome to NewsNight: Three Stories and a Cuppa. I'm here with my Saturday evening brew to walk you through three stories from both sides of the pond that caught my eye this week - the sort of news that matters if you're trying to keep systems running, data secure, or just navigate the world with a brain that works a bit differently.

This week's episode is "Legal Aid Leaks and Light-Touch Laws," and we've got a data breach that's genuinely terrifying in its implications, some political theatre around AI regulation that would be funny if it weren't so consequential, and a depressing reminder that the tech industry still can't figure out how to support the very people driving its innovation.

So grab whatever's in your mug, and let's dive in.


STORY 1: UK Legal Aid Agency Data Breach

First up, and this one's properly grim. The UK's Legal Aid Agency suffered a massive cyberattack in April that's just been revealed to have compromised over 2 million pieces of data belonging to people who applied for legal aid since 2010.

Now, when we talk about "legal aid applicants," we're not talking about your average data breach victims. This includes some of the most vulnerable people in society - domestic violence survivors who've had to flee their homes, people seeking asylum, folks caught up in criminal cases, people in family court disputes. The hackers are threatening to publish this data online, which could literally put people's lives at risk.

What's particularly notable is that they discovered the breach on April 23rd but only realised how extensive it was on May 16th. So we had vulnerable people's most sensitive data floating about whilst it took weeks to understand the full scope.

Think about what's in those files: addresses of people in witness protection, details of domestic abuse cases, immigration statuses, financial information from people who couldn't afford legal representation. If this data gets dumped online, it's not just embarrassing - it's potentially lethal.

And here's the kicker - this is exactly the sort of high-value, low-security target that's becoming increasingly attractive to ransomware groups. They know that threatening to expose this kind of data creates maximum pressure for payment whilst causing maximum societal harm if they follow through.


STORY 2: AI Regulation Bill Returns (Sort Of)

Right, moving on to some political theatre. The Artificial Intelligence (Regulation) Bill has been reintroduced to Parliament after getting shelved during the election. It's a Private Member's Bill trying to create some actual oversight of AI systems.

Meanwhile, the government's promised AI legislation has been pushed back until at least summer 2025, reportedly to align with the US's hands-off approach. The UK even declined to sign the Paris AI safety declaration that 66 other countries signed - you know, the one about making AI "safe, secure and trustworthy". Apparently that was too much commitment.

So whilst the EU gets comprehensive AI rules that actually have teeth, the approach here is basically "innovation first, ask questions later." Because what could possibly go wrong with unregulated frontier AI models that can generate convincing deepfakes, manipulate public opinion, or potentially develop capabilities their creators don't fully understand?

The timing's particularly rich given our first story. Here we are, dealing with a massive data breach affecting vulnerable people, and the response to technologies that could make such breaches exponentially worse is... let's wait and see how it goes.

It's like watching someone refuse to install smoke detectors because they might interfere with the ambience whilst the kitchen's already smouldering.


STORY 3: Neurodiversity in Tech Still Struggling

And finally, a story that should make anyone in tech deeply uncomfortable. A massive survey of over 2,000 tech workers revealed that only 9% of neurodivergent employees had actually requested workplace adjustments.

Now, before you think "well, maybe they don't need them," here's the breakdown: of those who didn't ask, 32% were worried about how it would look and 29% didn't know what to ask for. Only 61% said it was because they genuinely didn't need adjustments - which leaves a sizeable chunk who needed support but stayed quiet.

Meanwhile, estimates suggest neurodivergent people make up 17% of the workforce but remain significantly underrepresented in tech. So there's this massive pool of talent - people who can spot patterns others miss, hyper-focus on complex problems, think differently about solutions - but they're not asking for basic accommodations because they're worried about stigma or simply don't know what's available.

The survey covered major tech companies, and it found neurodivergent employees struggled with everything from the hiring process to day-to-day work interactions. Nearly 40% found salary discussions challenging, 24% struggled with CV creation, and 21% found face-to-face interviews difficult.

Here's what's particularly frustrating: many of the accommodations that would help neurodivergent employees are either free or incredibly cheap. We're talking about things like written instructions instead of verbal ones, noise-cancelling headphones, flexible start times, or just being clear about meeting agendas in advance.

But instead, we've created this culture where people are afraid to ask for what they need, don't know what's possible, or assume they just have to struggle through.


BONUS STORY: The Big Beautiful Bill's AI Ban

And here's a bonus story from across the pond that's properly mental. The US House just passed the "One Big Beautiful Bill Act" - and buried in this budget reconciliation package is a 10-year federal ban on all state-level AI regulation.

The language is brutally simple: "No State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."

So let's get this straight. This would preempt existing state AI laws in California, Colorado, New York, Illinois, and Utah, as well as more than 1,000 pending AI bills across state legislatures. That includes laws protecting against AI-generated explicit material, deepfakes designed to mislead voters, and algorithmic rent-setting systems.

The kicker? Even Marjorie Taylor Greene is opposing this, saying "I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there." When MTG is calling something an overreach of federal power, you know you've really gone off the rails.

Constitutional law experts are already predicting this could spark a constitutional crisis, arguing it contradicts states' status as "laboratories of democracy". Forty state attorneys general have signed a letter opposing it, and more than 70 advocacy groups have published an open letter saying it gives AI companies "exactly what they want: no rules, no accountability, and total control."

It's like watching someone set fire to the concept of federalism whilst claiming they're protecting innovation. Brilliant.


COMMENTARY

It's a proper mess, isn't it? We've got organisations losing control of vulnerable people's data at the very moment there's mounting resistance to regulating the AI systems that could make such breaches even worse. Meanwhile, across the pond, they're actively trying to ban the very concept of AI oversight for a decade. And all of this whilst the tech industry - which should be leading on accessibility and inclusion - is still failing to support the very people whose different thinking styles drive its innovation.

It's like we're watching a coordinated effort to remove any guardrails just as the technology becomes powerful enough to cause real harm. Legal aid data gets breached, AI regulation gets delayed or banned entirely, and neurodivergent talent gets ignored - because apparently the priority is making sure nothing slows down the innovation train, even if it's heading for a cliff.


OUTRO

Right, that's your three stories and whatever was left in my mug.

If any of this has got you thinking, drop me a line. And if you know someone who works in tech, policy, or just needs to hear that their brain works fine even if it works differently, maybe pass this along.

Next week, I'll be back with three more stories and hopefully a stronger brew. Until then, keep your data encrypted, your thinking diverse, and your expectations of common sense appropriately low.

This has been NewsNight: Three Stories and a Cuppa. If you want to read more about any of the stories I've covered, head to store.boggs.one, hit the ancillary menu, and look for NewsNight - News Sources.

See you next Saturday.