Britain is Trying to Censor Americans – But America is Fighting Back

From THE DAILY SCEPTIC

by Daniel Lü

Ofcom has confirmed it is referring 4chan to a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers or assets in Britain. The demand: install age-verification systems and content filters so that British children cannot access the site, or face daily fines levied from London against an American platform. This case is not an outlier. It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?

In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids’ Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025. The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible. Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.

The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.

The ACLU identified three specific constitutional failures. First, the speech targeted is protected. The Supreme Court has never permitted government to suppress legal speech simply because a legislature finds it unsuitable for children. In Brown v. Entertainment Merchants Association, the Court was unambiguous: “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Creating a “wholly new category of content-based regulation” permissible only for speech directed at children would be “unprecedented and mistaken”. Second, these regimes fail strict scrutiny because they are not premised on demonstrated causation. As the ACLU wrote, KOSA “is not premised on a direct causal link, but instead is based on correlation, not evidence of causation”. This is a decisive legal and moral point. In Brown, the Court struck down California’s video game restriction on exactly the same grounds: the state had produced only correlative data. A law that restricts the speech of millions of people must show that the restriction will actually prevent the harm it identifies. Neither KOSA nor the OSA can clear that bar. Third, these regimes are both under- and over-inclusive. They leave news media, books, music and magazines entirely unregulated while targeting social media platforms. And they will, inevitably, sweep up beneficial speech alongside harmful speech: 92% of parental control apps have been found to incorrectly block LGBTQ+ content and suicide-prevention resources alongside material that is genuinely harmful. Congress, the ACLU concluded, may not rely on unproven future technology to save the statute.

The empirical premise of both regimes is that social media causes mental illness in adolescents. This claim is contested by a substantial body of peer-reviewed research. In a widely noted book review in Nature, Candice L. Odgers, a psychologist specialising in adolescent mental health at UC Irvine, wrote that the graphs produced by Jonathan Haidt in his work The Anxious Generation, which align the rise in teen mental illness with smartphone adoption, “will be useful in teaching my students the fundamentals of causal inference, and how to avoid making up stories by simply looking at trend lines”. Hundreds of researchers, Odgers wrote, “have searched for the kind of large effects suggested by Haidt. Our efforts have produced a mix of no, small and mixed associations. Most data are correlative.” The direction of causality may run the other way: distressed and isolated adolescents gravitate toward online community; social media does not necessarily create the distress.

The practical implication is stark. Existing criminal law already covers the most serious harms comprehensively: child sexual abuse material (CSAM), terrorist content, incitement to violence and harassment are all criminal in both jurisdictions and all designated “priority illegal content” under the OSA’s Schedules 5-7. The genuinely novel element of both regimes is the duty to suppress legal speech about mental health, gender identity and emotional distress. That element is what fails both the First Amendment and basic proportionality analysis.

The most immediate and documented casualty of the OSA’s implementation has been LGBTQ+ communities. This is not an implementation error. It is structural: the content filters platforms deploy to comply with age-assurance obligations cannot distinguish between content that causes harm to LGBTQ+ youth and content that protects them. Following the July 2025 enforcement rollout, Reddit moved significant LGBTQ+ community content behind age-verification barriers on the logic that queer content is “adult content” and therefore, under the Act, presumptively harmful to children. As OpenDemocracy documented, content creators who are “queer, trans or racialised”, or whose content focuses on these communities, have been “disproportionately targeted, with anything ‘queer’ indiscriminately labelled as ‘adult’”. For trans people, the harm is compounded by the identity documentation problem. Age verification requires users to produce government-issued identification matching their legal name and sex. In 2018, fewer than 5,000 trans people in the UK held a Gender Recognition Certificate, out of an estimated 200,000-500,000. For those without legal gender recognition, age verification is not a minor inconvenience: it forces them to out themselves to a commercial third party as a condition of internet access, creating a permanent record linking their legal identity to spaces they may be using precisely to explore their identity in safety. The moral stakes here are not abstract. For LGBTQ+ young people who cannot be open at home or school, online community is not a convenience but a lifeline. Stonewall has warned that anonymity-reduction measures create a “chilling effect” that puts LGBTQ+ people in genuine danger, particularly in the 12 countries where being LGBTQ+ carries the death penalty.
As Stonewall’s Director of External Affairs wrote: “The UK’s Online Safety Bill could become the playbook for countries looking to use digital surveillance to identify and persecute their LGBTQ+ citizens.” The US State Department’s 2024 Human Rights Practices Report criticised the OSA for pressuring US social media platforms to “censor speech deemed misinformation or hate speech”.

The regulatory pressure on US platforms is not confined to Ofcom. On February 24th 2026, the Information Commissioner’s Office (ICO), the UK’s independent data protection regulator, issued Reddit, Inc. a £14.47 million fine for unlawfully processing children’s personal information: the largest penalty the ICO has ever imposed for breaches of children’s privacy. The ICO found that Reddit, despite prohibiting users under 13 by its terms of service, applied no robust age assurance mechanism from May 2018 until July 2025, and therefore had no lawful basis for processing the personal data of under-13s under the UK General Data Protection Regulation. Reddit’s failure to carry out a data protection impact assessment (DPIA) focused on the risks to children before January 2025 separately breached Articles 5, 6, 8 and 35 of the UK GDPR. Reddit has announced its intention to appeal, calling the ICO’s requirement to collect identity information from users “counterintuitive and at odds with our strong belief in our users’ online privacy and safety”. The ICO acted under its Age Appropriate Design Code (the ‘Children’s Code’) rather than the OSA, but the two regimes are coordinated: the ICO stated openly, in its December 2025 children’s privacy progress update, that it works in partnership with Ofcom “to ensure efforts are coordinated”. The fine is legally distinct from OSA enforcement but functionally complementary to it: where Ofcom targets platforms’ content-governance duties, the ICO targets their data-governance failures, and the same underlying conduct of allowing age-unverified users to access content triggers liability under both regimes simultaneously. The ICO is now conducting a broader review of at least 17 platforms popular with children in the UK, including Discord, Pinterest and X.
Reddit’s objection also surfaces another contradiction the ICO has not resolved: the age verification it effectively mandates creates a permanent record linking users’ legal identities to their platform activity, held by third-party age verification processors entirely outside the platforms’ own systems, and the data practices of those processors are, as the ICO’s own enforcement demonstrates, largely beyond the regulator’s concern.

The contrast between the ICO’s vigour against American social media platforms and its passivity toward British police forces is, on its face, a study in selective enforcement. The same week that John Edwards announced the £14.47 million Reddit fine and spoke at the IAPP UK Intensive, the story of Alvi Choudhury was making national television. Choudhury, a 26-year-old British Bangladeshi software engineer, had been arrested at his home in Southampton in January 2026 by Thames Valley Police, who suspected him of committing a £3,000 burglary in Milton Keynes: a city 100 miles away that he has never visited. The arrest was triggered by a retrospective facial recognition match against Cognitec software that runs 25,000 searches per month against approximately 19 million custody photographs held on the Police National Database. Choudhury was held in custody for nearly 10 hours before officers examined the alibi evidence he had been offering since his arrest. When he eventually saw the CCTV footage that had identified him, he told the Guardian the suspect looked approximately 10 years younger, with lighter skin, a bigger nose, no facial hair and different eyes and lips. His own mugshot had been on the police system in the first place only because he was wrongly arrested in 2021 after being the victim of an assault; his DNA was subsequently deleted, but his custody photograph was not. Thames Valley Police’s response was, on its own account, revealing. The force acknowledged the arrest “may have been the result of bias within facial recognition technology”, but an officer told Choudhury that “as the use of facial recognition is already subject to review at a strategic level”, he did not feel the need to raise the matter for wider organisational learning.
The force’s public statement went further, reframing the failure entirely: the arrest, it said, was based on the investigating officer’s own visual assessment after the algorithmic match, and therefore “was not influenced by racial profiling”. The position that a human officer confirming a racially biased algorithmic result absolves the institution of responsibility for racial bias merits no extended comment. This is not an isolated incident. In January 2026, another force paid damages to a black man wrongly arrested using the same technology. Home Office research, suppressed until December 2025 when it was published deep within a consultation document by Liberty Investigates, found that the algorithm generates false positive matches at a rate of 5.5% for Black faces and 4.0% for Asian faces, compared with 0.04% for white faces: a disparity of more than 100 to one.

When Edwards took the stage, he explained the ICO’s enforcement philosophy: the regulator must “very deliberately choose our focus”, concentrating on “AI and biometrics, children’s privacy and online tracking”. Police facial recognition involves all three. Yet the ICO has gone no further than conducting audits, expressing concern through its Deputy Commissioner and asking the Home Office for “urgent clarity”. The Equality and Human Rights Commission has been more forthright: it was granted permission in August 2025 to intervene in a judicial review of the Metropolitan Police’s live facial recognition programme, arguing the deployments are unlawful for want of a clear legal basis. A comment made at the time about the ICO’s posture proved apt: the regulator had “stressed the need for FRT deployment with appropriate safeguards” while sitting “on the fence” as others sought judicial determination of whether current use is “strictly necessary”. The juxtaposition is instructive. The regulator charged with protecting personal data finds £14 million worth of urgency in Reddit’s failure to age-verify its users, and no comparable urgency in a biometric surveillance system that its own deputy has called “disappointing”, that the government’s own research shows discriminates against minorities by a factor exceeding 100, and that has produced wrongful arrests of racial minorities on the basis of a technology the operating force itself concedes may be racially biased. The filter, as always, catches what the filter is not intentionally designed to catch.

All of this would be a domestic British problem if the OSA’s reach were confined to British soil. It is not. Section 3 of the OSA applies to any service with “links with the United Kingdom”, which Ofcom has interpreted to include any platform with a significant UK user base regardless of where it is domiciled, incorporated or operated. In March 2025, Ofcom wrote to 4chan Community Support LLC, a Delaware LLC with no offices, staff or assets outside the United States, to inform it that it was a regulated service because approximately 7% of its traffic came from UK IP addresses, and that it must therefore provide information regarding its illegal content risk assessment and its qualifying worldwide revenue. 4chan refused to respond to either request. In October 2025, Ofcom escalated: further demands, formal investigations and a £20,000 fine plus a penalty of £100 per day for up to 60 days for non-compliance with its information requests, all served by email to US addresses. 4chan refused to pay. In August 2025, 4chan and Kiwi Farms (Lolcow LLC) had filed a federal lawsuit against Ofcom in the District of Columbia, alleging violations of the First, Fourth and Fifth Amendments, pre-emption by Section 230 of the Communications Decency Act and conflict with the SPEECH Act. Ofcom responded by asserting sovereign immunity under the Foreign Sovereign Immunities Act, claiming both the right to issue binding censorship orders to Americans on American soil and immunity from any American legal response.

Ofcom’s enforcement action against 4chan did not end with the October 2025 information-gathering fine. On February 12th 2026, Ofcom issued a second Provisional Decision against 4chan, proposing both a single penalty and a daily rate penalty for contraventions of sections 9, 10 and 12 of the OSA: its substantive duties to conduct a suitable illegal content risk assessment, to set out adequate user protections in its terms of service, and to implement age verification to prevent children from encountering explicit content. Counsel for 4chan, Preston Byrne, replied the same day: “Increasing the size of a censorship fine does not cure its legal invalidity in the United States.” The deadline for representations having passed without compliance, Ofcom confirmed on February 27th that it was referring the matter to a final decision maker under its Online Safety Enforcement Guidelines. The progression is systematic: from information requests under section 100, to a confirmation decision imposing penalties, to a second provisional decision targeting the Act’s substantive content-safety and age-verification duties. Each escalatory step expands the scope of demanded compliance and raises the potential penalty exposure. For an anonymous imageboard operating exclusively in the United States, age verification is not a technical requirement: it is an existential one.

The domestic British appeals framework for these decisions is itself still being constructed. On February 26th 2026, the Tribunal Procedure Committee (TPC) opened a consultation on amending the Upper Tribunal Procedure Rules to accommodate the new rights of appeal created by the OSA. Under section 168 of the Act, any person with a sufficient interest may challenge Ofcom’s confirmation decisions, penalty notices and technology notices before the Upper Tribunal. The TPC provisionally proposes a three-month window for permission-to-appeal applications by interested persons who are not the direct recipients of an Ofcom notice, departing from Ofcom’s own preference for one month. On costs, the TPC agrees with Ofcom’s proposal to displace the usual no-costs rule, recognising that the tribunal should have broader discretion to award costs in OSA cases given the likely complexity and evidence-heavy nature of such appeals, and that the existing rule would leave Ofcom unable to recover costs even where it successfully defends a decision. Ofcom is a regulator with the power to fine companies hundreds of millions of pounds, funded by fees levied on the very industry it regulates, and it is now asking for the right to make anyone who challenges it in court pay Ofcom’s legal bills if they lose. The consultation closes May 21st 2026.

This structural asymmetry is what the GRANITE Act directly addresses. Conceptualised by Byrne and introduced in the Wyoming Legislature as HB 70, the ‘Guaranteeing Rights Against Novel International Tyranny and Extortion Act’ passed the Wyoming House of Representatives 46-12 on February 23rd 2026. It strips foreign sovereigns of immunity in US state courts when they attempt to enforce censorship orders against US persons and creates a private right of action with minimum statutory damages of $1 million per violation, or 10% of the defendant’s annual US-related revenue, whichever is greater. It also prevents Wyoming courts from recognising any foreign judgment that infringes constitutionally protected speech, extending the model of the SPEECH Act (28 U.S.C. §§ 4101-4105) from defamation to the full range of First Amendment-protected expression. If censoring an American exposes a foreign regulator to a sufficiently significant civil judgment, the cost-benefit calculation changes dramatically.

A separate American legal theory operates through the Sherman Act and does not depend on overcoming FSIA immunity at all. Ofcom’s sovereign immunity defence may insulate the regulator itself from direct suit, but it extends no protection to the private actors who shaped the OSA’s regulatory design. The OSA imposes identical nominal obligations on all regulated services, but its fixed compliance costs fall proportionally far harder on smaller platforms than on large incumbents with existing legal, technical and compliance teams that can simply be redirected to satisfy new requirements: a pattern antitrust economists describe as raising rivals’ costs. If, for example, well-resourced incumbents privately coordinated with regulators to embed compliance standards they could satisfy more easily than their rivals, the resulting framework may reflect competitive preferences rather than independent regulatory judgement. Under Continental Ore Co. v. Union Carbide & Carbon Corp., routing an anticompetitive scheme through a foreign governmental apparatus does not immunise the private actors who designed it. The Noerr-Pennington doctrine, which ordinarily protects petitioning activity, rests on First Amendment foundations that protect the right to petition American government; the stronger legal argument is that it does not extend to petitioning of foreign regulators. Where the factual record supports coordination beyond ordinary advocacy, Sections 1 and 2 of the Sherman Act remain available tools even where the regulatory mechanism is British.

If you care about children’s mental health and safety online, there are three new bills in Congress worth knowing about: the SAFE Act, the ECCHO Act and the Stop Sextortion Act (collectively known as the James T. Woods Act). Together they address real, documented gaps in federal law in ways that KOSA and the UK’s Online Safety Act simply do not. The SAFE Act repeals outdated CSAM sentencing provisions and directs the US Sentencing Commission to develop updated guidelines reflecting modern patterns of dangerous conduct. Right now, federal sentencing rules are outdated and largely ignored: fewer than one in three cases are sentenced within the existing guidelines. This bill would clear the way for the US Sentencing Commission to write new rules that reflect how online abuse works today. The ECCHO Act creates a new federal crime targeting networks, most notoriously Network 764, that use online group chats to coerce emotionally vulnerable children into self-harm, suicide and violence, with penalties up to life imprisonment where a victim dies or attempts suicide. The Stop Sextortion Act explicitly criminalises sextortion for the first time under federal law, responding to a 33% rise in financially motivated cases in 2024 and more than 40 child deaths linked to these schemes. Unlike KOSA or the OSA, these bills do not try to police what people say online. They target what predators do: coercion, blackmail and the deliberate manipulation of children into harm. That is a meaningful distinction, and it is why this package has earned support from more than two dozen organizations across the political spectrum, including the FBI Agents Association, RAINN, the National District Attorneys Association, the National Center for Missing & Exploited Children and Thorn.

The moral case against both the OSA and KOSA is not that children’s wellbeing is unimportant. It is that suppressing protected speech is both the wrong instrument and a dangerous one. The wrong instrument because the science does not establish that social media causes the harms these laws address, and because the content filters that implement these regimes cannot distinguish beneficial from harmful speech. A dangerous one because the same mechanism that blocks, for example, pro-anorexia posts will also block access to eating disorder recovery communities; the same filter that catches self-harm instructions will catch trans youth support forums; and the same regulator empowered to define ‘harmful’ content today may be led by someone with very different ideas about what speech is harmful tomorrow. Above all, it is dangerous because the machinery of protection, once built, does not confine itself to its original target: Japanese Americans were interned after Pearl Harbor; Muslims were surveilled, infiltrated and placed on no-fly lists after September 11th, some rendered to CIA black sites abroad and others tortured at Guantanamo Bay without charge or trial; McCarthyite loyalty boards destroyed careers on the basis that association predicted subversion; and the FBI’s COINTELPRO program turned the apparatus of domestic security against the civil rights movement, monitoring Martin Luther King Jr. as a threat to national security on the pretext of alleged communist infiltration. In each case, the instrument was constructed in good faith to address a genuine fear; in each case the stated rationale was correlation dressed as causation; and in each case the same institutional machinery, once normalised, was available for use against the next group a future administration found threatening.

Ofcom’s attempt to extend this regime to American soil raises the stakes further. It asserts, in effect, that British regulators may determine what Americans are permitted to say on the American internet and that American law has no recourse. That is not a tenable position under the First Amendment, under any established principles of international jurisdiction or under any defensible conception of democratic self-governance. The GRANITE Act is the beginning of the American legal system’s answer.

A brief postscript. I recently sent a prior version of this article to a member of the House of Lords who had asked to read it. Parliament’s email filter blocked it. Repeatedly. The peer could not open the attachment because the system flagged it as suspicious. The article, with working title ‘What the Filter Catches’, was itself caught by a filter. I could not have asked for a better illustration of the argument. Sometimes the world just does the work for you.

Note: The author has submitted Freedom of Information requests to the US Department of State, the Department of Justice, the National Security Council, the Federal Bureau of Investigation, the Federal Trade Commission and the UK ICO, as well as to Ofcom itself, seeking documents relating to Ofcom’s extraterritorial enforcement strategy. Those requests remain pending.

Daniel was kicked out of Warwick Business School after proposing a civil rights organisation as a business project. He is a member of the ACLU and the Free Speech Union.

March 6, 2026 12:12 am

“Most data are correlative”

an interesting comment buried in one of the “reasons not to…”

it implies some data are causative

and from observing the changes in behaviour in my own kids and grandkids, I would say that limiting “screen time” directionally is a Good Thing…

as is removing potentially harmful content – turning it around …why would we NOT want to limit harmful content to young minds…?

Reply to  Hysteria
March 6, 2026 7:39 am

why would we NOT want to limit harmful content to young minds…?

The bigger question is why are the parents not limiting it?

Do you really want the GOVERNMENT determining what content is harmful and what isn’t?

Reply to  Tony_G
March 6, 2026 10:22 am

Too many parents are barely qualified to have black labs let alone children.

Sparta Nova 4
Reply to  Nicholas Schroeder
March 6, 2026 12:09 pm

So that justifies schools doing anything they want with your kids and not telling you?

Reply to  Hysteria
March 6, 2026 8:08 am

If you allow censorship for “kids”, there is nothing stopping censorship for “adults” either. A very slippery slope allowing politicians to decide what you can see/hear. Think Orwell.

Some phone providers already let the account holder monitor and control when the phone is used. Give that to parents and let them control it. From working in a high school, things like porn is not the issue. Screen time is the issue. Some kids will spend all day with their phone six inches from their face and disregard anything going on around them. Turn the damn phones off.

KevinM
Reply to  Jim Gorman
March 6, 2026 3:54 pm

It’s technically easy to make a school a cellular coverage dead zone… then there are the teachers and THEIR phones to consider.

Reply to  KevinM
March 7, 2026 10:38 am

The problem isn’t totally cell phones, it is also wifi. Our school does block numerous sites but man, it’s tough keeping up. Plus, if research is done via the internet, you crossover into problems.

MarkW
Reply to  Hysteria
March 6, 2026 8:26 am

The issue is identifying data that is harmful and differentiating it from data that is not harmful.

Reply to  MarkW
March 6, 2026 8:56 am

Governments have proven countless times that they are very poor at differentiating harmful from harmless speech.

Art Slartibartfast
March 6, 2026 12:33 am

The primary responsibility in all of this lies with the parents. They should educate their kids on responsible internet use. This should be supported by schools, with proper education on navigating the digital world.

strativarius
Reply to  Art Slartibartfast
March 6, 2026 12:43 am

The state thinks schools should instil approved values, parents are treated with suspicion.

starzmom
Reply to  Art Slartibartfast
March 6, 2026 6:08 am

Yes, the primary responsibility lies with the parents. They not only should educate their children on the responsible use of the internet, they should limit their access to the internet, by limiting computer and smart phone use. School must be part of this picture, by limiting internet use as well.

But we are at a point where many if not most textbooks are available on the internet and schools issue laptops and tablets to their students. Students have unlimited access to the internet all the time, and likely know more about accessing whatever they want than their teachers or parents.

What could possibly go wrong?

PS They can’t write either–I mean they can’t hold a pen or pencil and form letters with it. Often even their own names.

John Hultquist
Reply to  starzmom
March 6, 2026 8:02 am

Us old folks are amused, surprised, and concerned when a note from a young person arrives with only printed letters. In about 6th or 7th grade we did a sample writing that was evaluated someplace outside the school system. A poor result meant practicing more and resubmitting.

starzmom
Reply to  John Hultquist
March 6, 2026 11:13 am

As an attorney I often had young clients–16 years old or so–on some sort of driving infraction. It was the rare kid whose signature on the paperwork did not look like it was written–printed–by a 6 year old. I even wrote one of the guidance counselors (from my kids’ school) to complain about this problem. She agreed with me, but said basically there was nothing they could do in the high school.

I would also say you are lucky to get an actual note–we get emails.

observa
Reply to  Art Slartibartfast
March 6, 2026 4:18 pm

Parents being plural here meaning a biological man and a woman with their complementary skills if not always complimentary. We’ve just experienced the unique Great Feminisation of the West and with it conspicuous emotional empathy for everyone’s feelings and triggered emotions but the counter-revolution has begun-
Have young women really turned into jobless layabouts?
Toughen up princesses and pronouns as a generation of young men are fed up with your BS crying in cars on Tiktok ‘where have all the good men gone’ when they hit the biological wall in their 30s. A generation that can’t even reproduce itself building a life and family together is doomed to be overtaken and captive by any culture that can. You’d better start respecting what Western men have done and are doing for you or you’ll get some Metoo alright being the weaker sex. Chanting river to the sea and globalise the intifada?? Just ask Persian and Afghan women about that.

observa
Reply to  observa
March 6, 2026 4:28 pm

PS: Which side are you on, TikTokers?
Donald Trump’s ‘entitled’ granddaughter Kai dragged for taking selfie in front of war machines: ‘She’s tone deaf’
Guess who’s already shut down the internet and shooting those they find with Starlink receivers?

observa
Reply to  observa
March 6, 2026 4:57 pm

PPS: I’ll leave it to a woman to explain it to those who ironically worship another old dead bearded white guy by the name of Karl:
The left has ‘conveniently forgotten’ the atrocities Iranian regime has committed
As for the pronoun at the end what can I say?

Forrest Gardener
March 6, 2026 12:33 am

And this is where the globalist wet dream collapses unless those targeted agree to domination from outside their borders.

strativarius
March 6, 2026 12:35 am

There has never been a more embarrassing time for any Englishman or Englishwoman.

Starmer is a gutless, spineless, gormless, direction-less, neurotic, underachieving, snivelling, cowardly third rate lawyer.

Reply to  strativarius
March 6, 2026 1:02 am

He should run for governor of California. He would fit right in.

Reply to  strativarius
March 6, 2026 3:04 am

He’s not even that good.

2hotel9
Reply to  strativarius
March 6, 2026 5:48 am

Strat? You are giving Starmer entirely too much credit. 😉

john cheshire
Reply to  strativarius
March 6, 2026 5:58 am

I’ll add he hates his own country and has admitted that he’d rather do business with the WEF than with the elected MPs in our Parliament. He is a traitor and he and his cronies should be arrested and prosecuted for maladministration and treason.

Reply to  strativarius
March 6, 2026 7:48 am

get off the fence, Strat, and tell us what you really think

Steve Richards
March 6, 2026 1:49 am

Just ban smart phones for youngsters.

They can be given traditional brick phones.

Reply to  Steve Richards
March 6, 2026 3:47 am

Just ban youngsters

2hotel9
Reply to  Steve Richards
March 6, 2026 5:49 am

Better course of action would be to ban smart phones for their mothers, internet access too.

Leo Smith
March 6, 2026 3:46 am

Pales into insignificance beside the insane California law* that says all operating systems must come with a means of limiting children’s access.

Since Linux and its derivatives are not supplied by corporate entities, who you gonna sue? Ghostbusters?

Reminds me of many years ago, running SCO Unix, when we wanted a POP mail server. I found the source online, but it required an encryption library to compile, and that library was export-controlled by the US government.

However, its source code was part of the free online Berkeley Unix source code, so I downloaded and compiled it.

Governments want to control the Internet. It gets in the way of ‘these are facts; the rest is dangerous propaganda’.

*California’s Digital Age Assurance Act (AB 1043), signed by Governor Gavin Newsom in October 2025, requires every operating system provider in California to collect age information from users at account setup and transmit that data to app developers via a real-time API, with the law taking effect on January 1, 2027.
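The statute only gestures at what that “real-time API” looks like. Purely as an illustration of the mechanism being mandated (every name below is a hypothetical assumption, not AB 1043’s actual interface), an OS-level age signal handed to apps might be sketched like this:

```python
from dataclasses import dataclass
from enum import Enum

class AgeBracket(Enum):
    # Hypothetical bracket names; AB 1043 does not fix these in code.
    UNDER_13 = "under_13"
    AGE_13_TO_15 = "13_15"
    AGE_16_TO_17 = "16_17"
    ADULT = "18_plus"

@dataclass(frozen=True)
class AgeSignal:
    """What the OS would hand an app, derived from account-setup age info."""
    bracket: AgeBracket

def bracket_for_age(age: int) -> AgeBracket:
    """Map a declared age in years onto a coarse bracket."""
    if age < 13:
        return AgeBracket.UNDER_13
    if age < 16:
        return AgeBracket.AGE_13_TO_15
    if age < 18:
        return AgeBracket.AGE_16_TO_17
    return AgeBracket.ADULT

# The app queries the OS-supplied signal instead of asking the user directly.
signal = AgeSignal(bracket_for_age(15))
print(signal.bracket.value)  # 13_15
```

The point of the sketch is that the operating system, not the app, becomes the source of truth for age, which is exactly why a community distribution with no corporate “provider” is such an awkward fit for the law.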

2hotel9
Reply to  Leo Smith
March 6, 2026 5:51 am

There already is a limiting mechanism: just turn it off. Oh, and sites already have to ask for age; people just lie.

KevinM
Reply to  2hotel9
March 6, 2026 4:03 pm

Computer: “When were you born?”
Unmonitored 15-year-old with foresight: “1.1.1901”
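That exchange is the entire failure mode of self-declared age gates. A minimal sketch (hypothetical code, not any platform’s actual check) of a gate that trusts whatever birthdate the user types in:

```python
from datetime import date

def age_from_birthdate(birthdate: date, today: date) -> int:
    """Age in whole years implied by a self-declared birthdate."""
    years = today.year - birthdate.year
    # Subtract one if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def gate_allows(claimed_birthdate: date, today: date, minimum_age: int = 18) -> bool:
    """A self-declaration gate: it has no way to check the claim."""
    return age_from_birthdate(claimed_birthdate, today) >= minimum_age

today = date(2026, 3, 6)
# An honest 15-year-old (born 2011) is refused ...
print(gate_allows(date(2011, 1, 1), today))  # False
# ... but the same 15-year-old claiming "1.1.1901" sails through.
print(gate_allows(date(1901, 1, 1), today))  # True
```

The honest declaration is blocked and the dishonest one passes, which is why regulators treat bare self-declaration as ineffective and push toward stronger verification.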

starzmom
Reply to  Leo Smith
March 6, 2026 6:09 am

Now that is an idea destined to fail spectacularly.

Tony_G
Reply to  Leo Smith
March 6, 2026 7:41 am

I am quite curious how they intend to enforce it with Linux?

Art Slartibartfast
Reply to  Tony_G
March 6, 2026 9:27 am

Linux distributions could simply state “Not for use in California”

MyUsernameReloaded
March 6, 2026 5:30 am

How about the Heartland Institute (which has absolutely no links to this site or the fossil fuel industry, right?) stops meddling in European politics?

https://www.theguardian.com/environment/2025/jan/22/us-thinktank-climate-science-deniers-working-with-rightwingers-in-eu-parliament-heartland-institute

2hotel9
Reply to  MyUsernameReloaded
March 6, 2026 5:52 am

Look! It’s peak lie spewer!

Redge
Reply to  MyUsernameReloaded
March 6, 2026 8:04 am

The Guardian?

That beacon of impartiality who wouldn’t dream of trying to interfere with US elections?

Oh, wait …

Operation Clark County 

The Guardian, a UK-based left-wing newspaper, tried to influence the U.S. presidential election by encouraging its readers to write personal letters to independent voters in Clark County, Ohio—a key swing county in a pivotal state.

  • Objective: To persuade undecided voters in Clark County to support Democratic candidate John Kerry over Republican George W. Bush, driven by widespread international criticism of Bush’s foreign policy. 
  • Execution: The project, launched in October 2004, invited readers to sign up online to receive the name and address of a registered independent voter in Clark County. Over 14,000 people signed up before the website was disabled by a hacker. 
  • Reactions: The campaign provoked strong backlash from American voters, many of whom viewed it as foreign interference. Responses included threats, mockery, and vitriol—some even claiming to have alerted the CIA or FBI. The Guardian published a selection under the headline “Dear Limey assholes.”

Yes, that Guardian.

MarkW
Reply to  Redge
March 6, 2026 8:32 am

You don’t understand, the laws that socialists propose were never intended to restrict the actions of socialists.

Reply to  MarkW
March 6, 2026 9:01 am

yea, my bad lol

MarkW
Reply to  MyUsernameReloaded
March 6, 2026 8:30 am

I love the way socialists want to ban everyone who disagrees with their desire for total control.

Reply to  MyUsernameReloaded
March 6, 2026 11:16 am

WELL DONE HEARTLAND 🙂

Bringing a modicum of climate realism to the EU and UK.

michel
Reply to  MyUsernameReloaded
March 6, 2026 11:47 am

Does it have any links to this site, or to the fossil fuel industry? If so, what are they? This is a genuine question, not sarcasm. I know of only one: a grant from Exxon totalling $700,000, paid to Heartland over a series of years and ending in 2005, but I know of none since then. I also know of no links between Heartland and this site, or between this site and any fossil fuel companies, and would be interested if you have any hard info on any.

Jeff Alberts
Reply to  michel
March 6, 2026 4:01 pm

But there are plenty of links to FF corps and research units, like CRU.

MarkW
Reply to  Jeff Alberts
March 6, 2026 6:19 pm

Please document said links. For once.

Sparta Nova 4
Reply to  MyUsernameReloaded
March 6, 2026 12:18 pm

There is no such thing as a “fossil fuel industry.”
Working together on issues is politics. Something wrong with politics?

My pet goldfish knows more climate science than anything anyone publishes in The Guardian.

Oh! Wait! I broke the rule: DON’T FEED THE TROLLS!

Redge
Reply to  MyUsernameReloaded
March 6, 2026 2:11 pm

I’ve just remembered, Keir Starmer urged America to vote for Kamala Harris in the last election.

starzmom
Reply to  Redge
March 6, 2026 5:36 pm

That is sufficient to discount anything else he says.

2hotel9
March 6, 2026 5:46 am

Idiots have been trying to control what I say for decades, and failing miserably.

DonK31
Reply to  2hotel9
March 6, 2026 6:00 am

We gave up doing the things that the British government ordered us to do 250 years ago. I see no reason to restart.

George Thompson
Reply to  DonK31
March 6, 2026 11:42 am

Yeah…what part of 1776 don’t those clowns understand?

2hotel9
Reply to  DonK31
March 7, 2026 10:26 am

In the immortal (meme) words of Forrest Gump, “In 1776 the British wanted to take our guns. We shot them.” 😉

George Thompson
Reply to  2hotel9
March 6, 2026 11:45 am

Ditto…but don’t discuss that with the wife, otherwise very true.

Nicholas Schroeder
March 6, 2026 7:14 am

Judging from the deafening silence in response to the hundreds of snail mails, foreign and domestic, sent out, I wonder if they are not being delivered.
There appears to be a sizable contingent at WUWT just fine with the USPS “filtering” my unqualified misinformation.

Sparta Nova 4
Reply to  Nicholas Schroeder
March 6, 2026 12:23 pm

Wrong.
I will defend to the death your Constitutional right to prove you are an idiot.

You “know” they are not delivered because of the correlation with “deafening silence?”
Maybe the recipients did not think you were worth the powder.
How does this correlation prove the USPS is “filtering” your mail?

KevinM
Reply to  Nicholas Schroeder
March 6, 2026 4:12 pm

Who sent what to whom?

March 6, 2026 7:26 am

“the new generation of “online safety” laws”

Has anyone seen the latest out of California? They’re requiring operating system “providers” to add age verification. How’s that going to work out? And where’s the ACLU on this one?

MarkW
March 6, 2026 8:10 am

“to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide.”

Wouldn’t that require that children be protected from seeing any climate alarmist sites?

Sparta Nova 4
Reply to  MarkW
March 6, 2026 12:23 pm

100+

2hotel9
Reply to  MarkW
March 7, 2026 10:29 am

And 99% of popular “entertainment” on TV, cable, satellite and print. I, for one, am starting to see a LOT of up side to these ideas.

MCourtney
March 6, 2026 11:23 am

Of course the precedent was set in the reverse; USA trying to control the British media.

It was a friend of a paedophile who sued the BBC for a fortune because he didn’t approve of an edit in a Panorama programme that never aired in the USA.
But the President’s precedent was set: it was on the internet, so ‘our laws apply’.

Following that logic from the head of state, Ofcom are going to win this case, easily.

MarkW
Reply to  MCourtney
March 6, 2026 6:23 pm

Suing for libel is the logical equivalent of censorship?
Are you really that desperate?

2hotel9
Reply to  MarkW
March 7, 2026 10:33 am

Desperate. Stupid. Moronic. They’re all the same for leftards.

2hotel9
Reply to  MCourtney
March 7, 2026 10:32 am

And another contender for Peak Lie Spewer toddles in to vomit more stupidity. And I saw Panorama’s anti-American propaganda right here in the US of A. Media is global, you moron.

March 6, 2026 12:13 pm

It’s obvious that this cannot work. The only possible remedy for Ofcom is to erect the Great British Firewall and try to isolate the UK from the Internet. They cannot control what is on the Internet; the most they could conceivably do is control what British people can see.

Although, as Iran and others have found, in the satellite era doing this is a complicated affair. Even turning off all the routers and closing access to the Internet from all your ISPs doesn’t really do it.

0perator
March 6, 2026 2:38 pm

The Islamic republic of Britain can jog right off. Bunch of wanker cockwombles.

2hotel9
Reply to  0perator
March 7, 2026 10:35 am

Cockwomble! I haven’t seen that in a coon’s age.

kramer
March 6, 2026 3:38 pm

This isn’t about the safety of children.

It’s about leveraging concerns about children in order to pass laws that will make it easier to identify adults who do not possess the correct political views, or who are saying factual things that are detrimental to the global cause of socialism.

2hotel9
Reply to  kramer
March 7, 2026 10:36 am

“Won’t somebody please think of the children!” Helen Lovejoy.

KevinM
March 6, 2026 3:50 pm

The ACLU is (occasionally) a good example of how truth and principles can survive the onslaught of academic nonsense.