tech / language / power / AI / media

The style section of power

Euphemism isn't imprecision. It's the whole point.

There’s a genre of sentence I’ve been collecting. You’ve seen it. You’ve read it in a press release, a shareholder letter, a university statement, a parliamentary amendment, and thought: something is wrong with this sentence, but I can’t quite say what.

Here’s one: Google, in November 2025, settled a $500 million class-action suit over YouTube data used to train Gemini AI. Their statement described this as the resolution of concerns around “collaborative data partnerships.” Not unauthorized extraction. Not taking things that weren’t theirs. Collaborative. As if the $500 million were just a fee for a service everyone agreed to.

Here’s another: Amazon, in September 2024, announced a “pivot to upskilling” that reclassified 15,000 warehouse automation layoffs as “career transition programs.” The internal memos said “efficiency optimization.” The workers said they lost their jobs. One of those descriptions is accurate.

And here’s the one that’s been sitting with me: in February, the UK dropped “AI content theft” language from an Online Safety Bill amendment and replaced it with “permissible data flows for algorithmic advancement.” DeepMind lobbied for this. Parliament obliged. The words disappeared from the law and the thing the words described continued, legally, because the law no longer had a name for it.

This is what I mean when I say the style section of political journalism covers for power. The aesthetics of responsible governance, of measured corporate communication, of careful institutional language, all of it doing the same work: making it harder to say what actually happened.

the vocabulary of permission

I posted something a few days ago about “inevitability” as a word, specifically about universities using it to explain compliance decisions they’d already made. The framing I was reaching for is this: inevitability is not a forecast. It’s a permission slip. When an institution tells you something is inevitable, they are not reporting on the future. They are telling you the argument is over, that questions are now merely emotional responses to facts, and that your job is adjustment rather than objection.

Tesla called its November 2025 elimination of 4,000 jobs “headcount optimization for autonomy scaling” and described the whole thing, in Musk’s own words, as an “inevitable robotics transition.” Microsoft cut 10,000 jobs in January 2025, and the earnings call described it as “strategic alignment for cloud-AI synergy.” The FTC was already probing the company at the time. The word “inevitable” appears nowhere in that transcript, but the logic is identical: this is the direction of history, and the direction of history cannot be litigated.

The counterargument I keep running into goes something like: this language isn’t obfuscation, it’s precision. Complex technical and legal domains require careful vocabulary. When AI companies talk about “inevitability” they’re reflecting genuine technological momentum, real forecasts, peer-reviewed projections. Nature published research in March 2025 predicting 10x compute scaling by 2027 regardless of policy. The trajectory is real. The language is just describing it.

Fine. Except.

The Business Council of Australia lobbied to make AI training on copyrighted work legal. When the lobbying effort leaked, they dropped it. The Australian Competition and Consumer Commission fined them $15 million in December 2024 for misleading lobbying on AI copyright reforms, a campaign in which they’d used “innovation enablers” instead of “copyright circumvention.” They knew what they were doing. They just wanted the law to say they weren’t doing it. That’s not precision. That’s the opposite of precision. That’s vocabulary as legal infrastructure: words chosen specifically because they don’t trigger the protections that accurate words would.

The Moore’s Law argument would be more persuasive if the companies making it weren’t simultaneously lobbying to change the definitions of theft.

$44 billion and the word “misled”

There’s a particular softness that legal language applies to rich people’s crimes. I keep thinking about the Twitter acquisition: $44 billion, shareholders misled, the deal closed anyway. “Misled” is doing a lot of work in that sentence. It implies something almost accidental, a gap between what was communicated and what was true, a failure of information rather than a decision to lie. The accurate word, in most uses of “misled” in financial journalism, is fraud. The difference between those two words is not semantic. It’s the difference between a settlement and a conviction, between a fine and a prison sentence, between a story that ends with “the company paid $X million” and one that ends differently.

Meta’s Q2 2025 SEC filings used “regulatory uncertainties” to describe $2.8 billion in undisclosed costs from AI training data litigation. Not lawsuits. Not the cost of having taken things that belonged to other people. Uncertainties, as if the problem were epistemic rather than legal, as if the question were still open rather than already in court.

OpenAI’s September 2025 transparency report described their GPT-5 training data, over a trillion tokens, 70% web-scraped, as “publicly available knowledge for broad societal benefit.” This is the philanthropic register applied to a for-profit company’s proprietary asset. The knowledge was publicly available in the sense that it was accessible. It was not publicly available in the sense that it was free for anyone to take and turn into a product. Those are different things. The language collapses the difference on purpose.

what the style section does

Political journalism has a style section. It’s not called that. It’s called analysis, or context, or the explainer format, or the long read. But its function is aesthetic: to take a thing that happened and render it in language that makes the thing feel considered, complex, not quite what you thought it was. To slow down the reader’s moral reflex. To replace the quick accurate read (they lied, they stole, they fired people and called it innovation) with a longer, more textured read that by the end has you wondering whether the quick read was too simple.

This isn’t always wrong. Sometimes the quick read is too simple.

The tell is what the complexity is in service of. When EU AI Act enforcement started in January 2026, requiring high-risk AI systems to disclose training data sources, Anthropic and others lobbied using “compliance burdens stifle innovation” as the frame. They got a three-month grace period. The complexity, in that case, was in service of delay. The careful language about innovation and regulatory uncertainty produced a concrete outcome: three more months before anyone had to say where they got their data.

The Pentagon rebranded its AI drone surveillance program as “autonomous capability enhancement” in March 2025 budget requests. Five hundred units. Privacy violation claims already filed. The language didn’t change the program. It changed what questions the budget committee would ask about it.

California enacted the AI Data Rights Act on March 5th, mandating opt-out rights for training data. Opponents called it a threat to “U.S. competitiveness.” Not a protection for people whose work was taken. A threat to competitiveness. The style section of power takes your right and reframes it as someone else’s obstacle.

I keep coming back to the same question. Not “why do powerful institutions use euphemism,” because that’s obvious: it costs them nothing and protects them from everything. The question I’m actually sitting with is what happens to the people whose job is to translate this language back into plain speech, and why there is so little appetite for that translation in the places that could make it matter.

The sentence that described Google’s $500 million settlement as a “collaborative data partnership” resolution was written by a person. Probably a smart person. Possibly a person who went into communications because they believed in the power of clear language. Somewhere between that belief and that sentence, something happened.

The something is that clarity is only valued when it serves the institution. When it doesn’t, what gets valued is the appearance of clarity: sentences that feel transparent, that have the cadence of honest speech, that use words like “partnership” and “transition” and “alignment” because those words sound like things people agreed to.

They’re not describing agreements. They’re manufacturing them, retroactively, in the only place it’s still easy to do that.

In language.