Tags: ai


Thursday, April 23rd, 2026

It’s Not AI. It’s FOMOnetization.

FOMO is a feeling. But it’s also a business model—and increasingly, one of the more successful ones. Fear, in general, makes people much easier to separate from their money. It’s perfectly suited to this moment of ubiquitous grift, where everything feels like a lottery ticket or a multi-level marketing scheme.

It’s even more perfectly suited for “the age of AI,” which squeezes economic FOMO from both sides. AI could make you wildly rich (the first person to start a billion-dollar company with zero employees!) or leave you hopelessly destitute (part of the looming “permanent underclass”). Which one do you want to be? Smash that like button, sign up for my online course, and use my new AI-powered business platform!

Summary punishment

In the latest issue of Matthias’s excellent Own Your Web series, he describes the recent betrayal by Google:

The search engine no longer says “here, go read what this person wrote.” It now says “here, I’ve already read it for you.” The contract is broken.

He’s absolutely right.

But…

Have you ever clicked on a result from a search engine? Unless you’re lucky enough to land on a nice personal website, you’re more than likely to be confronted with pop-ups asking to allow tracking, or a desperate plea to subscribe to a newsletter, or just rubbish ads, all accompanied by slow page loading somewhere in the mix.

Don’t get me wrong. I’m not saying that what Google is doing is okay. But let’s not pretend that everything indexed by Google is just fine and dandy for people to visit.

And of course the main reason why websites are so terrible is because they’ve tied their business model to heaps of behavioral advertising driven by invasive tracking courtesy of …Google.

This reminds me of AMP. Remember Google AMP? It was a terrible solution to a real problem. Web pages were (and still are) bloated and slow. The correct solution would be to encourage people to fix that, but instead Google mandated a proprietary format for your content that had to be hosted on their servers.

AMP was a disaster, both in practical terms and in the reputational damage it did to Google’s developer relations.

Now they’re doing it again, powerwashing away any goodwill they ever had with site owners. This time Google doesn’t even send search engine traffic to the websites that host the very ads Google encouraged people to put on every page.

It’s almost as if Google is a company so large and with so many competing interests that it now suffers from an incurable split personality disorder.

Personally I think they’re missing a trick. They should be using “AI” summaries as a stick.

If your site is slow, or filled with user-hostile annoyances, then it should be cockblocked by a hallucinated summary. But a nice fast respectful website? Send the traffic their way! Everyone wins—users, site owners, Google, the World Wide Web.

Could you imagine how quickly this would revolutionise the world of search engine optimisation? They’ve always told us that we should make websites for humans in order to get good Google juice. This would be a way of making it come true, without any of the over-engineered woefulness of AMP.

It’ll never happen of course. But I can dream.

Tuesday, April 21st, 2026

Expansion artifacts || Matt Ström-Awn, designer-leader

Compression made the information age possible by stripping things down to fit the pipes. Expansion made the AI age possible by blowing data back up again. Both operations leave marks; we’ve learned to spot compression artifacts, but we’ve only just begun to reckon with expansion artifacts. Until we do, there’s a lot of risk to manage.

Thursday, April 16th, 2026

Threat models

People talk about the effectiveness (or lack thereof) of large language models as though all tasks are comparable. But it strikes me that there are three broad categories of work that large language models are applied to:

  1. Compression.
  2. Transformation.
  3. Expansion.

Compression is when you feed a large language model something big that you want to make small. Summarise this book. Give me the gist of this meeting. Large language models are generally pretty good at this, which makes sense given that they themselves are kind of like compressed artifacts.

Transformation is when large language models convert from one format into another. Turn this audio into text. Turn this jumble of data into structured JSON. A large language model can handle these tasks pretty well. There’ll probably be a few errors, so make sure that’s not a deal-breaker.
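That caveat about errors suggests a simple guard: check the structure of a transformation’s output before trusting it. Here’s a minimal sketch in Python; the `raw_reply` string is a stand-in for a model’s answer, not output from any real model, and `parse_structured_reply` is a hypothetical helper name:

```python
import json

def parse_structured_reply(raw_reply: str, required_keys: set) -> dict:
    """Validate that a 'turn this into JSON' reply is actually usable.

    Raises ValueError if the reply isn't valid JSON or is missing
    fields, so a botched transformation fails loudly, not silently.
    """
    try:
        data = json.loads(raw_reply)
    except json.JSONDecodeError as exc:
        raise ValueError(f"reply is not valid JSON: {exc}") from exc
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"reply is missing fields: {sorted(missing)}")
    return data

# A plausible reply to "turn this jumble of data into structured JSON":
raw_reply = '{"name": "Ada Lovelace", "born": 1815}'
record = parse_structured_reply(raw_reply, {"name", "born"})
```

The point is only that malformed output gets caught at the boundary, which is what makes the occasional error tolerable rather than a deal-breaker.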

Expansion is when you give a large language model a prompt to generate something from scratch. An image. A presentation. An email. A poem. This is where slop lives. The output inevitably betrays its origins, glistening with a sheen of mediocrity.

Laurie spotted this three-way split a while back:

Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

I hope that when the bubble finally bursts, we’ll see the surviving large language models put to work on the first two categories. The boring stuff. The work that’s tedious for humans.

But tedious is as tedious does. Something I consider drudgery might be the very thing that gives you life. Like Giles says:

I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession.

The big exception seems to be programming. Apparently plenty of coders who never previously expressed an interest in management are now happily hanging up their coding spurs in favour of being overseers of non-human workers.

It’s a reasonable outlook. It could even be considered a user-centred approach. Users don’t care about the elegance of your code; they care about accomplishing their tasks.

Programming is something of an exception to the efficacy of large language models in general. Instead of relying on the subjectivity of painting, poetry, or prose, programming can be objectively tested. Throw enough money at the worst people in the world and they’ll give you tokens you can use to get the machines to test their own output. So you can get a large language model to create something reasonably good from scratch as long as that something is code.
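That objectivity can be made concrete: unlike a poem, a generated function either passes its tests or it doesn’t. Here’s a minimal sketch of that feedback loop in Python; the candidate source string stands in for model output, and `run_candidate` is a hypothetical name, not any real library’s API:

```python
def run_candidate(source: str, tests: list, func_name: str) -> bool:
    """Execute candidate code and check it against known input/output pairs.

    `tests` is a list of (args_tuple, expected_result) pairs. Any crash,
    wrong answer, or missing function counts as a failed candidate.
    """
    namespace = {}
    try:
        exec(source, namespace)  # run the generated code in a fresh namespace
        func = namespace[func_name]
        return all(func(*args) == expected for args, expected in tests)
    except Exception:
        return False

# A candidate a model might emit for "write a function that doubles a number":
candidate = "def double(n):\n    return n * 2\n"
passed = run_candidate(candidate, [((2,), 4), ((0,), 0)], "double")
```

(In real use you’d want sandboxing rather than a bare `exec`, but the shape of the loop is the same: generate, run, test, keep or discard.)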

If you had asked me about the threat model of large language models two years ago, I probably would’ve been worried for artists, writers, and musicians. I thought that software had enough inherent complexity to be relatively safe.

Now my opinion has completely reversed. Software is almost certainly the killer app for large language models.

I think the artists, writers, and musicians will be okay, or at least as okay as they ever were. It turns out that humans like things made by other humans.

And y’know what? If I had to choose which endeavour I’d rather see automated away—programming or art—it’s no competition.

Don’t get me wrong—it would be nice if everyone got paid for doing what they enjoy. It’s just that I’m okay with software engineers not being at the front of that line.

I remember when I first started getting paid money to make websites. “Really?” I thought, “Someone is willing to pay me to do something I’d do anyway?” I kept waiting for the jig to be up. Instead I saw my profession grow and expand.

Perhaps there’s a long-overdue compression happening.

Or maybe it’s more like a transformation.

Tuesday, April 14th, 2026

Design and Engineering, As One · Matthias Ott

A thoughtful piece by Matthias that’s a must-read for both designers and developers.

No-stack web development – David Bushell – Web Dev (UK)

A stack is also technical debt, non-transferable knowledge, accelerated obsolescence, and vendor lock-in. That means fragility and overall unnecessary complication. Popular stacks inevitably turn into cargo cults that build in spite of the web, not for it.

The web platform does not require build toolchains. Always default to, and regress to, the fundamentals of CSS, HTML, and JavaScript. Those core standards are the web stack.

Thursday, April 9th, 2026

The AI Great Leap Forward

In 1958, Mao ordered every village in China to produce steel. Farmers melted down their cooking pots in backyard furnaces and reported spectacular numbers. The steel was useless. The crops rotted. Thirty million people starved.

In 2026, every other company is issuing top-down mandates on AI transformation.

Same energy.

Tuesday, April 7th, 2026

AI Might Be Our Best Shot At Taking Back The Open Web | Techdirt

Not sure I buy the argument here, though I do very much look forward to local language models getting better so we can ditch the predatory pedlars of today’s slop. But this trip down memory lane to the early web of the 1990s could’ve been describing my own experience:

But the thing I do remember was the first time I came across Derek Powazek’s Fray online magazine. It was the first time I had seen a website look beautiful. This was without CSS and without Javascript. I still remember quite clearly an “issue” of Fray that used frames to create some kind of “doors” you could slide open to reveal an article inside.

Fray was what made me want to make websites:

I distinctly remember sites like prehensile tales, 0sil8 and the inimitable Fray triggering something in my brain that made me realise what it was I wanted to do with my life.

Sunday, April 5th, 2026

I used AI. It worked. I hated it.: Taggart Tech

There’s a fundamental problem with these tools beyond the capacity of any deployment strategy to solve: the tool requires expertise to validate, but its use diminishes expertise and stunts its growth. How does one become an expert? There are no shortcuts; there is only continuous hard work and dedication. I was once told of writing: great writers learn how to break the rules in new and ingenious ways by first learning the rules.

But how is a new developer meant to learn the rules if their day-to-day work is nothing but the babysitting of models? How will they gain the hard-won experience that allows a human in the loop to be a useful safeguard?

These models alter cognition in ways deleterious to human prosperity. In other words, for as much output as they provide, they take something important from us.

Thursday, March 26th, 2026

The End : Focal Curve

I can’t remember the last time a blog post resonated with me this much.

Craig’s criteria on his job search:

  • One: fuck offices
  • Two: fuck AI
  • Three: fuck React

And his conclusion:

Fuck work

Monday, March 23rd, 2026

It feels like all my peers are experiencing Deep Blue and having to choose their future career path:

expert in a dying field

or

collaborator in a fascist project.

Saturday, March 21st, 2026

Flood fill vs. the magic circle

Eleven years ago, I wrote:

Sometimes I consider the explosive growth of computation and think that strong AI is a near-term inevitability.

Then I remember printers.

That was just a brainfart, but Robin tackles it seriously in his thoughtful essay.

A pleasing image: if indeed AI automation does not flood fill the physical world, it will be because the humble paper jam stood in its way.

Software cannot, in fact, eat this world. Software can reflect it; encroach upon it; more than anything, distract us from it. But the real physical world is indigestible.

Wednesday, March 18th, 2026

Working with agents doesn’t feel like flow — Bill de hÓra

Related to Matt’s thoughts:

…working with agents feels much less like classic deep work, and much more like playing a game. Not to say the work is frivolous—it’s just because it feels like I’m in a game loop.

Flow, at least in the usual sense for me, feels smooth and continuous. The work and your attention starts to line up so cleanly that the experience becomes frictionless. You disappear into the work and meld with it. One notable aspect of flow has been that I lose track of time. Working with agents, on the other hand, is not like that at all. It’s highly engaging, but in a more jagged, reactive way. I’m focused, but not settled. I’m absorbed, but not merged with the task. I’m paying close attention the whole time, but the attention is dynamic and tactical rather than continuous. I don’t lose track of time at all.

The Last Quiet Thing | Terry Godier

Most of your screen time isn’t leisure. It isn’t addiction. It isn’t even a choice.

It’s maintenance.

Tuesday, March 17th, 2026

A Fisherman Of The Inland Sea by Ursula K. Le Guin

When I was summing up my reading habits in 2022 I said:

I think the lesson this year is: you can’t go wrong with Octavia E. Butler or Ursula K. Le Guin.

I stand by that. But maybe I’d recommend some Ursula K. Le Guin books more than others.

A Fisherman Of The Inland Sea is a good collection of short stories. But it’s not a great collection of short stories. If you’re looking for a great collection of short stories, read The Unreal and the Real.

When it comes to Ursula K. Le Guin, the standard is always going to be high so even when the stories aren’t her best, they’re still better than the output of most other sci-fi writers.

My slight disappointment with A Fisherman Of The Inland Sea isn’t so much with the stories themselves but with the collection.

To begin with, there are four unconnected short stories. That’s fine. It’s a short story collection after all.

But then after that there are three interconnected short stories from the Hainish cycle. They’re the best part of this book. That just makes the preceding stories look like filler.

If those three stories had been released as a little collection, it would be a miniature classic. As it stands, you get more of a mixed bag.

But still, it’s worth reading this collection for those three stories alone.

Buy this book

Gas Town and Bullet Hell – Petafloptimism

Matt has some smart reckons on the relationship between time and technology:

The factory bell, the railway timetable, the telegraph wire, the always-on smartphone — each imposed a new temporal discipline, each produced its own characteristic form of exhaustion, and each was eventually (partially, imperfectly) domesticated through a combination of regulation, design, and collective action.

Monday, March 16th, 2026

Stop Sloppypasta: Don’t paste raw LLM output at people

slop·py·pas·ta n. Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

Thursday, March 12th, 2026

Generative AI vegetarianism | Sean Boots

Generative AI vegetarianism, simply put, is avoiding generative AI tools as much as you can in your day-to-day life.

Wednesday, March 11th, 2026

I work, I think? - Annotated

This is about something that’s already happening, that doesn’t show up in employment figures: the quiet destruction of the feedback loop that turns inexperienced people into competent ones. The process by which you get something wrong, feel it, understand why, and become slightly less wrong next time. It’s unglamorous and it’s slow and it’s the only way it’s ever worked.

AI short-circuits that learning completely. Not maliciously. Just structurally. When you can generate something that looks right without doing the thinking, you will (most people, most people being me, will, most of the time, under pressure, with a deadline) and the muscle that thinking would have built never develops.

your ai slop bores me

Mutually assured Mechanical Turk.

This is genuinely much more interesting and wholesome than a chat interface powered by a large language model.