Tim Dobbs

Buy the book #1: Four Futures

The first of a lot of chances to revisit some good reads.

Posting seems a good way to make myself (re)read some good books. I’m starting with Peter Frase’s Four Futures (2016). In it, Frase (Jacobin magazine editor and ‘lapsed sociologist’) conducts an illuminating thought experiment: how might automation and climate crisis play out in the context of an already extremely unequal world?

(Note the phrase ‘thought experiment’: this isn’t about prediction; the core premise of the first two futures is that - rather like ‘replicators’ on the USS Enterprise - automation will mean abundance. But abundance on whose terms?)

Frase’s first scenario might feel familiar to readers of Iain M Banks’s Culture novels or viewers of Black Mirror. In a ‘communist’ world of equality and abundance, what will succeed work and money as fuel for purpose and meaning? Time, in a phrase that’s been stuck in my head ever since, to “let a hundred status hierarchies bloom.”

This future won’t much resemble actual communist societies of the twentieth century or distant past; it will look much more like the post-/late-capitalist world of Nosedive and the rest (not to mention our current reality of attention, influence and ‘sovereign individuals’ / rampant egomaniacs). The basic human impulses - around status, sex, violence, art, religion and control - will abide (just as today’s landscape of egomaniacs, patronage and terror recalls everything from the Victorians to the Vikings).

Scenario two is also eerily resonant: a future where an elite controls access to technology and content; taxes people with exposure to advertising; and creates human employment to serve its SaaS empire? That feels a lot like (Fifteen Million Merits and) our enshittified present.

In the second half of the book, Frase switches one variable: in a world of scarcity, should we buckle up for Mad Max, or settle for something more like Silo? (Except his final, worst-case scenario is more Mad Max-meets-Elysium-or-Fallout: survival of the climate crisis is ruthlessly fought for and defended by elites, leaving the rest of us to fight among ourselves.)

One ray of hope that we might arrive at future three - a ramshackle but roughly egalitarian world of limited resources - comes via a moment in a different book, Kim Stanley Robinson’s Ministry for the Future. One narrative fragment, set amid a city in crisis, recalls: “…people hung together with those they knew, for sure, to go get water. Possibly to protect each other from the crazies, if someone lost it or whatever. But that seldom happened. We were so afraid that we behaved well, that was how bad it was… fuck every idiot who thinks [there’s no such thing as society]. I can take them to a place where they will eat those words or die of thirst. Because when the taps run dry, society becomes very real.”

Tim Dobbs

Reminder: the progress myth

Always good to remind ourselves not to slip into the casual view of evolution and history that assumes the four-steps-forward-two-back march towards something better. Technology and medicine improve; but we’re still likely to use new knowledge to bomb the hospitals we build. There’s nothing inherent in technology or time that promises progress: it’s only possible if we make it - and if we keep it.

Tim Dobbs

Rocket Science?

Right stuff, wrong move. On trust and machines

Got to love your local libraries. I stumbled on a digital version of This is the Behavioral Scientist (2018), which includes an interesting article by Jason Collins about the balance of power when humans and machines make big decisions together. His focus is the early US space programme (Project Mercury), as a jumping-off point for a wider debate around trust and automation.

On the one hand, as Collins points out, automation takes time to become accepted: we now operate our own lifts (and still prefer pressing the buttons ourselves); we scan our own groceries too (which felt very alien at first). And, just like those early test pilots, we often need the illusion of control (a need met in other contexts - as Rory Sutherland would note - by ‘placebo’ interactions, or by keeping travellers waiting for their airport luggage walking somewhere - anywhere - to reduce frustration).

Given enough time and smart design, where are the limits to that trust? David Rock’s SCARF model isn’t a bad place to start. It’s clear how Status (test pilot ego in this case) caused a few near-disasters for Project Mercury. You can see how Fairness scuppered (rightly so) the attempt to mark Covid-era students algorithmically.

But perhaps the secret weapon of this new era of automation lies in Relatedness: the fact that we will be able to part-build, customise, and establish some kind of relationship with (apparently) intelligent machines. What happens when we trust what we built too much, because it’s ours?

Jason’s blog is here and - if it’s not available in your library - you can track down copies of This is the Behavioral Scientist on their website. Post image is a quick nod to that Right Stuff still via Midjourney.

Tim Dobbs

Captains and co-pilots

Words matter, and I wanted to pause a minute to observe what we might have lost already in the narrative framing around AI (by which I mean transformer assistants in particular). Let’s start with that term: assistant.

There’s an implied desire for convenience on our part and an implied subservience on its. Neither may hold over time, so of course the term distorts (and, yes, it’s necessarily imperfect, like any piece of language).

But - as with a co-pilot - might there be potential in these ideas to actually drive real safety for users and humans? What if there were an Asimovian-Law-style principle implicit in the word: one we backed up with standards and regulation and social codes? (You must protect my interests as an individual, a human and a member of society. To do so you must have a stronger understanding of those interests - where they conflict, and the foibles of being human. At minimum, you must flag the risks to me and your own uncertainties. Sometimes the most helpful thing you can do is not respond, or to say no... and so on.)

Maybe we’ll have to go all-in on the piloting narrative and emulate checklist safety culture and its graver sense of risk. I’m still the captain, and our first responsibility is to all the other people behind us sharing this journey… Or we put transformers to perhaps their best use, as in the aircraft cockpit - a necessary check and balance on our own actions.

“Hallucinations” feels much more troubling. Transformers are always hallucinating in some sense; some of their hallucinations are useful. That’s it. The implication of the label - that the rest of their output is real, trustworthy or factual - is the problem.

Maybe we’ll wrestle transformers out of this narrative altogether, and place them in a line after punchcards and coding language as simply ‘how we talk to machines nowadays’.

Tim Dobbs

Penicillin and asbestos.

An approach to the ethics conversation.

A modest proposal: that we stop dividing ourselves into evangelists and sceptics when it comes to AI, and instead always treat it as both penicillin and asbestos. We have to behave as if we’ve just discovered the most powerful beneficial force and something that might prove incredibly destructive (even when we use it with the best of intentions). Because we have.

When it comes to the whole ethical challenge, we need to consider another useful duo: the bright spots and the extremes. What’s happening on the margins when it comes to both misuse and positive applications of AI? They’ll be our guide to what the mainstream looks like in the future.

In terms of bright spots and ethics: where is society taking regulation into its own hands? While we wait for governments and supranational bodies to regulate effectively (ahem) or for restraint from tech giants (double ahem), what are we doing as users and communities to define suitable boundaries?
