Tim Dobbs

Reminder: the progress myth

Always good to remind ourselves not to slip into the casual view of evolution and history that assumes the four-steps-forward-two-back march towards something better. Technology and medicine improve; but we’re still likely to use new knowledge to bomb the hospitals we build. There’s nothing inherent in technology or time that promises progress: it’s only possible if we make it - and if we keep it.

Tim Dobbs

Rocket Science?

Right stuff, wrong move. On trust and machines

Got to love your local libraries. I stumbled on a digital version of This is the Behavioral Scientist (2018), which includes an interesting article by Jason Collins about the balance of power when humans and machines make big decisions together. He uses the early US space programme (Project Mercury) as a jumping-off point for a wider debate around trust and automation.

As Collins points out, automation takes a little time to become accepted: now we operate our own lifts (though we still prefer pressing the buttons ourselves), and we scan our own groceries too (which felt very alien at first). And, just like those early test pilots, we often need the illusion of control - a need met in other contexts, as Rory Sutherland would note, by ‘placebo’ interactions, or by keeping travellers waiting for their luggage at an airport walking somewhere - anywhere - to reduce frustration.

Given enough time and smart design, where are the limits to that trust? David Rock’s SCARF model isn’t a bad place to start. It’s clear how Status (test pilot ego in this case) caused a few near-disasters for Project Mercury. You can see how Fairness scuppered (rightly so) the attempt to mark Covid-era students algorithmically.

But perhaps the secret weapon of this new era of automation lies in Relatedness: the fact that we will be able to part-build, customise, and establish some kind of relationship with (apparently) intelligent machines. What happens when we trust what we built too much, because it’s ours?

Jason’s blog is here and - if it’s not available in your library - you can track down copies of This is the Behavioral Scientist on their website. The post image is a quick nod to that Right Stuff still, via Midjourney.

Tim Dobbs

Captains and co-pilots

Words matter, and I wanted to pause a minute to observe what we might already have lost in the narrative framing around AI (by which I mean transformer-based assistants in particular). Let’s start with that term: assistant.

There’s an implied desire for convenience on our part and an implied subservience on its. Neither of those things may hold over time, so of course the term is distorting (and, yes, necessarily imperfect, like any piece of language).

But - as with a co-pilot - might there be potential in these ideas to actually drive real safety for users and humans? What if there were an Asimovian-Law-style principle implicit in the word: one we backed up with standards and regulation and social codes? (You must protect my interests as an individual, a human and a member of society. To do so you must have a stronger understanding of those interests - where they conflict, and the foibles of being human. At minimum, you must flag the risks to me and your own uncertainties. Sometimes the most helpful thing you can do is not to respond, or to say no... and so on.)

Maybe we’ll have to go all-in on the piloting narrative and emulate aviation’s checklist safety culture and its graver sense of risk. I’m still the captain, and our first responsibility is to all the other people behind us sharing this journey… Or we put transformers to perhaps their best use, as in the aircraft cockpit - a necessary check and balance on our own actions.

“Hallucinations”, as a term, feels much more troubling. Transformers are always hallucinating in some sense. Some of their hallucinations are useful. That’s it. The problem is what this label implies about the rest of their output: that it is real, trustworthy or factual.

Maybe we’ll wrestle transformers out of this narrative altogether, and place them in a line after punch cards and coding languages as simply ‘how we talk to machines nowadays’.

Tim Dobbs

Penicillin and asbestos.

An approach to the ethics conversation.

By way of a modest proposal: that we stop dividing ourselves into evangelists and sceptics when it comes to AI, and try to always treat it as both penicillin and asbestos. We have to try to behave as if we’ve just discovered the most powerful beneficial force and something that might prove incredibly destructive (even when we use it with the best of intentions). Because we have.

When it comes to the whole ethical challenge, we need to consider another useful duo: the bright spots and the extremes. What’s happening on the margins when it comes to both misuse and positive applications of AI? They’ll be our guide to what the mainstream looks like in the future.

In terms of bright spots and ethics: where is society taking regulation into its own hands? While we wait for governments and supranational bodies to regulate effectively (ahem) or for restraint from tech giants (double ahem), what are we doing as users and communities to define suitable boundaries?

Tim Dobbs

Here comes the science-fiction part

The first of many posts about intelligent machines in science-fiction books and movies.

What sci-fi can tell us about the near-future is such a vast area and I can’t wait to dive in. For now, I just want to be a magpie and pick out a couple of shiny observations. The first is about monkeys and crows, the second is about religion.

The analogy that most readily comes to mind when considering machines and predictive text is the Infinite Monkey theorem (popularised if not conceived, according to some not very scientific digging, by Émile Borel and Arthur Eddington - but ultimately owing most of its popularity, I suspect, to Douglas Adams and then Brian Cox - not that one - et al). To recap: give enough monkeys enough time and typewriters and the works of Shakespeare must result at some point. It’s really a sort of joke about probability and infinity, but predictive generators throw the analogy into sharp relief. Remix the entire internet etc. enough times and you’ll get plausible answers to your questions (now) and something new and extraordinary (soon).
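To put a rough, hedged number on that probability-and-infinity joke (a back-of-the-envelope sketch of my own, assuming a 27-key typewriter - 26 letters plus a space - and a uniformly random typist), the chance of hitting even the 18-character string ‘to be or not to be’ in a single attempt is:

% Back-of-the-envelope sketch, not from the original post:
% 27 equally likely keys, 18 characters, independent key presses.
\[
  P = \left(\tfrac{1}{27}\right)^{18} \approx 1.7 \times 10^{-26},
  \qquad
  \mathbb{E}[\text{attempts}] = 27^{18} \approx 5.8 \times 10^{25}.
\]

Scaling that up from one famous phrase to the complete works is where the ‘infinite’ in the theorem does all the work.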

Happily, one of sci-fi author Adrian Tchaikovsky’s novels includes a far more potent metaphor for machine learning. Tchaikovsky imagines the supercharged evolution of a species of corvid - ultimately resulting in a kind of general intelligence (artificial - or actual intelligence? There’s the rub). He divides the business of processing into two distinct roles: memory and pattern recognition for one half of the collective brain; prediction and theorising for the other. When these combine - in a pair of birds like his characters Gothi and Gethli - you have interactions with a strong echo of ‘talking to’ an AI.

The mimicry of intelligence is uncanny; the question of true sentience fraught. But it’s useful, whenever we’re tempted to anthropomorphise, to think of a pair of chattering crows as often as we picture a human - or a roomful of monkeys. Helpful too to acknowledge all the human experience, and labour, that shaped those crows - whatever we make of its/their status as creatures.

And to round off, a second quick observation about sentience and limitation. We can get there by way of Tchaikovsky or China Miéville, since both authors feature human disciples of powerful AIs. So much doomsday speculation seems to bypass the role of people altogether, pitching all of humanity in a struggle against some rogue - and invariably sentient - machine. What seems rather more likely is that dependence on, and reverence for, the machines we create will recruit adherents and extremists - with no need, as an interim step, for sentience on the machine’s part, or for the loss of the ultimate ‘physical’ ability to control them on ours.
