Captains and co-pilots

Words matter, and I wanted to pause for a minute to observe what we might already have lost in the narrative framing around AI (by which I mean transformer-based assistants in particular). Let’s start with that term: assistant.

There’s an implied desire for convenience on our part and an implied subservience on its. Neither of those things may hold over time, so of course the term is distorting (and, yes, necessarily imperfect, like any piece of language).

But - as with a co-pilot - might there be potential in these ideas to actually drive real safety for users and humans? What if there were an Asimovian-Law-style principle implicit in the word: one we backed up with standards, regulation and social codes? (You must protect my interests as an individual, a human and a member of society. To do so, you must have a strong understanding of those interests - where they conflict - and of the foibles of being human. At a minimum, you must flag the risks to me, and your own uncertainties. Sometimes the most helpful thing you can do is not to respond, or to say no... and so on.)

Maybe we’ll have to go all-in on the piloting narrative and emulate aviation’s checklist safety culture and its graver sense of risk. I’m still the captain, and our first responsibility is to all the other people behind us sharing this journey… Or we put transformers to perhaps their best use, as in the aircraft cockpit - a necessary check and balance on our own actions.

“Hallucinations” feels much more troubling. Transformers are always hallucinating in some sense; some of their hallucinations happen to be useful. That’s it. The problem is what the label implies about the rest of their output: that it is real, trustworthy or factual.

Maybe we’ll wrestle transformers out of this narrative altogether, and place them in a line after punchcards and coding languages as simply ‘how we talk to machines nowadays’.
