Rocket Science?

Got to love your local libraries. I stumbled on a digital version of This is the Behavioral Scientist (2018), which includes an interesting article by Jason Collins about the balance of power when humans and machines make big decisions together. He uses the early US space programme (Project Mercury) as a jumping-off point for a wider debate around trust and automation.

As Collins points out, automation takes a little time to become accepted: we operate our own lifts now (though we still like pressing the buttons ourselves), and we scan our own groceries too (which felt very alien at first). And, just like those early test pilots, we often need the illusion of control (a need met in other contexts, as Rory Sutherland would note, by ‘placebo’ interactions, or by keeping travellers who are waiting for their luggage at an airport walking somewhere - anywhere - to reduce frustration).

Given enough time and smart design, where are the limits to that trust? David Rock’s SCARF model isn’t a bad place to start. It’s clear how Status (test-pilot ego, in this case) caused a few near-disasters for Project Mercury. And you can see how Fairness scuppered (rightly so) the attempt to mark Covid-era students algorithmically.

But perhaps the secret weapon of this new era of automation lies in Relatedness: the fact that we will be able to part-build, customise, and establish some kind of relationship with (apparently) intelligent machines. What happens when we trust what we built too much, because it’s ours?

Jason’s blog is here and - if it’s not available in your library - you can track down copies of This is the Behavioral Scientist on their website. The post image is a quick nod to that Right Stuff still, via Midjourney.
