In my last two segments, I explored the potential for our growing dependency on artificial intelligence to disrupt the role of interpersonal relationships in fostering personal accountability (Disappointment.AI) and to displace the contribution of small disciplines to the development of critical thinking (Hallucination.AI). In this third segment, I would like to explore how the abandonment of systems that reflect complicated societal tradeoffs could fundamentally weaken the means by which we relate to each other.
My chosen vehicle for doing so is the embattled field of autonomous driving, which has been struggling to get out of the starting blocks. A colleague recently made an astute observation about the path to adoption for self-driving cars. He argued that society will expect a much greater reduction in fatalities from autonomous driving, far below the running average of the analog system we have today, before it will accept such a wholesale change - the marginal benefit must first far exceed the marginal cost. Is this true? After all, 63% of U.S. adults say they do not want to ride in a driverless car. But what could account for this insistence on such a high hurdle for change?
My working thesis is that we intuitively understand that a shift to mandatory fully autonomous driving could irreversibly fray the relational ligature that holds our society together. Let me explain. Today, when we give a teenager car keys and a driver’s license, we are essentially entrusting them with sacred responsibility for human life, whether their own, their passengers’, or that of other road users and pedestrians. Isn’t this a terrifying prospect? Yet we do it millions of times every single day - we, and they, get on the road and interact with dozens, perhaps hundreds, of complete strangers entrusted to act responsibly with their wheeled weapons of destruction. We do this even though some of those strangers may be intoxicated beyond cognition, and despite understanding that unconscious, irrational impulses readily surge to the surface as we reflexively judge every driver whose definition of road courtesy fails to comport with ours: slow drivers, tailgaters, lane-cutters, turn-signal amnesiacs, you name it. Does this not strike you as slightly mad? How could such a system come to be?
The answer is artifactual intelligence. We have somehow, without computers, fabricated a rule system that – despite its bewildering complexity – provides the education, incentives, and disincentives to curb our worst instincts. The rules of property law, tort law (which governs what happens when I drive negligently), insurance law, traffic law, and criminal law come together to establish rules of engagement for this uncoordinated exercise of point-to-point human transportation. Society has collectively negotiated prescriptions to guide our every exercise of life-preserving responsibility – whenever you turn that ignition key, you subordinate destiny to these rules, which dictate whether you pay a fine, lose a license, pay a court judgment, or go to jail.
Importantly, all of these interlocking rules are built on a set of relational assumptions. One of the most fundamental of these is that if you do a bad thing, you are responsible, to another human, or to society, for the bad thing you have done. This relational premise is rooted deep within all of us. We are at our core a relational species. We expect others to have certain obligations to us. We expect to have obligations to them. Laws are civilization’s expression of these obligations, and where laws are silent, expectations of morality, etiquette, protocol, custom, and habit weave our lives together into a thrumming hive. This intrinsic relational instinct is the source of our outrage at injustice (a reaction to one human mistreating another in a way outside the relational compact) and of our admiration for heroism (a willingness to preserve the relational ties of others at the expense of our own interests, often as an expression of our own relational commitment to the community). Relational instinct is a defining characteristic of our humanity.
Our essentially relational nature is, to me, a key reason why we resist fully mandatory autonomous driving:
We understand, even without having it articulated to us, that in a world where such an arrangement displaces the messy, chaotic artifactual system we have today, the allocation of blame when something goes wrong will not depend on human judgment, but will fall to the cold logic of a computational algorithm.
When the AI decides that the autonomous vehicle your child is riding in should stay the course and slam into the rear end of a truck that has slipped on black ice, rather than swerve and endanger the lives of the young family driving in the next lane, justice may be swift - but it may not be very satisfactory.
A responsibility allocation matrix designed by math geniuses will categorize the incident according to the ruleset that deterministically selected human A to live and human B to die - and based on that categorization, a check will arrive in the mail, with the monetary amount for the loss of your child determined by a payout table approved by the administrators of the vehicle autonomy board. If you depend on Medicare, rely on the VA, receive Social Security, or have visited the DMV - or if you have helped someone who does - you know I am not far from the truth.
If this gives you pause, let me offer three explanations as to why.
Catharsis Externality. In the clunky artifactual system based on laws and incentives, I have someone to blame if things go wrong. The rules are set up for me to get mad at someone - for me to ask the power of the state to intervene on my behalf and hold them accountable. A jury of my peers will hear the evidence against the person who wronged me. They will judge them to be guilty, and my pain, while not extinguished, will take one step toward closure. But in the elegant and efficient artificial system, where does this anger go? Can I direct it at the developers who designed the system and instructed it to prefer one life over another? Do I shout at the check that came in the mail? Capturing this “catharsis externality” may be one of the strongest rationales for requiring any such system to clear an abnormally high bar in terms of lives saved relative to the status quo.
Social Reinforcement. The clunky artifactual system is built on the communal repetition of well-understood, clearly defined statements of relational expectation. Drive safe. Stay sober. Designate a driver. Teach your kids. Take the keys from dad. You owe it to your fellow citizens to undertake these acts of civic responsibility and collective trust-building. But what happens if we remove these signals? In our already fractured and divided society, what is the cost of removing an entire reasonably well-functioning system which has, as a meaningful byproduct, the instruction of our youngest members in the virtues of accountability, selflessness, and self-control? What stands in for the means that make civilization civil?
Trust and Agency. Would you get into the car not knowing whether you are human A or human B?
I don’t think we fear self-driving cars in and of themselves. Rather, I think we instinctively resist the dehumanization that can accompany automation. Understanding, managing, and mitigating this glidepath is key to any successful transfer of societal ligature to synthetic intelligence.