The Mathematics Of Murder

Should a robot sacrifice your life to save two?

“It happens quickly—more quickly than you, being human, can fully process.

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.” (PopSci)

This is the beginning of a really interesting article over at Zero Moment, a blog on PopSci, by Erik Sofge. With rapid advances in robotics, and with autonomous cars becoming a reality, it’s only a matter of when, not if, we’ll be driven around by robots. Isaac Asimov famously created the Three Laws of Robotics, which broadly hold that a robot must not harm a human. But if a car crash is unavoidable, whom should the car crash into? Should it save your life, seeing as you bought the car, or should it save the lives of the two people in the other car (after all, two is greater than one)?

These ethical conundrums are incredibly difficult and complex to solve, even more so when robots must be pre-programmed with which decision to make. They resemble the Trolley Problem, where you have to choose whether to save five lives at the expense of one. Would you pull a lever to divert a runaway trolley away from five people on the track, even though it would then kill one oblivious person on a side track? And what if the only way to save the five were to push one person into the trolley’s path to stop it, killing that person in the process?
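The purely utilitarian rule the article attributes to the car (“the math couldn’t be simpler”) can be sketched in a few lines. This is a hypothetical illustration, not any real vehicle’s algorithm; the function and option names are invented for the example.

```python
# Hypothetical sketch of a purely utilitarian collision-response rule,
# as the quoted article describes it: simply minimize the body count.
# Names here are illustrative assumptions, not a real system's API.

def choose_collision(options):
    """Pick the maneuver expected to harm the fewest people.

    options: dict mapping a maneuver name to the number of people
    expected to be killed if that maneuver is taken.
    """
    return min(options, key=options.get)

# The scenario from the excerpt: a head-on crash with a two-occupant
# compact, or over the cliff with you, the lone occupant.
crash_options = {"hit_oncoming_car": 2, "off_the_cliff": 1}
print(choose_collision(crash_options))  # → off_the_cliff
```

The point of the sketch is how little the arithmetic cares about anything else: who bought the car, who is at fault, and how certain the casualty estimates are all sit outside the `min()`.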

Back to the robots: if their decisions are pre-programmed, this raises legal issues as well, with the possibility of a victim’s family suing the maker of the robotic car. Should the car then try to crash into another car driven by an old person? A young family?

Should the robotic car always make a choice for the greater good, even if that means killing you? Welcome to the future. Onwards!

This entry was posted in Ramblings, Science. Bookmark the permalink.
