Have you ever played the game “Would You Rather?” It asks you to pick between two (potentially) thought-provoking choices. Would you rather lose your sense of sight or your sense of touch? Would you rather go a month without the internet or a month without your phone? Would you rather run your car into a group of convicted criminals legally crossing the street or a group of senior citizens jaywalking?

This last question is one being posed by the Moral Machine, a platform created by the MIT Media Lab for gathering a human perspective on moral decisions made by machine intelligence. Participants are presented with scenarios in which a self-driving car experiences sudden brake failure and has two options: crash into what’s ahead or swerve into what’s in the other lane. (Try it yourself.)

It’s a variation of the classic moral dilemma known as the Trolley Problem, which asks participants to choose between sending a trolley down a track to kill one group of people or another. The scenarios can get wildly complex (or in some cases silly—would you rather have a baby or a cat driving the death mobile?), but that’s the point. If humans have trouble making these decisions, how can we expect a self-driving car to do so? It might seem like nothing more than an interesting thought experiment today, but what about when millions of people are using self-driving cars on a daily basis? Is there a right way to program for death?

As self-driving cars become more widely used, the hope is that they will lead to significantly fewer motor-vehicle deaths overall. Tesla’s self-driving cars logged 130 million miles before suffering their first fatality. Compare that to the one fatality that occurs every 94 million miles in the U.S. and every 60 million miles worldwide, and it’s easy to see the upside of relinquishing control to an intelligent machine. The question that remains, however, is itself a bit of a trolley problem: Would you rather have fewer deaths at the hands of machine intelligence or more with human beings behind the wheel?
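To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python that converts the mileage figures quoted above into fatalities per 100 million miles driven. It uses only the approximate numbers from this paragraph and is illustrative arithmetic, not a statistical analysis.

```python
# Rough comparison of fatality rates using the approximate figures
# cited above (miles driven per fatality).

MILES_PER_FATALITY = {
    "Tesla Autopilot (first fatality)": 130_000_000,
    "U.S. human drivers": 94_000_000,
    "Worldwide human drivers": 60_000_000,
}

for label, miles in MILES_PER_FATALITY.items():
    # Express each figure as fatalities per 100 million miles driven.
    rate = 100_000_000 / miles
    print(f"{label}: ~{rate:.2f} fatalities per 100 million miles")
```

Run as written, this prints roughly 0.77 for the Tesla figure versus about 1.06 for U.S. drivers and about 1.67 worldwide, which is the gap the paragraph above is pointing at.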