Imagine you are 50 years in the future. You come to a car-sales event to purchase your very first fully self-driving car.
These cars promise to revolutionise road safety, with far faster reaction times and better split-second decision-making than any human driver.
However, the sales agent explains that in the extremely rare event of an unavoidable accident, the car will sacrifice you, the occupant, if doing so saves more people and minimises total casualties.
Say, one day, while you are riding along, an unfortunate chain of events sends the car towards a crowd of 10 people crossing the road. It cannot stop in time, so it avoids killing the 10 by steering into a wall, killing you in the process.
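The agent's policy boils down to a simple utilitarian rule. Here is a toy sketch of it in Python; the function name and inputs are invented for illustration, and no real vehicle exposes its crash logic like this:

```python
# Toy model of the utilitarian crash policy described above.
# Purely illustrative -- not any real autonomous vehicle's logic.

def choose_action(pedestrians_in_path: int, occupants: int) -> str:
    """Pick the action that minimises total deaths.

    'continue' kills the pedestrians in the car's path;
    'swerve' steers into the wall, killing the occupants.
    """
    if pedestrians_in_path > occupants:
        return "swerve"    # sacrifice the occupants to save more lives
    return "continue"      # staying the course costs fewer (or equal) lives

print(choose_action(pedestrians_in_path=10, occupants=1))  # swerve
```

With 10 pedestrians and 1 occupant, the rule swerves, which is exactly the scenario the sales agent just described.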
You’re like, Fuck that, I’m out. And you leave the event without purchasing the car.
You see, this is the ethical dilemma of the self-driving car.
Researchers have polled large groups of people on different ethical scenarios involving self-driving cars. The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.
Yet they wished for others to ride in utilitarian autonomous vehicles more than they wanted to buy one themselves.
People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
As a result, programming the AI to minimise road casualties may backfire: fewer people will buy these self-driving cars, leading to more road casualties from human driving.
In fact, it is not far-fetched to suggest that if the AI cars are instead programmed to selfishly protect their owners at all costs, more lives will be saved, because more people will be willing to buy them. Quite a counterintuitive idea.
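A back-of-the-envelope model makes the claim concrete. Every number below is invented purely to illustrate the shape of the trade-off: a fleet policy that is safer per mile can still cost more lives overall if it depresses adoption.

```python
# Toy adoption model: which fleet policy causes fewer total road deaths?
# All rates and adoption figures are made up for illustration only.

HUMAN_RATE = 10    # deaths per million miles for human drivers (toy number)
TOTAL_MILES = 100  # miles driven by the whole population, in millions

def total_deaths(adoption: float, av_rate: int) -> float:
    """Expected deaths when a fraction `adoption` of miles shift to AVs."""
    av_miles = TOTAL_MILES * adoption
    human_miles = TOTAL_MILES - av_miles
    return human_miles * HUMAN_RATE + av_miles * av_rate

# Utilitarian cars: safest per mile (rate 1), but only 20% adoption.
utilitarian = total_deaths(adoption=0.2, av_rate=1)
# Selfish cars: slightly worse per mile (rate 2), but 80% adoption.
selfish = total_deaths(adoption=0.8, av_rate=2)

print(f"utilitarian fleet: {utilitarian:.0f} deaths")  # 820
print(f"selfish fleet:     {selfish:.0f} deaths")      # 360
```

Under these made-up numbers the "selfish" fleet kills fewer than half as many people in total, simply because it displaces far more human driving.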
Here’s another similar case.
We all know about animal rights activists and their fight against animal testing and experiments, right?
Their arguments make sense on the surface:
Animals cannot give consent. Humans can. So long as informed consenting adults take part in these experiments instead, it’s a win-win situation. There are already human volunteers for clinical trials of drugs, so why not use humans across the board?
Now, let us look at the actual steps of new drug development. Before any human trial comes the pre-clinical stage: that’s when we test the drug on animals, in vivo or in vitro, to evaluate its safety, toxicity and efficacy. A good 60–70% of newly discovered drug candidates get eliminated at this stage because they’re deemed unsafe. Human trials only come after that.
The very same people who fight against animal experiments are not willing to be the first-line guinea pigs replacing these animals, just as with the utilitarian self-driving cars.
Now obviously, unethical practices and unnecessary cruelty to animal test-subjects should be punished. That’s common sense. I don’t even know why I felt the need to insert this disclaimer.
But I can say, without a doubt, that if all animal testing were banned, there would not be enough human volunteers to come anywhere close to the sample sizes required for reliable research.
As a result, more unsafe drugs would reach the market, doing more harm than good. And we’re talking about harm affecting literally millions of people around the globe.
So what’s the moral of the story?
Ethics and morality are not straightforward. The biggest ‘trap’ people fall into is thinking that they are good people simply because they want to do good.
Sorry to burst your bubble, but having good intentions does not mean you are actually doing good, especially if those intentions are born of emotional reasoning. You lose sight of the big picture and get tunnel vision, as if peering out from the bottom of a well.
Sadly, not everyone understands this.
For a society to thrive, such people should not be allowed into important positions where they make decisions for others.
Source – Quora