WHO DUNNIT #6
Who is responsible?
If you’re driving a car and something suddenly happens, your reflexes kick in and you react automatically. You don’t have time to think and make a conscious decision. A self-driving car, however, has to be trained on data that we provide. The car decides based on what it has learnt, so we have to think about what priorities the car should have.
What developments have we seen so far?
These are the decisions that people worldwide typically make in the game behind you:
Source: Awad et al. 2018
Do you agree with them?
More AI and whodunnit
We don’t always know why AI makes certain decisions. AI companies know what data they have put into a system, and we can see what comes out of it. But exactly how the system arrives at a particular result often remains unknown. This is what we call the black box: the system isn’t transparent.
Suppose the car hits someone: who is responsible? The car manufacturer? The driver of the self-driving car? Or the people who trained the AI system?
AI judge
AI can also be very useful in the courts. Judges could call on AI for help with reducing the backlog of cases. It would then base its verdicts on previous legal cases. But what if an AI system finds someone guilty and you can’t really figure out why? Is that okay?
Killer drones
And if you think a self-driving car is scary, what about killer drones? These are drones that can carry out attacks independently, without any human control. In 2020, a drone of this kind is said to have tracked down and attacked people in Libya. How do you feel about that?
António Guterres, the Secretary-General of the United Nations, thinks this is dangerous too. He believes that only people should decide on the use of nuclear weapons – not machines or AI.
The United Nations is a global organisation that promotes peace. Almost every country in the world is a member. Guterres is currently the leader of the United Nations.
Positive note
Fortunately, there’s also something called explainable AI, which shows the basis on which an AI system makes a certain decision or prediction. In that case, there’s no black box.
There are also companies working on ethical AI. Google, for example, has already drawn up rules that its AI products must follow. The European Union has also published guidelines for trustworthy AI.