The AI community uses the term AI Safety. Russell et al. (2015) describe safety in terms of verification, validity, security, and control. Beyond safety, the notions of trustworthiness and confidence also need to be considered. In the following, seven main challenges in the context of beneficial AI are described (Berlinic, 2019):
- Fairness: AI safety asks: how do we build AI that is unbiased and does not systematically discriminate against underprivileged groups? (A minimal sketch of one fairness metric follows the list below.)
- Transparency: AI safety asks: how do we build AI that can explain its decisions? How do we build AI that can explain why it made a wrong decision?
- Misuse: AI safety asks: how do we ensure that AI is only used for good causes?
- Security: AI safety asks: how do we prevent malicious actors from abusing imperfect AI systems?
- Policy: AI safety asks: how do we ensure that AI benefits all, not only a few? How do we handle the disruptions that will be caused by its development?
- Ethics: AI safety asks: how do we decide the values that AI promotes?
- Control/alignment: AI safety asks: how do we align AI with our values so that it does what we intend, not what we ask?
From ‘A Review on AI Safety in Highly Automated Driving’, Frontiers
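To make the fairness question above more concrete, here is a minimal sketch of one common (and contested) fairness metric, the demographic parity gap: the largest difference in positive-prediction rates between groups. The function name, model outputs, and group labels are invented for illustration only; real fairness audits involve many more definitions and far more care.

```python
# Hypothetical illustration: quantifying one narrow notion of "unbiased".
# All data below is made up; in practice, predictions would come from a trained
# model and groups from a protected attribute (e.g. age band, gender, region).

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    counts = {}  # group -> (positive predictions, total predictions)
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + (1 if pred == 1 else 0), total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy example: a model approving (1) or rejecting (0) loan applications.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 -> group A is approved far more often
```

A gap of 0.0 would mean both groups receive positive predictions at the same rate; how small the gap must be, and whether demographic parity is even the right criterion, is exactly the kind of question AI safety raises.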