Only safety can drive Artificial Intelligence


The AI community uses the term AI safety. Russell et al. (2015) frame safety in terms of verification, validity, security, and control. Beyond safety, trustworthiness and confidence also need to be considered. In the following, seven main challenges in the context of beneficial AI are described (Berlinic, 2019):

  • Fairness: AI safety asks: how do we build AI that is unbiased and does not systematically discriminate against underprivileged groups?

  • Transparency: AI safety asks: how do we build AI that can explain its decisions, including why it made a wrong one?

  • Misuse: AI safety asks: how do we ensure that AI is only used for good causes?

  • Security: AI safety asks: how do we prevent malicious actors from abusing imperfect AI systems?

  • Policy: AI safety asks: how do we ensure that AI benefits all, not only a few? How do we handle the disruptions that will be caused by its development?

  • Ethics: AI safety asks: how do we decide the values that AI promotes?

  • Control/alignment: AI safety asks: how do we align AI with our values so that it does what we intend, not what we ask?
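The fairness challenge above can be made concrete with a simple group metric. A minimal sketch of the demographic parity difference, assuming binary model decisions and a discrete protected attribute (the function name and interface are illustrative, not from the cited sources):

```python
def demographic_parity_difference(preds, groups):
    """Gap between the highest and lowest positive-decision rate across groups.

    preds: list of 0/1 model decisions.
    groups: parallel list of group labels (e.g. a protected attribute).
    A value of 0.0 means all groups receive positive decisions at the same rate.
    """
    counts = {}
    for p, g in zip(preds, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + p)
    rates = {g: pos / n for g, (n, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())


# Example: group "a" always receives a positive decision, group "b" never does.
gap = demographic_parity_difference([1, 1, 0, 0], ["a", "a", "b", "b"])
print(gap)  # 1.0 — maximal disparity between the two groups
```

Metrics like this do not settle which notion of fairness is right for a given application, but they turn the question "does the system systematically discriminate?" into something that can be measured and monitored.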

From ‘A Review on AI Safety in Highly Automated Driving’, Frontiers
