Machine learning disasters

I am writing a research paper and am looking for reliable sources on machine learning disasters, especially in the field of autonomous driving. Have there been any incidents where something went really wrong? Any links to articles or research papers would be helpful.

Topic: self-driving, overfitting, machine-learning

Category: Data Science


I don't know of any disasters caused by ML, and my guess is that even where they exist it would be hard to consider ML itself the cause: ML is always deployed in a setting designed by humans for a very specific task, and that design must take the statistical nature of ML into account. So I would tend to interpret most problems as caused by a flawed design, not by ML itself.

If you are looking more generally at problems in applications of ML, the most commonly cited ones, as far as I know, are:

  • Amplifying existing bias: for instance, face recognition being more accurate on white faces than on black faces (see the sketch after this list).
  • Lack of transparency, especially in critical applications such as medicine, the military, and law enforcement.
  • Lack of a legal responsibility framework, e.g. who is at fault if a self-driving car causes an accident?
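
The first item is easy to miss if you evaluate only aggregate metrics; breaking accuracy down per group exposes it. Here is a minimal sketch with synthetic data (the group labels and error rates are invented purely for illustration):

```python
import numpy as np

# Synthetic illustration of the bias point above: a classifier whose
# error rate differs across demographic groups. The group labels and
# error rates are made up for this example.
rng = np.random.default_rng(0)

n = 1000
group = rng.choice(["A", "B"], size=n)   # hypothetical group labels
y_true = rng.integers(0, 2, size=n)      # ground-truth labels

# Simulate a model that errs on 5% of group A but 20% of group B.
err_prob = np.where(group == "A", 0.05, 0.20)
flip = rng.random(n) < err_prob
y_pred = np.where(flip, 1 - y_true, y_true)

# Aggregate accuracy hides the disparity; per-group accuracy exposes it.
print(f"overall accuracy: {(y_pred == y_true).mean():.3f}")
for g in ("A", "B"):
    mask = group == g
    print(f"group {g} accuracy: {(y_pred[mask] == y_true[mask]).mean():.3f}")
```

Running this prints an overall accuracy of roughly 0.87 while the per-group numbers sit near 0.95 and 0.80, which is exactly the kind of gap that aggregate evaluation papers over.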
