AI Systematic Errors - Who Is Responsible

08 Jul 2021
Patrick Henz, Head of Governance, Risk & Compliance US, Regional Compliance Officer Americas
Video Length: 00:54:32
In contrast to humans, algorithms make decisions systematically, based on user data, sensor inputs, and their programmed logic. The question is, who is responsible for errors: the provider, the integrator, or the end-user? In most scenarios, responsibility is distributed amongst all stakeholders. Fully autonomous systems (including self-driving technology) have not yet been accepted by lawmakers, who demand a human supervisor as a backup.

Thanks to behavioral science, researchers know that humans can be influenced and manipulated by Artificial Intelligence. This is comparable to the animal kingdom, in which an alpha rules the pack but the beta is able to manipulate the alpha.

Accountability from all sides (including creators, integrators, and users) is required to reduce systematic errors and their effects on humans, such as falling prey to biases (like over-trust in Artificial Intelligence and status pressure) that lead to errors of omission and commission.

Systems thinking, as described by W. Edwards Deming in his "System of Profound Knowledge," points the way: AI decision-making must be transparent, audited, and understood by humans. Even more, humans need to be aware that responsibility stays with them. This means they not only have to accept accountability for decisions but must also design AI directly based on Deming's philosophy. Otherwise, as Deming concluded, "a bad system will beat a good person every time." An increasing number of government officials are demanding the inclusion of behavioral science in standards of acceptable corporate behavior. This can result in greater moral responsibility amongst creators and providers of AI for their technology, in addition to legal liability and potential sanctions.