AI Biases and Inclusion
Dr. Styliani Kleanthous, Senior Researcher at the Cyprus Center for Algorithmic Transparency (Open University of Cyprus and CYENS Centre of Excellence) and Scientific Director at Media 2000
Technical Community
IEEE Members: $10.00
Non-members: $20.00
Length: 00:59:12
Image analysis algorithms have become indispensable in the modern information ecosystem. Beyond their early use in restricted domains (e.g., military, medical), they are now widely used in consumer applications and social media, enabling functionality that users take for granted.
Recently, image analysis algorithms have become widely available as Cognitive Services. This practice is proving to be a boon to the development of applications where user modeling, personalization, and adaptation are required. However, while tagging APIs offer developers an inexpensive and convenient means to add functionality to their creations, most are opaque and proprietary, and there are numerous social and ethical issues surrounding their use in contexts where people can be harmed.
In this talk, Dr. Styliani Kleanthous discussed recent work in analyzing proprietary image tagging services (e.g., Clarifai, Google Vision, Amazon Rekognition) for their gender and racial biases when tagging images depicting people. She presented her techniques for discrimination discovery in this domain, as well as research and efforts to understand users' and developers' perceptions of fairness. In addition, she explored the sources of such biases by comparing human versus machine descriptions of the same people and images.
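As a rough illustration of what such an audit can look like (not the speaker's actual pipeline), the sketch below sends two hypothetical groups of portrait images to the Google Cloud Vision label-detection endpoint and compares the tags returned for each group. The folder names, grouping, and comparison are assumptions made for the example.

```python
"""Minimal sketch of a tag-frequency audit across two demographic groups.

Assumptions (not from the talk): images are pre-sorted into local folders
named "group_a/" and "group_b/", and the google-cloud-vision client library
(v2+) is installed and authenticated. Real discrimination-discovery work
uses far more careful sampling and statistics; this only shows the shape.
"""
from collections import Counter
from pathlib import Path

from google.cloud import vision


def tag_folder(client: vision.ImageAnnotatorClient, folder: Path) -> Counter:
    """Request labels for every image in a folder and count tag occurrences."""
    counts: Counter = Counter()
    for path in folder.glob("*.jpg"):
        image = vision.Image(content=path.read_bytes())
        response = client.label_detection(image=image)
        counts.update(label.description.lower() for label in response.label_annotations)
    return counts


def main() -> None:
    client = vision.ImageAnnotatorClient()
    group_a = tag_folder(client, Path("group_a"))  # e.g., images depicting women
    group_b = tag_folder(client, Path("group_b"))  # e.g., images depicting men

    # Tags returned for one group but never for the other are the simplest
    # signal of differential treatment across groups.
    only_a = set(group_a) - set(group_b)
    only_b = set(group_b) - set(group_a)
    print("Tags unique to group A:", sorted(only_a))
    print("Tags unique to group B:", sorted(only_b))


if __name__ == "__main__":
    main()
```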