Addressing Algorithmic Bias
Algorithmic bias can be described as systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways that diverge from the intended function of the algorithm. Machine learning and AI systems can absorb the same biases as the humans who collect, categorize, produce, and interpret their data. The issue arises for a number of reasons, but the most common stem from the initial design and programming of the algorithm: unintended or unanticipated use, or decisions about how data is coded, collected, selected, or used to train the algorithm. These lead to poorly calibrated models that produce biased results.
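One concrete way to see whether a model "privileges" one category over another is to compare its rate of favorable outcomes across groups. The sketch below computes the demographic parity difference and the disparate impact ratio on synthetic, illustrative decisions (the data and the 0.8 "four-fifths" threshold are common conventions in fairness auditing, not something specified in this talk):

```python
# Minimal sketch of one bias check: demographic parity.
# All data below is synthetic and illustrative.

def selection_rate(outcomes):
    """Fraction of positive (favorable) outcomes, e.g. loan approvals."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions (1 = approved), split by a protected group.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # selection rate 0.375

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Demographic parity difference: 0 means equal selection rates.
dp_diff = abs(rate_a - rate_b)

# Disparate impact ratio: a common rule of thumb (the "four-fifths rule")
# flags potential bias when this ratio falls below 0.8.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Group A selection rate: {rate_a:.3f}")
print(f"Group B selection rate: {rate_b:.3f}")
print(f"Demographic parity difference: {dp_diff:.3f}")
print(f"Disparate impact ratio: {ratio:.2f}")
```

Checks like this catch only one narrow notion of fairness; a skewed selection rate can reflect biased training data, a poorly calibrated model, or both, which is why the design and data-collection decisions above matter so much.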
AI algorithmic bias is everywhere, according to a recently released playbook from the Center for Applied AI at Chicago Booth. Machine learning, AI, and data-driven decision making are spreading ever deeper into all kinds of operations, influencing life-critical decisions such as who gets a job, who gets a loan, and what kind of medical treatment a patient receives. This makes the potential risk of algorithmic bias even more significant.
This talk focuses on strategies for addressing, avoiding, and mitigating AI algorithmic bias.