Trusting machines with robust, unbiased and reproducible AI
GOTO Copenhagen 2019

Wednesday Nov 20
14:30 –
Aud 10


We see more and more stories in the news about machine learning algorithms causing real-world harm. People's lives and livelihoods are affected by the decisions made by machines. Human trust in technology is based on our understanding of how it works and our assessment of its safety and reliability.

To trust a decision made by an algorithm, we need to know that it is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure. We need to understand the rationale behind the algorithmic assessment, recommendation or outcome, and be able to interact with it, probe it – even ask questions. And we need assurance that the values and norms of our societies are also reflected in those outcomes.

What will the audience learn from this talk? How bias can take root in machine learning algorithms, and ways to overcome it. From the power of open source to tools built to detect and remove bias in machine learning models, a vibrant ecosystem of contributors is working to build a digital future that is inclusive and fair. Learn how to achieve AI fairness, robustness, explainability and accountability, and how you can become part of the solution.
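To make the idea of detecting bias concrete, here is a minimal sketch of one widely used fairness check, the demographic parity difference: the gap in positive-outcome rates between a privileged group and everyone else. The data below is synthetic and the function name is illustrative; real toolkits in this ecosystem implement this and many other metrics.

```python
def demographic_parity_difference(outcomes, groups, privileged):
    """Difference in positive-outcome rates between the privileged
    group and all other groups. A value of 0.0 means parity on
    this particular metric; larger magnitudes mean larger gaps."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(priv) - rate(unpriv)

# Synthetic model decisions (1 = favorable outcome) and group labels.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved 75% of the time, group B only 25%.
print(demographic_parity_difference(outcomes, groups, privileged="A"))
```

Note that parity on one metric does not guarantee fairness overall; different fairness definitions can conflict, which is part of why this remains an active area of work.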

Does it feature code examples and/or live coding? No live coding

Prerequisite attendee experience level: Level 100