Training & Monitoring AI
Do our deployed AIs work as expected?
Before we deploy AI models, we validate them extensively to make sure they perform well. But how confident are we that we get comparable performance after deployment? How can we check whether the assurances we derived from validation still hold?
To answer these questions and sleep well again, drift detection is a key aspect of monitoring deployed models. It checks whether the information flowing through our models - probed at the input, the output, or somewhere in between - is still consistent with the data the model was trained and evaluated on.
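As a flavor of the idea (this is a minimal sketch, not TorchDrift's actual API): one simple form of drift detection compares a feature's distribution in production against a reference sample from training with a two-sample statistical test, here the Kolmogorov-Smirnov test from SciPy.

```python
# Minimal drift-detection sketch: flag drift when a live sample's
# distribution differs significantly from a training-time reference.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.01):
    """Return True if the two-sample KS test rejects the hypothesis
    that `reference` and `live` come from the same distribution."""
    statistic, p_value = ks_2samp(reference, live)
    return bool(p_value < alpha)

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)  # training-time data
shifted = rng.normal(loc=0.5, scale=1.0, size=500)     # mean has drifted

print(detect_drift(reference, shifted))  # the shift should trigger detection
```

Real monitoring setups typically apply such tests continuously on windows of recent data, often on intermediate model representations rather than raw inputs.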
Thomas shows what can go wrong when we neglect drift detection, looks at how drift detection works, and then demonstrates how effective drift detection can be implemented in practice. He draws on his experience implementing the open-source TorchDrift.org, building the turn-key solution DriftDash.de, and consulting with clients to bring drift detection into their processes.