The goal of ML monitoring is to alert you to performance drops in your models and to detect significant changes in the system. Most current ML monitoring algorithms focus on the latter.
Data and concept drift detection techniques can reliably detect changes in model input data. A detected change, however, does not mean that the model's performance has degraded, as some, or even most, data drift can be benign. Data drift detection alone is therefore not enough to ensure that your models perform well.
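As a minimal sketch of the distinction (an illustration only, not necessarily the approach presented in the talk), a simple univariate drift check can flag a shift in an input feature while saying nothing about model performance:

```python
# Sketch: detecting data drift on a single feature with a two-sample
# Kolmogorov-Smirnov test. The feature values below are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature values
production = rng.normal(loc=0.3, scale=1.0, size=5_000)  # shifted feature values in production

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.3g})")
else:
    print("No significant drift detected")

# Note: a detected shift does not tell us whether performance actually dropped,
# which is exactly why drift detection alone is not enough.
```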
We also need to know the model's performance at all times. In most cases, this performance cannot simply be measured, as the ground truth is either delayed or not available at all.
At this meetup, Wojtek Kuberski, co-founder of NannyML, will talk about how to use data drift detection algorithms and uncertainty estimation techniques to estimate the performance of ML models, even when ground truth data is not available.
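To give a rough flavour of the idea (a hedged sketch, not the speaker's exact method): for a binary classifier with well-calibrated probabilities, expected accuracy on unlabeled production data can be estimated from the model's own confidence.

```python
# Sketch of uncertainty-based performance estimation, assuming a binary
# classifier whose predicted probabilities are well calibrated.
import numpy as np

def estimate_expected_accuracy(predicted_proba: np.ndarray) -> float:
    """Each prediction is correct with probability max(p, 1 - p) under calibration,
    so averaging that confidence estimates accuracy without any labels."""
    confidence = np.maximum(predicted_proba, 1.0 - predicted_proba)
    return float(confidence.mean())

# Hypothetical production scores (no ground truth available yet).
production_scores = np.array([0.91, 0.08, 0.55, 0.73, 0.12, 0.98, 0.40])
print(f"Estimated accuracy: {estimate_expected_accuracy(production_scores):.2%}")
```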
🗓️ When: February 24
⏰ At: 19:00 (EET)
👩🏼‍💻 Moderator: Nika Tamayo Flores
🇺🇸 Language: English
🌐 How: Free Online
⬇️ Limited spots! Register below if you’re interested 😎