6 takeaways from the MLOps session by DeepLearning.AI
Here are 6 of the many points I found interesting:
1. Andrew Ng shared an anecdote about a speech recognition team he once led that built a system with amazing performance on the test set, even better than human level. The business team, however, said it performed poorly in practice, with lots of errors, and the two sides reached a deadlock. Andrew is excited about using MLOps to solve these kinds of problems.
2. Chip has found that different companies have different requirements depending on the size of the company, the maturity of their systems, and so on. There is no one right tool for all purposes; it comes down to choosing the right tool for the task.
3. Rajat says that from an ML engineer’s perspective, it’s important to make sure the system works end-to-end, and the “end” isn’t just the test set. Make sure it’s having a real-world impact, then iterate and improve your system.
4. Laurence says that we’re at MLOps 1.0. Many of the concepts we’re learning now might be discarded in the next 15–18 months, but we have to go through this journey to reach versions 2.0, 3.0, and beyond.
5. When asked about one principle that will not change as tools change, Andrew says he makes sure his teams maintain consistently high-quality data throughout the ML project lifecycle.
6. Robert says that the fundamental thing for him is the data itself: how much predictive signal is in the data. Focusing on the data and on the lifecycle will always be important, even as everything else changes around you.
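Robert’s point about predictive signal can be made concrete. One common way to quantify how much a discrete feature tells you about a label is mutual information; here is a minimal sketch (the function and the toy data are my own illustration, not from the session):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px = Counter(xs)          # marginal counts of the feature
    py = Counter(ys)          # marginal counts of the label
    pxy = Counter(zip(xs, ys))  # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

labels = [0, 0, 1, 1]
# A feature that perfectly determines a binary label carries 1 bit of signal:
print(mutual_information([0, 0, 1, 1], labels))  # 1.0
# Pure noise carries none:
print(mutual_information([0, 1, 0, 1], labels))  # 0.0
```

In practice you would run a check like this (or a library equivalent) per feature before investing in modeling, since no amount of architecture tuning can recover signal that isn’t in the data.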
Have you watched it yet? What did you find interesting?
Please do let me know in the comments below or reach out to me on my LinkedIn —
And while you’re at it, do check out my YouTube channel too. I have some interesting videos — https://www.youtube.com/channel/UC6VPXglDoZYMOj2kr-flNJQ