AutoML - Experiment (dataset) and Deployment update process (life-cycle)
It would be great to be able to update the dataset used in an ML experiment!
Next, being able to replace an ML deployment with a new version of the model from the ML experiment would be awesome!
I have training data that is updated every day, with new rows (possibly) being added. What I would like to achieve is to reuse the same ML experiment that is already set up, have it pick up the updated training dataset, run a new version, and deploy a better model.
But when I look at the experiment's dataset overview, I see that it hasn't updated and only the initial data is used for training.
So I have to create a new experiment for each new training dataset, which is a bit cumbersome.
The next level would be the ability to also exchange the model in an already created ML deployment with the new model. This would make things so much easier for app developers who consume ML predictions: currently they need to set up a new Data connection because each retraining results in a new deployment.
I think both of these update capabilities would make the data science project life-cycle much easier to manage, and could also increase adoption of AutoML.
Thank you for your feedback. I can confirm that all of the items on your list are on the roadmap as part of our continued improvements to the ML Ops lifecycle. We anticipate progressively delivering these over the short and medium term.