A task common to many machine learning workflows is to compare the performance of several models with respect to some metric such as accuracy or area under the ROC curve. Standard practice is to try out several different algorithms on a training data set and see which works best. Unfortunately, all too often, after this work has been done, model selection comes down to “eyeballing” several different ROC curves. If you find eyeballing a little too informal, then take a look at the tidyposterior package (part of the `tidymodels` universe).
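As a minimal sketch of the idea: rather than comparing single AUC numbers, tidyposterior’s `perf_mod()` fits a Bayesian model to resampled metric values, so each model’s mean performance gets a full posterior distribution, and `contrast_models()` gives a posterior for the difference between models. The fold-level AUC values and model names below are simulated stand-ins for real resampling results, not output from an actual analysis.

```r
library(tibble)
library(tidyposterior)

# Simulated stand-in for real resampling results: one row per fold,
# one column of ROC AUC values per model. The model names ("model_a",
# "model_b") and AUC values are made up for illustration.
set.seed(1)
resampled_auc <- tibble(
  id      = paste0("Fold", sprintf("%02d", 1:10)),
  model_a = rnorm(10, mean = 0.85, sd = 0.02),
  model_b = rnorm(10, mean = 0.83, sd = 0.02)
)

# Fit a Bayesian model (via rstanarm) to the resampled metric values;
# each model's mean AUC gets a posterior distribution, not a point estimate.
auc_post <- perf_mod(resampled_auc, seed = 42)

# Posterior samples of each model's mean AUC
tidy(auc_post)

# Posterior for the difference in mean AUC between the two models,
# summarized with a practical-equivalence band of +/- 0.02 AUC.
auc_diff <- contrast_models(auc_post, list_1 = "model_a", list_2 = "model_b")
summary(auc_diff, size = 0.02)
```

The summary reports, among other things, the posterior probability that the difference in AUC is practically negligible (inside the `size` band), which is a more honest basis for model selection than comparing two point estimates by eye.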