While Actable AI’s focus is on extracting actionable insights from data, that does not mean we care any less about the quality of our AI models. In this blog post, we benchmark the performance of our predictive models against DataRobot Autopilot, one of the most popular data science and machine learning (DSML) platforms on the market.
In this benchmark, we randomly selected 10 popular data sets from Kaggle and other sources. The selected data sets vary in both the number of rows and the number of columns, and columns can be numeric, categorical, or text. We ran regression models on half of the data sets and classification models on the other half. We used almost all of the available features, excluding only features that leak the prediction target. In Actable AI, we set each model’s options to optimize for quality, with a training time limit of 2 hours and 10-fold cross-validation. In DataRobot, we used the same set of features and ran in Autopilot mode.
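For readers unfamiliar with the evaluation protocol, the following is a hypothetical sketch of 10-fold cross-validation using scikit-learn. The model and synthetic data here are placeholders for illustration only, not the actual benchmark pipeline.

```python
# Illustrative 10-fold cross-validation, assuming scikit-learn.
# Ridge regression and the synthetic data are placeholders.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# Split the data into 10 folds; each fold serves once as the validation set.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(Ridge(), X, y, cv=cv, scoring="r2")

# The cross-validated score is the mean over the 10 held-out folds.
mean_r2 = scores.mean()
```

Averaging over 10 held-out folds gives a more stable performance estimate than a single train/test split, which matters when comparing two AutoML systems on the same data.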
For regression, we use R², RMSE, and MAE for benchmarking. The results for each data set are listed below:
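As a quick reference, the three regression metrics can be computed with scikit-learn as in the sketch below. The arrays are made-up examples, not benchmark data.

```python
# Illustrative computation of the regression metrics used in this
# benchmark (R², RMSE, MAE); the target values are hypothetical.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

r2 = r2_score(y_true, y_pred)                       # fraction of variance explained
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # penalizes large errors
mae = mean_absolute_error(y_true, y_pred)           # robust to outliers
```

Higher is better for R²; lower is better for RMSE and MAE.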
For classification, we use AUC for binary classification and Accuracy/Balanced Accuracy for multiclass classification. The results for the 5 data sets are reported below:
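Similarly, the classification metrics can be computed with scikit-learn as sketched below; the labels and scores are hypothetical.

```python
# Illustrative computation of the classification metrics used in this
# benchmark: ROC AUC for binary tasks, accuracy and balanced accuracy
# for multiclass tasks. All labels here are made-up examples.
from sklearn.metrics import accuracy_score, balanced_accuracy_score, roc_auc_score

# Binary: AUC is computed from predicted probabilities of the positive class.
y_true_bin = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
auc = roc_auc_score(y_true_bin, y_score)

# Multiclass: balanced accuracy is the mean of per-class recalls,
# so it is not skewed by a dominant majority class.
y_true_mc = [0, 0, 0, 1, 2]
y_pred_mc = [0, 0, 0, 1, 1]
acc = accuracy_score(y_true_mc, y_pred_mc)
bacc = balanced_accuracy_score(y_true_mc, y_pred_mc)
```

On imbalanced data sets, accuracy and balanced accuracy can diverge sharply, which is why both are reported.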
The results show that Actable AI’s models perform better than DataRobot Autopilot (statistically significant) in most cases, the exception being the IBM Telco Churn data set. We suspect that in this case DataRobot Autopilot has a smart way of transforming geospatial columns into useful features. With Actable AI, it is possible to increase the training time limit to achieve better results; DataRobot also has a Comprehensive mode.
We will report another benchmark, comparing Actable AI models without a training time limit against DataRobot’s Comprehensive mode, in a future blog post. This benchmark is designed to give a rough comparison between our AutoML and DataRobot’s Autopilot mode. It is by no means an exhaustive comparison, and the results may differ depending on the nature of the input data.