I am currently working on adding Parquet support to openml-python.
Parquet files load a lot faster than arff files, so I was wondering whether we should still aim to provide the additional speed improvement by keeping the pickle files. I ran a small speed test measuring the time to load the cc-18 from disk; the suite has 72 datasets ranging from a few kB to 190 MB in size (parquet/pickle size).
Loading all datasets in the suite takes ~6 s for parquet, ~700 ms for pickle, and ~7 min for arff.
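For reference, the timings above came from a simple load-and-measure loop. The exact script isn't shown here, so this is just a minimal sketch of that kind of harness using a synthetic DataFrame as a stand-in for a dataset; all names (`time_load`, file paths) are illustrative, and the parquet path assumes pyarrow or fastparquet is installed:

```python
import pickle
import tempfile
import time
from pathlib import Path

import numpy as np
import pandas as pd


def time_load(path, loader, repeats=3):
    """Return the best-of-n wall-clock time to load `path` with `loader`."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        loader(path)
        times.append(time.perf_counter() - start)
    return min(times)


def load_pickle(path):
    with open(path, "rb") as f:
        return pickle.load(f)


# Synthetic stand-in for one dataset; the real cc-18 datasets
# range from a few kB to ~190 MB on disk.
df = pd.DataFrame(
    np.random.rand(10_000, 20), columns=[f"f{i}" for i in range(20)]
)

with tempfile.TemporaryDirectory() as tmp:
    pkl_path = Path(tmp) / "data.pkl"
    with open(pkl_path, "wb") as f:
        pickle.dump(df, f)
    t_pickle = time_load(pkl_path, load_pickle)
    print(f"pickle : {t_pickle * 1000:.1f} ms")

    try:
        pq_path = Path(tmp) / "data.parquet"
        df.to_parquet(pq_path)  # requires pyarrow or fastparquet
        t_parquet = time_load(pq_path, pd.read_parquet)
        print(f"parquet: {t_parquet * 1000:.1f} ms")
    except ImportError:
        print("parquet: skipped (no parquet engine installed)")
```

The same loop over all 72 suite datasets (fetched once via `openml.study.get_suite` and cached locally) gives the aggregate numbers quoted above.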
The relative difference between pickle and parquet is still large, but in absolute terms parquet loads the entire suite of 72 datasets in ~6 seconds. I think it's worth evaluating whether we want to keep the pickle files.
The obvious drawback is slower loads, though the difference might not be noticeable in most cases.
Getting rid of the pickle files would have the following benefits:
@mfeurer