This release represents several months of fairly hard work for me. It even has a pretty, if simple, documentation website that you can check out here. It features quite a number of bug fixes, including several rather critical ones that fixed cross validation not actually cross validating.
As always, my biggest focus was on data transformations. I added about 14 of those. I set up the structure so that four transformers can now be applied at once, two of them taking parameters (RollingMean and SeasonalDifference can now be custom tuned). I added a separate and more powerful linear detrend, and I expanded and improved outlier removal. Basically, the data transformations can now perform a decomposition forecast on their own.
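To make the chaining concrete, here is a rough pandas sketch of what a two-step, parameterized transformation can look like. This is purely illustrative of the idea, not the package's actual API; the function name and the window and lag defaults are arbitrary choices of mine.

```python
import pandas as pd

def rolling_mean_then_seasonal_difference(series: pd.Series,
                                          window: int = 10,
                                          lag: int = 7) -> pd.Series:
    """Chain two parameterized transformations: smooth with a rolling
    mean, then remove seasonality by differencing against one season ago.
    Illustrative sketch only; window/lag defaults are arbitrary."""
    smoothed = series.rolling(window=window, min_periods=1).mean()
    # Seasonal difference: subtract the value from `lag` periods earlier
    return smoothed - smoothed.shift(lag)
```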
I also added seven entirely new models. These include the first custom use of neural nets, my personal invention, the slow but effective MotifSimulation, and several other unconventional models. Some are still somewhat experimental in their parameter inputs.
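For the curious, the core idea behind a motif-style forecast is simple enough to sketch, although the actual MotifSimulation model is considerably more involved; this toy version (names and defaults are mine) should not be mistaken for it:

```python
import numpy as np

def motif_forecast(history: np.ndarray, window: int = 10,
                   horizon: int = 5, k: int = 5) -> np.ndarray:
    """Toy motif forecast: find the k past windows most similar to the
    most recent window, then average what followed each of them."""
    query = history[-window:]
    n_candidates = len(history) - window - horizon
    # Distance from every complete past window to the current one
    distances = np.array([
        np.linalg.norm(history[i:i + window] - query)
        for i in range(n_candidates)
    ])
    nearest = np.argsort(distances)[:k]
    followers = np.stack([
        history[i + window:i + window + horizon] for i in nearest
    ])
    return followers.mean(axis=0)
```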
Probabilistic forecasts (upper and lower bounds) are significantly improved with the addition of scaled pinball loss as an error metric. Additionally, I replaced my placeholder method for adding data-based intervals to models that don't have model-based uncertainty estimates. The new methods are far more reliable in producing realistic probabilistic intervals.
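Pinball loss is the standard way of scoring a quantile forecast: it penalizes misses asymmetrically, so an upper bound is punished far more for falling below the actual than for sitting above it. A minimal version, scaled here by the mean absolute actual so scores are comparable across series (the package's exact scaling may differ), looks like this:

```python
import numpy as np

def scaled_pinball_loss(actual: np.ndarray, forecast: np.ndarray,
                        quantile: float) -> float:
    """Pinball loss for one quantile, scaled to be magnitude-independent.
    Scaling by mean |actual| is one common choice, not necessarily the
    one this package uses."""
    diff = actual - forecast
    # Under-forecasts cost `quantile` per unit, over-forecasts `1 - quantile`
    loss = np.where(diff >= 0, quantile * diff, (quantile - 1) * diff)
    scale = max(float(np.mean(np.abs(actual))), 1e-9)  # avoid divide-by-zero
    return float(np.mean(loss)) / scale
```

For a 90% upper bound (quantile = 0.9), an actual that lands above the bound costs nine times as much per unit as one that lands below it, which is exactly the pressure that keeps the bound honest.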
Perhaps one of the most accuracy-improving additions is horizontal style ensembling. This is where ensembles can be selected that choose the best model type for each of the many series. Because of all the models involved, this method tends to be rather slow. Slow but extremely effective. Additionally, ensembles can now be recursive, so an ensemble of an ensemble of an ensemble, and so on, is possible.
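Conceptually, horizontal ensembling reduces to a per-series argmin over a model-by-series error table, followed by stitching the winning forecasts together column by column. A rough sketch of that selection step (the real logic is more careful than this, and the names here are mine):

```python
import pandas as pd

def horizontal_ensemble(errors: pd.DataFrame,
                        forecasts: dict) -> pd.DataFrame:
    """errors: one row per model, one column per series, values are
    validation error. forecasts: model name -> forecast DataFrame with
    one column per series. Returns the best model's forecast per series."""
    best_model = errors.idxmin(axis=0)  # series name -> winning model name
    return pd.concat(
        {series: forecasts[model][series]
         for series, model in best_model.items()},
        axis=1,
    )
```

The slowness follows directly from this structure: every candidate model has to be run on every series before the table can be filled in.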
Overall, I have added and validated all the necessary functionality, and more, to the point where this package feels production ready to me. The outstanding gap I see is speed. Right now, I can comfortably run this on a hundred or so time series and have it converge fairly quickly, for the most part. But scale that up to 10,000 series (without subsetting) and things get unacceptably slow. No parallelization is currently used. Related to that is the fact that the RandomTransformer and get_new_params() aren't properly tuned in the parameters they output. Some unacceptably slow, or simply never-accurate, parameter combinations are still being generated, and they all require optimization.
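The likely fix is to weight the random draws so that known-slow or rarely-winning options come up less often, rather than sampling every option uniformly. Something along these lines (a sketch of the approach; the real get_new_params() covers far more parameters, and all the values here are invented for illustration):

```python
import random

def get_new_params_sketch() -> dict:
    """Randomly generate transformer parameters, biased away from
    choices known to be slow or rarely accurate."""
    windows = [3, 7, 10, 30, 90]
    # Down-weight very large windows: slow to compute and seldom winners
    window_weights = [0.25, 0.3, 0.25, 0.15, 0.05]
    return {
        "window": random.choices(windows, weights=window_weights)[0],
        "method": random.choices(["mean", "median"], weights=[0.7, 0.3])[0],
    }
```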