Data Science

Introducing the Cassandra Model

I have always been particularly impressed with the forecasting performance of the Prophet model. It has only a few parts: seasonality, holidays, and any external regressors, yet with those it delivers some of the most consistent forecasting performance around. Also, being a linear model (with Bayesian parameter estimation), Prophet offers decomposable explainability/interpretability, i.e. Monday has …
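To illustrate the kind of decomposability meant here, a minimal sketch with plain scikit-learn (not the actual Cassandra or Prophet code), using made-up Monday and holiday features: because the model is linear and additive, each component's contribution is just coefficient × feature, and the forecast is their sum.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)

# hypothetical features: a Monday indicator and a made-up holiday indicator
monday = (t % 7 == 0).astype(float)
holiday = (t % 50 == 0).astype(float)
X = np.column_stack([monday, holiday])

# synthetic series: Mondays run higher, holidays lower, plus noise
y = 10 + 2 * monday - 3 * holiday + rng.normal(0, 0.5, n)

model = LinearRegression().fit(X, y)

# decomposition: each component's contribution is coefficient * feature,
# and the forecast is simply the intercept plus the sum of contributions
contributions = X * model.coef_
print("Monday effect:", round(model.coef_[0], 2))   # roughly +2
print("Holiday effect:", round(model.coef_[1], 2))  # roughly -3
print(np.allclose(model.intercept_ + contributions.sum(axis=1), model.predict(X)))
```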

The simple elegance of `.get_new_params()`

Reading through the documentation for a function to find out which arguments it accepts can be tedious, especially since many of the possible options either don't work together or aren't documented. Something I have been including on most of my newer classes is a static method called get_new_params(method='random'), which does pretty much exactly what it says, …
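A minimal sketch of the pattern, with a hypothetical class and made-up parameter names rather than anything from AutoTS: the class itself knows how to generate a new, valid combination of its own arguments, which makes random search trivial.

```python
import random


class ExampleModel:
    @staticmethod
    def get_new_params(method: str = "random") -> dict:
        """Return a new, valid parameter combination for this class.

        Only combinations that actually work together are generated, so the
        result can be passed straight back into __init__. Other values of
        `method` could select different sampling strategies.
        """
        params = {
            "window": random.choice([7, 14, 28]),
            "normalize": random.choice([True, False]),
        }
        # guard against options that don't play well together
        if params["normalize"]:
            params["window"] = max(params["window"], 14)
        return params

    def __init__(self, window: int = 7, normalize: bool = False):
        self.window = window
        self.normalize = normalize


# usage: random search over self-describing, always-valid configurations
model = ExampleModel(**ExampleModel.get_new_params(method="random"))
```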

[Figure: comparative runtimes of GPU vs CPU, showing CPU competitive with GPU in most cases]

PyTorch, TensorFlow, and MXNet on GPU in the same environment and GPU vs CPU performance

It has been impossible in the past to get all three of the largest neural network frameworks running in the same Python environment in such a way that they don't conflict and will also all train on GPU. The reason I want all the libraries running in the same environment is that AutoTS …
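As a quick sanity check that a shared environment actually works, something like the following (a sketch, not AutoTS code) reports whether each framework can see the GPU:

```python
def report_gpus():
    """Check GPU visibility for PyTorch, TensorFlow, and MXNet in one environment."""
    try:
        import torch
        print("PyTorch CUDA available:", torch.cuda.is_available())
    except ImportError:
        print("PyTorch not installed")

    try:
        import tensorflow as tf
        print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
    except ImportError:
        print("TensorFlow not installed")

    try:
        import mxnet as mx
        print("MXNet GPUs:", mx.context.num_gpus())
    except ImportError:
        print("MXNet not installed")


report_gpus()
```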

All the Problems with the M6 Competition (and a few good things)

I was so excited to see the announcement of the M6 forecasting competition. For those who don't know, the M series of competitions (not to be confused with BMW cars) is a long-running series of time series forecasting competitions. It dates from the 1980s, well predating the modern data science explosion. And this …
