Choosing a Data Science CPU (Benchmarking on Intel)

I spend a lot of time while developing in data science waiting for my current piece of code to finish running. In particular, my package AutoTS tries hundreds or thousands of models, and even with small data that can take a while to run. Another big factor for me is the environmental cost of development – if I leave my workstation running all night at full power, it churns through kilowatt-hours of energy. That much energy, used routinely, starts to become significant. Finally, as a matter of cost, when spending hundreds or thousands of dollars on a new computer or server instance, I want to know which features of the CPU are actually worth paying for.

But in order to save you the trouble of reading this long article, here are a few things I found:

  • AVX-512 instruction sets (“Deep Learning Boost”) allowed an ultrabook CPU to outperform a powerful desktop AVX2 CPU
  • Intel MKL offers a significant performance boost over OpenBLAS – so an Intel chip will likely outperform a more powerful AMD chip (in data science); a quick way to check which BLAS backend NumPy is using is sketched just after this list
  • Having a large number of CPU cores is very helpful for some models, but is not as helpful as I was expecting for most models
  • CPU clock speed/frequency is quite significant and benefits all types of models – thus the Intel Xeon was slower despite its high core count.
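
If you are unsure whether your NumPy/SciPy stack is linked against MKL or OpenBLAS, a quick check from Python is below (a minimal sketch; the exact output format varies by NumPy version):

    import numpy as np

    # Print the BLAS/LAPACK libraries NumPy was built against.
    # On an MKL-based Anaconda install you will see "mkl" mentioned;
    # a pip-installed NumPy typically shows "openblas" instead.
    np.show_config()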

My recommendation then, if you are buying a new CPU, is to keep an eye on the instruction set that the exact model supports, and/or look for so-called “Deep Learning Boost”, as it really does make a difference – as of writing, it is found in most newer Xeons and 11th Gen Core i5/i7/i9 chips. As for configuring a cloud VM, take a look at the clock speeds offered by different instances, as for most workloads a higher clock speed will be more noticeable than adding more cores. In general, paying for more than 16 cores in a VM is not going to be worth the cost and energy consumption – unless you already know that you have a highly parallelized big data workload.
I should note that I had to do a little work to get the full performance out of the newest CPUs in Anaconda. You can read about that here.
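
If you want to check whether a machine you already own supports AVX-512 before relying on it, one option is the py-cpuinfo package; a minimal sketch is below (the exact flag names reported vary by platform):

    # pip install py-cpuinfo
    import cpuinfo

    # get_cpu_info() returns a dict; 'flags' lists the supported instruction sets
    flags = cpuinfo.get_cpu_info().get("flags", [])
    print("AVX2:   ", "avx2" in flags)
    print("AVX-512:", any(f.startswith("avx512") for f in flags))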

From a power-conscious viewpoint, the server CPUs are terribly inefficient – the environmentally minded may prefer to train models on a laptop when they can. With high-speed, high-core-count, AVX-512-supporting laptop CPUs now available, that is not as much of a limitation as it once was.

Intel vs AMD and ARM

If you listen to much of the news about CPUs, Intel is mere moments from death, hounded by AMD from one side and ARM from the other. I am personally excited to see this increased competition, and if I were a gamer I would probably have already run off and bought myself a Ryzen Zen 3 CPU from AMD. It is hard to deny that AMD is offering more fast cores at a cheaper price…

Yet I am not a gamer but a data scientist (most of the time). One of the ways Intel has responded to the increased competition, it seems, is by pushing their HPC (i.e. supercomputer math) expertise down into lower-level consumer chips. That doesn’t help the gamer much, but it does help the data scientist. Here is why Intel still holds this one small market segment that we data scientists live in:

  • Intel’s MKL (Math Kernel Library) is well known as the fastest and most popular toolkit powering computations in MATLAB, R, Python, and so on – and it doesn’t really work as well on ARM or AMD.
  • Intel is releasing AVX-512 instructions in many of their newer consumer CPUs – bringing supercomputer instructions to the masses – something AMD has no stated intention to do. Deep Learning Boost, to my knowledge, is a subset of AVX-512 instructions that are particularly useful for data science.
  • Intel is also releasing their own GPUs, and with the OpenVINO toolkit seems to be building an ecosystem that allows automatic use of both CPU and GPU without the somewhat troublesome mess that is switching between CPU and Nvidia’s CUDA right now. I am really excited to see this develop more.
  • Intel’s CPUs are still powerful and fast, if no longer in the completely dominating way they once were.

Benchmarking on Small Data

The initial benchmark looked at small data: 1028 rows with 9 series. It used a fixed selection of 618 models in the AutoTS 0.2.7 alpha release. In this experiment, the average total runtime was just under 30 minutes for the 618 models. The fastest CPUs were over 40% faster than this, finishing in just over 15 minutes. Since this data has 9 series, an awkward number for an 8-core CPU, it likely doesn’t showcase the full parallel advantage over 4 cores for some models.
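
For reference, a run like this boils down to timing AutoTS over a fixed list of models. A minimal sketch using the current AutoTS API is below – the 0.2.7 alpha used for the benchmark had a somewhat different interface, and the 'superfast' model_list and bundled example data are just stand-ins:

    import time
    from autots import AutoTS, load_daily

    # load_daily ships with AutoTS and returns a small wide-format example dataset
    df = load_daily(long=False)

    model = AutoTS(
        forecast_length=21,
        frequency="infer",
        model_list="superfast",  # example: a small, fast set of models
        max_generations=1,
        num_validations=0,
        verbose=0,
    )

    start = time.time()
    model = model.fit(df)
    print(f"Total runtime: {time.time() - start:.1f} seconds")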

In general, the benchmark timings below are shown as a percentage relative to the slowest-running CPU. Environment installation on most computers was done within the same 24-hour period with the same Anaconda + pip install instructions, and accordingly should have nearly identical package versions. Laptops tend to be fickle between runs as they go on and off of turbo boost – although with good cooling/airflow they can usually maintain their turbo boost for an extended period. Controlling all variables is difficult – there is also the ‘silicon lottery’, which refers to the fact that by chance some CPUs are slower or faster than others of the same model.
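
The scaling itself is simple: each model’s runtime on each machine is divided by that model’s runtime on the slowest machine, so the slowest machine always reads 1.00. A minimal sketch of that normalization, with made-up runtimes:

    import pandas as pd

    # Hypothetical per-model runtimes in seconds, one column per CPU
    runtimes = pd.DataFrame(
        {"J5005": [12.0, 80.0], "i7-1165G7": [4.5, 30.0], "Xeon": [5.5, 18.0]},
        index=["ETS", "FBProphet"],
    )

    # Divide each row by its slowest (maximum) runtime; the slowest CPU becomes 1.00
    relative = runtimes.div(runtimes.max(axis=1), axis=0)
    print(relative.round(2))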

What is really interesting about this is that I had not yet fixed the LINPACK issue with the 1165G7 and 10700, which means their linear algebra calculations should have been much slower. The likely explanation for their high performance regardless is that these CPUs have the fastest memory, largest cache sizes, and generally fastest I/O – and that the calculations themselves were not the bottleneck of these operations.

The fanless Pentium J5005 stands out for its energy efficiency. This is the main reason I have chosen it as my personal server. However, it cannot run recent versions of TensorFlow or MXNet – I believe because it lacks AVX2 instructions.

CPU | Pentium J5005 | i5-8265U | i7-1165G7 | i7-7700HQ | i7-10700 | Xeon (Probably 8160)
Class | Desktop | Laptop | Ultrabook | Gaming Laptop | Desktop | Server
Cores | 4C/4T | 4C/8T | 4C/8T | 4C/8T | 8C/16T | 24C/96T
Base Frequency (GHz) | 1.5 | 1.6 | 2.8 | 2.8 | 2.9 | 2
Max Boost (GHz) | 2.8 | 3.9 | 4.7 | 3.8 | 4.8 | 3.5
Lithography (nm) | 14 | 14 | 10 | 14 | 14 | 14
Watts, Approx | 5 | 20 | 20 | 40 | 65 | 150
Instruction Set | SSE4.2 | AVX2 | AVX512 | AVX2 | AVX2 | AVX512
Cache (MB) | 4 | 6 | 12 | 6 | 16 | 33
Other | Fanless | | | CUDA-enabled GPU | GPU, not used | Cloud VM
OS | Ubuntu 20.04 | Windows 10 Pro | Windows 10 | Windows 10 | Windows 10 Pro | Ubuntu 18.04
RAM (GB) | 12 | 16 | 16 | 32 | 32 | 120
Model Failure | 29.3% | 27.6% | 22.2% | 29.3% | 22.3% | 28.0%
GluonTS Failure | 100% | 100% | 38% | 100% | 38% | 84%
Model timings, smaller is better; includes only models which succeeded on all machines, scaled relative to the slowest:

Model | J5005 | 8265U | 1165G7 | 7700HQ | 10700 | Xeon | Slowest (s) | Model Count | Parallelized
AverageValueNaive | 1.00 | 0.35 | 0.36 | 0.43 | 0.42 | 0.42 | 0.112 | 45 | 0
DatepartRegression | 1.00 | 0.34 | 0.38 | 0.45 | 0.40 | 0.42 | 0.359 | 40 | some
ETS | 1.00 | 0.59 | 0.36 | 0.51 | 0.43 | 0.36 | 0.795 | 19 | all
FBProphet | 1.00 | 0.78 | 0.41 | 0.68 | 0.27 | 0.38 | 16.059 | 35 | all
GLM | 0.81 | 1.00 | 0.49 | 0.58 | 0.59 | 0.53 | 0.858 | 48 | all
GLS | 1.00 | 0.33 | 0.36 | 0.42 | 0.44 | 0.40 | 0.297 | 50 | 0
LastValueNaive | 1.00 | 0.38 | 0.34 | 0.48 | 0.41 | 0.49 | 0.152 | 51 | 0
RollingRegression | 1.00 | 0.58 | 0.35 | 0.62 | 0.41 | 0.58 | 34.137 | 34 | some
SeasonalNaive | 1.00 | 0.33 | 0.33 | 0.39 | 0.42 | 0.39 | 0.336 | 78 | 0
UnobservedComponents | 1.00 | 0.48 | 0.36 | 0.50 | 0.33 | 0.51 | 6.277 | 54 | 0
VAR | 1.00 | 0.70 | 0.59 | 0.95 | 0.62 | 0.86 | 0.321 | 38 | 0
VECM | 1.00 | 0.37 | 0.47 | 0.41 | 0.50 | 0.69 | 0.172 | 65 | 0
WindowRegression | 1.00 | 0.56 | 0.31 | 0.55 | 0.32 | 0.47 | 25.006 | 21 | some
ZeroesNaive | 1.00 | 0.41 | 0.34 | 0.57 | 0.43 | 0.50 | 0.144 | 40 | 0
Average | 0.99 | 0.52 | 0.39 | 0.54 | 0.43 | 0.50 | | |
Variability | 0.05 | 0.19 | 0.07 | 0.14 | 0.09 | 0.13 | | |
Total Runtime (seconds) | 2732.6 | 1666.9 | 977.5 | 1632.8 | 979.9 | 1376.3 | | 618 |
Watt Hours Used | 3.795 | 9.261 | 5.431 | 18.142 | 17.692 | 57.347 | | |
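
The Watt Hours row is consistent with simply multiplying each machine’s approximate wattage from the spec table by its total runtime (energy = power × time). A small sketch reproducing those numbers:

    # Rough energy estimate: approximate watts * runtime, converted to hours
    watts = {"J5005": 5, "8265U": 20, "1165G7": 20, "7700HQ": 40, "10700": 65, "Xeon": 150}
    runtime_s = {"J5005": 2732.6, "8265U": 1666.9, "1165G7": 977.5,
                 "7700HQ": 1632.8, "10700": 979.9, "Xeon": 1376.3}

    for cpu, w in watts.items():
        watt_hours = w * runtime_s[cpu] / 3600  # seconds -> hours
        print(f"{cpu}: {watt_hours:.2f} Wh")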

One issue I have not really looked into is Ubuntu (Linux) vs Windows. It is my belief that Ubuntu should normally be a bit faster as software tends to be easier to optimize and parallelize on Linux.

Failure rates seem to be correlated with processor age. The baseline failures occur because of model parameters not suiting the data – for example, numerous ETS models failed here because one variation of them can only take positive data, and negative data is present here.

MXNet models are not shown above because too many of them failed. Some MXNet+GluonTS models only work on the Xeon and 7700HQ, and others only work on the 1165G7 and 10700, with no overlap between the models that can run on the two groups. My best guess is that a new Xeon chip, circa 2020, running Linux would be able to run all of the models that could run here. With GluonTS version 0.4.0 or so, I was able to run it on more computers, but the current iteration seems very picky – indeed, it has notably started to fail on my CUDA-enabled GPU as well. Unsurprisingly, MXNet generally ran fastest on faster CPUs with more cores.

The only two directly comparable pairs of GluonTS results are as follows:
  • 1165G7: 124.04 seconds vs 10700: 93.69 seconds
  • Xeon: 25.53 seconds vs 7700HQ: 32.27 seconds

Benchmarking on Larger Data

I tried a number of experiments using environment variables to set MKL_NUM_THREADS and OMP_NUM_THREADS. Interestingly, setting these values usually made the runtime just a bit slower. Whatever the defaults are, they work best.
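
For anyone who wants to repeat that experiment, the thread counts have to be set before NumPy (and anything that imports it) is loaded; a minimal sketch, using a hypothetical value of 4 threads:

    import os

    # Must be set before importing numpy/scipy, or they may have no effect
    os.environ["MKL_NUM_THREADS"] = "4"   # threads used by Intel MKL
    os.environ["OMP_NUM_THREADS"] = "4"   # threads used by OpenMP-based libraries

    import numpy as np  # imported only after the environment variables are set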

This next dataset is much larger: 100 series of 1941 records. Several configurations of the same CPU are shown below, with suffixes indicating which BLAS backend the NumPy stack was using (Intel MKL, OpenBLAS, or no optimized BLAS).

Model | 8265U | 7700HQ | Xeon | 10700_noblas | 10700_intelmkl | 1165G7_intelmkl | 1165G7_openblas | 1165G7_noblas
AverageValueNaive | 0.570597 | 0.675388 | 0.623755 | 1 | 0.459512 | 0.411459 | 0.361436 | 0.809619
DatepartRegression | 0.432211 | 0.576583 | 0.674464 | 1 | 0.37934 | 0.34398 | 0.326993 | 0.736249
GLM | 1 | 0.68471 | 0.69342 | 0.819854 | 0.402433 | 0.412218 | 0.418916 | 0.674156
GLS | 0.410792 | 0.448268 | 0.492879 | 1 | 0.287141 | 0.248585 | 0.255251 | 0.859893
LastValueNaive | 0.655863 | 0.715732 | 1 | 0.94812 | 0.473435 | 0.442501 | 0.470626 | 0.715015
SeasonalNaive | 0.463668 | 0.533757 | 0.574586 | 1 | 0.346478 | 0.305724 | 0.292223 | 0.757318
VAR | 0.436312 | 0.489989 | 0.25075 | 1 | 0.091756 | 0.101739 | 0.303972 | 0.916992
VECM | 0.099544 | 0.119688 | 0.117934 | 1 | 0.065491 | 0.066359 | 0.099372 | 0.395243
ZeroesNaive | 0.493798 | 0.620933 | 0.725027 | 1 | 0.365719 | 0.319094 | 0.438986 | 0.859505
Average | 0.506976 | 0.540561 | 0.572535 | 0.974219 | 0.319034 | 0.294629 | 0.329753 | 0.74711
Variability | 0.225382 | 0.171806 | 0.247243 | 0.056923 | 0.139066 | 0.126462 | 0.106164 | 0.144724
Total Runtime (s) | 434.1653 | 453.0578 | 350.2862 | 1061.042 | 172.8733 | 171.7618 | 284.8567 | 787.6348

The Intel Xeon wins hands down on the FBProphet data (not shown). That is the most highly parallelized model – one process per series. Yet the Xeon is much slower on most others – since it has AVX-512, the logical culprit is its slow (2.8 GHz max) clock speed. The 1165G7 vs the 10700 is perhaps the most interesting comparison. The 10700 has a higher clock speed and twice as many cores. The 1165G7 has only one clear advantage – AVX-512 with Deep Learning Boost. Another interesting comparison is the 7700HQ vs the Xeon – here, sometimes the chip with the higher clock speed is faster, and sometimes the one with more cores and AVX-512 is faster. Overall, it is impossible to say for certain which single feature matters most, but my guess is: clock speed, instruction set, and more cores, in that order.
