How to harness the Powers of the Cloud TPU?



Introduction


A couple of months ago I was introduced to Google Colab (an enhanced flavour of Jupyter notebooks) and I haven't looked back since. It has everything (and more) that you need as a researcher. The initial idea was to load up Python notebooks and run them on it. Soon we realised we could run those notebooks not just on the CPUs available on GCP but also on GPUs and TPUs. It is also worth reading a bit about it from other sources, see Google Reveals Technical Specs and Business Rationale for TPU Processor (slightly dated, but definitely helpful).


Here we go now…


What is a TPU?

Just as a GPU is a graphics accelerator ASIC (built to create graphics/images quickly for output to a display device) that, as was discovered long ago, can also be put to work on massive number crunching, a TPU is an AI accelerator ASIC developed by Google specifically for neural network machine learning. One of the differences is that TPUs are geared towards high-volume, low-precision computation, while GPUs target high-volume, high-precision computation (please check the Wikipedia links for the interesting differences).
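
To make the low-precision point a little more concrete, here is a tiny illustration of my own (not from the notebooks) using bfloat16, the reduced-precision floating-point format Cloud TPUs use internally:

import tensorflow as tf

# bfloat16 keeps float32's exponent range but only about 8 bits of
# mantissa, the kind of low-precision arithmetic TPUs are built around.
x = tf.constant([1.0, 2.5, 3.14159265], dtype=tf.float32)
x_bf16 = tf.cast(x, tf.bfloat16)

with tf.Session() as sess:
  print(sess.run(x_bf16))  # values come back rounded to bfloat16 precision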


Then what?


TL;DR — how the notebooks and slides came about


To understand how these devices work, what better approach could we adopt than to benchmark them and then compare the results? That is how a number of notebooks came out of these experiments. We were also, coincidentally, preparing for Google Cloud Next 2018 at the ExCeL Centre, London, and I spent an evening and a weekend preparing the slides for the talk Yaz and I were asked to give: Harnessing the Powers of Cloud TPU.


What happened first, before the other thing happened?


TL;DR — how we work at the GDG Cloud meetup events


It was a happy coincidence. We were meeting regularly at the GDG Cloud meetups in the London chapter, where Yaz would find interesting things to look at during the hack sessions (Pomodoro sessions, as he called them), and he suggested for one session that we play with TPUs and benchmark them. Actually, I remember suggesting this as an idea during our sessions, but then we all got distracted by other, equally interesting ideas (all saved somewhere on GitLab). I then frantically started playing with two notebooks, related to the GPU and the TPU respectively, provided as examples by Colab; I got one working on the TPU and then adapted it to work on the GPU (I can't remember any more which way round it was). They were each doing slightly different things while measuring the performance of the GPU and the TPU, so I decided to make them do the same task, measure the time taken on each device, and also display details about the devices themselves (you will see this towards the top or bottom of each notebook).


CPU v/s GPU v/s TPU — Simple benchmarking example via Google Colab


CPU v/s GPU — Simple benchmarking


The CPU v/s GPU — Simple benchmarking notebook finishes processing with the output below:

TFLOP is a bit of shorthand for “teraflop”, which is a way of 
measuring the power of a computer based more on mathematical 
capability than GHz. A teraflop refers to the capability of a 
processor to calculate one trillion floating-point operations
per second.
CPU TFlops: 0.53 
GPU speedup over CPU: 29x                    TFlops: 15.70
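
As a quick sanity check, the reported speedup is simply the ratio of the two TFLOPS figures:

# 15.70 TFLOPS (GPU) / 0.53 TFLOPS (CPU) is roughly 29.6, i.e. the ~29x above
print(15.70 / 0.53)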

I was curious about the internals of the CPU and GPU, so I ran some Linux commands (via Bash), which the notebooks thankfully allow, and got these bits of info to share:



You can find all those commands and the above output in the notebook as well.
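
If you would like to poke around yourself, these are the kind of commands involved (a minimal sketch of my own; the notebook has the fuller list):

# Colab lets you shell out from a cell with a leading '!'
!cat /proc/cpuinfo | grep 'model name' | head -1   # CPU model
!cat /proc/meminfo | grep MemTotal                 # total RAM
!nvidia-smi                                        # GPU details (GPU runtime only)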


CPU v/s TPU — Simple benchmarking


The TPU — Simple benchmarking notebook finishes processing with the output below:

TFLOP is a bit of shorthand for “teraflop” which is a way of 
measuring the power of a computer based more on mathematical 
capability than GHz. A teraflop refers to the capability of a 
processor to calculate one trillion floating-point operations
per second.
CPU TFlops: 0.47
TPU speedup over CPU (cold-start): 75x        TFlops: 35.47 
TPU speedup over CPU (after warm-up): 338x    TFlops: 158.91
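
The gap between the cold-start and warmed-up numbers is largely one-off overhead: the first run includes compiling the graph for the TPU (via XLA), and subsequent runs reuse the compiled program. A rough way to observe this yourself, assuming a TPU-connected session and a benchmark op are already set up (the names below are my own):

import time

# session is a tf.Session connected to the TPU worker and tpu_op is
# the benchmark op rewritten for the TPU (hypothetical names).
for label in ('cold-start', 'after warm-up'):
  start = time.time()
  session.run(tpu_op)
  print('{}: {:.2f}s'.format(label, time.time() - start))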

Unfortunately, I haven’t had a chance to play around with the TPU profiler yet to learn more about the internals of this fantastic device.
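
That said, even without the profiler you can peek at what sits behind the TPU runtime from a Colab notebook. A minimal sketch, assuming the TensorFlow 1.x runtime these notebooks were written against:

import os
import tensorflow as tf

# Colab exposes the TPU worker's address via an environment variable.
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']

with tf.Session(tpu_address) as session:
  # A Cloud TPU v2-8 shows up as 8 TPU cores plus its host CPU.
  for device in session.list_devices():
    print(device.name)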

While there is room for error and inaccuracy in the above figures, you might be curious about the task used for all the runs. It is the piece of code below that has been keeping the CPU, GPU and TPU circuitry busy in the Simple benchmarking notebooks:


import tensorflow as tf  # TensorFlow 1.x (tf.contrib is used below)

# N (matrix size) and COUNT (number of repeats) are defined elsewhere
# in the notebook.
def cpu_flops():
  x = tf.random_uniform([N, N])
  y = tf.random_uniform([N, N])
  def _matmul(x, y):
    # one [N, N] x [N, N] matrix multiplication; y is passed through
    # so the call can be chained COUNT times
    return tf.tensordot(x, y, axes=[[1], [0]]), y
  return tf.reduce_sum(
      tf.contrib.tpu.repeat(COUNT, _matmul, [x, y])
  )
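
For the curious, the TFLOPS figures come from timing that op and dividing the number of floating-point operations performed by the elapsed time. Roughly (a sketch of my own, not the exact notebook cell):

import time

with tf.Session() as sess:
  start = time.time()
  sess.run(cpu_flops())
  elapsed = time.time() - start
  # each [N, N] matmul costs about 2 * N^3 floating-point operations,
  # and the loop repeats it COUNT times
  print('CPU TFlops: {:.2f}'.format(2 * N ** 3 * COUNT / elapsed / 1e12))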

CPU v/s GPU v/s TPU — Time-series prediction via Google Colab


Running the