We achieved this feat using y-cruncher, a Pi-benchmark program developed by Alexander J. Yee, running on a Google Compute Engine virtual machine cluster. At 31.4 trillion digits, the result is almost 9 trillion digits more than the previous world record, set in November 2016 by Peter Trueb. Yee independently verified the calculation using Bellard's formula and the BBP formula. Here are the last 97 digits of the result:

`6394399712 5311093276 9814355656 1840037499 3573460992`

`1433955296 8972122477 1577728930 8427323262 4739940`

You can read more details of this record from y-cruncher's perspective in Yee’s report.
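The BBP formula used in the verification is notable because it can produce hexadecimal digits of π at an arbitrary position without computing any of the earlier digits, which makes it a convenient spot check against a full calculation. Here's a minimal Python sketch of BBP digit extraction (an illustration of the technique, not the verification code actually used):

```python
def bbp_pi_hex_digit(d):
    """Return the hexadecimal digit of pi at position d (0-based, after the point)."""
    def series(j, d):
        # Fractional part of sum over k of 16^(d-k) / (8k + j),
        # using 3-argument pow for modular exponentiation on the head terms.
        s = 0.0
        for k in range(d + 1):
            s += pow(16, d - k, 8 * k + j) / (8 * k + j)
            s %= 1.0
        # Tail terms, where 16^(d-k) is already a small fraction.
        t, k = 0.0, d + 1
        while True:
            term = 16 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return s + t

    # BBP: pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
    x = 4 * series(1, d) - 2 * series(4, d) - series(5, d) - series(6, d)
    return int(16 * (x % 1.0))
```

For example, `bbp_pi_hex_digit(0)` returns 2, matching π = 3.243F6A88… in hexadecimal.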

**A constant race**

Granted, most scientific applications don’t need π beyond a few hundred digits, but that isn’t stopping anyone: since 2009, engineers have used customized personal computers to calculate trillions of digits of π. In fact, the race to calculate more digits of π has only accelerated of late, with computer scientists using it to test supercomputers and mathematicians using it to compete against one another.

However, the complexity of the Chudnovsky formula, a common algorithm for computing π, is *O*(*n* (log *n*)^{3}). In layman’s terms, this means that the time and resources needed to calculate digits grow more rapidly than the number of digits itself. Furthermore, it becomes harder to survive a potential hardware outage or failure the longer the computation runs.
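A back-of-the-envelope sketch makes that growth concrete (the cost function below is just the asymptotic model, not y-cruncher's actual cost accounting):

```python
import math

def relative_cost(n):
    """Relative work to compute n digits under the O(n (log n)^3) model."""
    return n * math.log(n) ** 3

# Going from 10 trillion to 20 trillion digits: doubling the digit
# count more than doubles the work under this model.
ratio = relative_cost(20e12) / relative_cost(10e12)
# ratio is roughly 2.14, i.e. ~7% more than a straight doubling
```

And this model ignores the practical costs that also grow with *n*, such as memory, storage, and the widening window for hardware failures.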

For our π calculation, we decided to go to the cloud. Using Compute Engine, Google Cloud’s high-performance infrastructure-as-a-service offering, has a number of benefits over using dedicated physical machines. First, Compute Engine’s live migration feature lets your application keep running while Google takes care of the heavy lifting needed to keep its infrastructure up to date. We ran 25 nodes for 111.8 days, or 2,795 machine-days (7.6 machine-years), during which time Google Cloud performed thousands of live migrations, uninterrupted and with no impact on the calculation.

Running in the cloud also let us publish the computed digits entirely as disk snapshots. In less than an hour, and for as little as $40 a day, you can copy the snapshots, work on the results, and then dispose of the compute resources. Before the cloud, the only feasible way to distribute such a large dataset was to ship physical hard drives.
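As a sketch of that workflow with the gcloud CLI (the snapshot, disk, instance, and zone names below are hypothetical placeholders, not the published snapshot IDs):

```shell
# Create a disk from the published snapshot
# (all resource names here are hypothetical placeholders).
gcloud compute disks create pi-digits \
    --source-snapshot=pi-digits-snapshot \
    --zone=us-central1-a

# Spin up a small VM and attach the disk to browse the results.
gcloud compute instances create pi-reader \
    --zone=us-central1-a \
    --machine-type=n1-standard-1

gcloud compute instances attach-disk pi-reader \
    --disk=pi-digits \
    --zone=us-central1-a
```

When you're finished, deleting the instance and disk stops the billing.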

Then there are the general benefits of running in the cloud, such as access to a broad selection of hardware, including the latest Intel Skylake processors with AVX-512 support. You can scale instances up and down on demand and delete them when you’re done, paying only for what you used.

Here are additional details about the program: