iOS 12 Core ML Benchmarks

By Jameson Toole

With the A11 Bionic’s Neural Engine in last year’s iPhone X, Apple introduced its first chip for hardware-accelerated AI. This year, Apple promised huge performance gains for Core ML in its iOS 12 update, claiming models run 9X faster and use 1/10th the energy on the new A12 Bionic processor. Early reports suggested that Apple had delivered, but I wanted to be sure.

At Fritz, we collect performance data every time a model runs on a user’s device to make sure experiences stay consistent. I looked at real-world data from our open-source Heartbeat app to see how each Apple device stacked up.
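
As a rough illustration of what that per-prediction timing looks like, here’s a minimal sketch of timing a single Vision + Core ML prediction on device. The “ObjectDetector” model name is a placeholder for whatever compiled model your app bundles, and this is not Fritz’s actual instrumentation.

```swift
import CoreML
import CoreGraphics
import Foundation
import Vision

// Minimal sketch: time one Vision + Core ML prediction on a CGImage.
// "ObjectDetector" is a placeholder for a compiled model bundled with the app.
func timePrediction(on image: CGImage) throws -> TimeInterval {
    guard let modelURL = Bundle.main.url(forResource: "ObjectDetector",
                                         withExtension: "mlmodelc") else {
        fatalError("ObjectDetector.mlmodelc not found in the app bundle")
    }
    let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
    let request = VNCoreMLRequest(model: visionModel)
    let handler = VNImageRequestHandler(cgImage: image, options: [:])

    // Wall-clock time around the synchronous perform(_:) call.
    let start = CFAbsoluteTimeGetCurrent()
    try handler.perform([request])
    return CFAbsoluteTimeGetCurrent() - start
}
```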

Core ML performance by device. Higher is better. Note that the y-axis is logarithmic. Data from Fritz.

This Core ML model runs over 10X faster on the A12 processor in the iPhone XS Max than on the iPhone X. The model above performs object detection; results vary from model to model, and the smallest speed-up I saw was around 5X. I also found it interesting that the A10X Fusion processor in the 2018 iPad beat out the iPhone X. In other benchmarks the two processors appear fairly similar, but perhaps there are differences in memory.
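
For reference, here’s how raw per-prediction latencies turn into the “higher is better” throughput numbers and speed-up factors quoted above. The latencies below are made up for illustration, not the actual measurements behind the chart.

```swift
import Foundation

// Illustrative latencies in seconds; not the actual measurements from the chart.
let iPhoneXLatencies: [TimeInterval] = [0.210, 0.205, 0.215]
let iPhoneXSMaxLatencies: [TimeInterval] = [0.020, 0.019, 0.021]

// Throughput in predictions per second from a set of latency samples.
func throughput(_ latencies: [TimeInterval]) -> Double {
    let mean = latencies.reduce(0, +) / Double(latencies.count)
    return 1.0 / mean
}

let speedup = throughput(iPhoneXSMaxLatencies) / throughput(iPhoneXLatencies)
print(String(format: "Speed-up: %.1fX", speedup)) // ~10.5X with these sample numbers
```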

I also noticed that the transition from iOS 11 to iOS 12 improved performance of existing models on every device.

Core ML models run 38% faster on iOS 12 compared to iOS 11. Lower is better. Data from Fritz.
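
One way to read that figure, with hypothetical numbers: a 38% improvement here corresponds to latency dropping by 38% when the same model runs on iOS 12.

```swift
import Foundation

// Hypothetical latencies for the same model on the same device; illustrative only.
let iOS11Latency: TimeInterval = 0.100 // seconds
let iOS12Latency: TimeInterval = 0.062

// Fractional reduction in latency: (0.100 - 0.062) / 0.100 = 0.38, i.e. 38% faster.
let improvement = (iOS11Latency - iOS12Latency) / iOS11Latency
```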

It’s not often that we see an order-of-magnitude performance increase in a technology year over year. We’re just at the beginning of an incredible wave of mobile experiences powered by on-device machine learning. Processors like the A12 are going to make it happen.

Discuss this post on Hacker News

If you want to measure the performance of Core ML and TensorFlow models in your app, get started with Fritz today.