While the iPhone XS, iPhone XS Max, and iPhone XR used a standard A12 Bionic, Apple’s 2018 iPad Pro followed in the footsteps of prior iPad Pro models with the A12X, adding extra cores and more RAM to handle the demands of the larger tablet.
In fact, Apple had done this so many times with its A-series chips — starting with the A8X in the 2014 iPad Air 2 — that most of us felt it was a given that the 2020 iPad Pro would continue the trend into the A13X. Instead, we got the A12Z, and we really weren’t quite sure what to make of this new chip.
Apple’s announcement had a bit of “marketing-speak” suggesting it might offer some optimizations for the new LiDAR Scanner, but the statement — “enhanced by computer vision algorithms on the A12Z Bionic for a more detailed understanding of a scene” — was vague at best, and didn’t suggest that these optimizations would translate to faster speeds.
How Much Faster Is the 2020 iPad Pro?
Now that the new iPad Pro is out in the wild, the benchmarks have started to come in, and not surprisingly, the performance improvement is so slight that most users likely won’t notice any practical difference between the 2018 iPad Pro and the new 2020 models.
According to benchmarks shared by iPhone in Canada, the 2020 iPad Pro scored 712,218 points in an AnTuTu benchmark, compared to the 705,585 scored by the A12X Bionic in the 2018 iPad Pro. Boiling it right down to core CPU performance, the numbers got even closer: 187,572 for the A12Z versus 186,186 for the A12X.
The bulk of the otherwise slight performance increase seems to be concentrated on the GPU, which scored 373,781 as opposed to 345,016 for the A12X Bionic. This makes some sense as Apple did explain that the A12Z Bionic features an 8-core GPU, while the A12X only had seven GPU cores.
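To put those numbers in perspective, here’s a quick back-of-the-envelope calculation — a simple Python sketch using only the scores quoted above — showing the percentage gain of the A12Z over the A12X in each category:

```python
# Relative performance gains of the A12Z over the A12X, using the
# AnTuTu scores quoted above (via iPhone in Canada).
def pct_gain(new: int, old: int) -> float:
    """Percentage improvement of `new` over `old`, rounded to one decimal."""
    return round(100 * (new - old) / old, 1)

overall = pct_gain(712_218, 705_585)  # total AnTuTu score
cpu = pct_gain(187_572, 186_186)      # CPU sub-score
gpu = pct_gain(373_781, 345_016)      # GPU sub-score

print(f"Overall: {overall}%  CPU: {cpu}%  GPU: {gpu}%")
```

The CPU difference works out to well under one percent — essentially within benchmark run-to-run variance — while the GPU gain of roughly eight percent is where nearly all of the measured improvement lives.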
So What Is the A12Z, Anyway?
In fact, a new report by Notebook Check suggests that the A12Z is actually just a renamed A12X with an enabled GPU core. It turns out that the A12X Bionic had eight physical GPU cores all along, but for whatever reason Apple left one of those cores disabled in the 2018 iPad Pro, which was the only device to ever use the A12X chip.
In other words, Apple most likely didn’t design an entirely new variation on the A12X for the 2020 iPad Pro, but rather took its existing design and just “switched on” the eighth GPU core. Notebook Check speculates that this may have simply been done to save Apple from having to develop an entirely new A13X chip so late in the cycle. After all, it’s almost certain that we’re going to see an A14 chip arrive with the new iPhones this fall, and Apple could simply choose to skip over the A13X and go right to the A14X when the next lineup of iPad Pro models arrives, which some reports suggest could still be later this year.
Of course, we won’t know for sure until someone conducts a more in-depth “floorplan analysis” of the A12Z, but it seems like as good an explanation as any.
What It All Means
In practical terms, this just adds more evidence that the 2020 iPad Pro is really an incremental upgrade from the 2018 lineup, and all you’re really getting is the LiDAR Scanner and the extra ultra-wide camera.
That said, we’ve never heard any performance complaints about the older A12X-equipped 2018 iPad Pro, so we doubt anyone is buying a new model just because it’s faster. Apple’s A12X Bionic was already faster than just about every other mobile chip out there, and even the additional power added by the A13 in the iPhone 11 isn’t really felt by the end user — it’s there to power advanced machine learning and computational photography features.
Further, Apple’s iPad Air and iPad mini only feature the same A12 chip found in the iPhone XS and iPhone XR, the 10.2-inch iPad released last fall is still sporting the 2017-era A10, and so is the seventh-generation iPod touch.
When it comes to CPU and GPU performance on Apple devices, we entered a point of diminishing returns a few years ago — probably around the time of the A10 — and it’s no longer about building faster CPUs directly for the user experience, but rather using all of that new overhead to power the kind of features that we once wouldn’t have dreamed possible.