Earlier today we published our first results using GLBenchmark 2.5, the long-awaited update to one of our most frequently used mobile GPU benchmarks. As a recap, here's a quick introduction to the new benchmark:

GLBenchmark 2.5.0 primarily addresses shortcomings of the previous version's Egypt test by moving to a far more demanding version of the scene. The new game test, named Egypt HD, uses a much more complex scene while keeping roughly the same camera animation: geometry count is up, texture resolution is higher, and there's a new water shader along with more reflections and more shadowing. The offscreen test also moves to a default resolution of 1080p instead of the previous 720p, for a more challenging workload. The offscreen resolution is now customizable, but we'll be running 1080p for ease of comparison. The "classic" Egypt test remains a part of GLBenchmark 2.5.0 for those wishing to compare to 2.1.5, and the triangle and fill subtests also stick around for a lower-level look at OpenGL ES 2.0 performance. It should go without saying, but GLBenchmark 2.5 is still an OpenGL ES 2.0 test.

In our first article we ran GLBenchmark 2.5 on devices based on Samsung's Exynos dual and quad SoCs, Qualcomm's Snapdragon S4, NVIDIA's Tegra 2/3 and TI's OMAP 4. We had problems getting older devices to run, which is why we only had the abridged set of starting data on Android. In addition, GLBenchmark 2.5 for Android only supports Android 3.x and up. 

GLBenchmark 2.5 is already available in the Google Play store; however, the iOS version isn't quite ready for public release. Thankfully we've been able to get our hands on the iOS version early and now have results for the new iPad, the iPad 2, and the iPhone 4S. Just as before, we've split results into tablet and smartphone performance. Let's tackle the tablets first.

Tablet Performance in GLBenchmark 2.5

Apple's A5X SoC is a beast as we found in our investigation of the chip earlier this year. The SoC marries a quad-core PowerVR SGX 543 with a quad-channel memory controller, good for up to 12.8GB/s of memory bandwidth. The combination of the two is a GPU that significantly outperforms anything else on the market today.
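The 12.8GB/s figure follows directly from the memory configuration. As a quick sanity check, here's the arithmetic, assuming four 32-bit LPDDR2 channels running at an effective 800MT/s data rate (these channel parameters are our reading of the quad-channel controller, not an Apple-published spec):

```python
# Back-of-the-envelope check of the A5X's 12.8GB/s peak bandwidth figure.
# Assumption: four 32-bit LPDDR2 channels at an 800MT/s effective data rate.
channels = 4
bytes_per_transfer = 32 // 8   # each channel is 32 bits (4 bytes) wide
transfers_per_sec = 800e6      # 800 MT/s effective data rate

bandwidth_gbs = channels * bytes_per_transfer * transfers_per_sec / 1e9
print(bandwidth_gbs)  # -> 12.8
```

For comparison, a dual-channel design with the same memory would top out at half that, which is part of why the A5X pulls so far ahead of its contemporaries in fill-rate-bound tests.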

The A5X's dominance extends to GLBenchmark 2.5. Low level performance ranges from 33% faster than NVIDIA's Tegra 3 on the low end to over 3x the performance at the high end. Even Apple's A5 found in the iPad 2 tends to be the second fastest SoC in these tests.

GLBenchmark 2.5 - Fill Test

GLBenchmark 2.5 - Fill Test (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test

GLBenchmark 2.5 - Triangle Texture Test (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test - Vertex Lit

GLBenchmark 2.5 - Triangle Texture Test - Vertex Lit (Offscreen 1080p)

GLBenchmark 2.5 - Triangle Texture Test - Fragment Lit

GLBenchmark 2.5 - Triangle Texture Test - Fragment Lit (Offscreen 1080p)

Egypt HD continues to be unrelenting in its punishment of mobile SoCs. The A5X is only just able to keep pace with the A5, as the new iPad's significantly higher native resolution eats into its extra GPU horsepower. Even the A5, running at 1024 x 768, can only muster 22 fps in the new Egypt HD test. It's going to take another generation of mobile GPUs to really sustain playable frame rates here.

The offscreen test runs without vsync enabled and at a standard 1080p resolution for all devices. In the case of the new iPad this works out to be a significantly lighter workload, which is responsible for the 20% higher frame rate. Even at 1080p though, the A5X isn't enough to sustain 30 fps.
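The size of that workload reduction is easy to quantify from the pixel counts alone (the new iPad's native panel vs the 1080p offscreen target):

```python
# Pixel-count comparison behind the new iPad's onscreen/offscreen gap.
native_pixels = 2048 * 1536     # new iPad's Retina display
offscreen_pixels = 1920 * 1080  # GLBenchmark 2.5 offscreen default

ratio = native_pixels / offscreen_pixels
print(round(ratio, 2))  # -> 1.52
```

The onscreen test pushes roughly 1.5x as many pixels as the offscreen run, yet the frame rate only improves by about 20% rather than 50%; fill rate isn't the only bottleneck, since vertex and driver overhead don't scale down with resolution.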

GLBenchmark 2.5 - Egypt HD

GLBenchmark 2.5 - Egypt HD (Offscreen 1080p)

The classic Egypt benchmark once again gives us a good indication of present-day gaming performance, which for the most part is fine on any of the latest SoCs.

GLBenchmark 2.5 - Egypt Classic

GLBenchmark 2.5 - Egypt Classic (Offscreen 1080p)

Smartphone Performance in GLBenchmark 2.5
Comments

  • darkcrayon - Wednesday, August 1, 2012 - link

    Designed by Apple.

    Doesn't Samsung pretty much reserve the Exynos exclusively for (some of) their own tablets and phones?
  • UpSpin - Thursday, August 2, 2012 - link

    The Meizu MX uses an Exynos. I think Samsung will gladly license their SoCs to others, but currently the others prefer Krait, OMAP, Tegra, ... Probably because Samsung doesn't promote their Exynos and does not support it as good for third parties as the others do with their SoCs.
  • UltraTech79 - Wednesday, August 8, 2012 - link

    Yeah so sad, unless you're not full of yourself and buy the product that is best regardless of your weak bs brand loyalties.
  • Draiko - Wednesday, August 1, 2012 - link

    Could there be a nasty performance hit due to Android?

    Here's what I'm seeing:

    Based on specs, clockspeeds, and resolution differences, the Apple A5 (SGX543MP2) in the iPad 2 should be posting ~2.5x-3x better performance vs the OMAP4430's SGX540 GPU in the Xyboard.

    The A5 seems to be posting 4x-5x performance over the OMAP4430.

    There are very few differences between the SGX540 and SGX543 on paper... the 543 supporting OpenGL 2.1 vs 2.0 for example but it doesn't seem like those differences would explain the actual performance differences we're seeing here.

    Am I missing something, or could this suggest that the hardware isn't really showing its true colors on Android?
  • UpSpin - Thursday, August 2, 2012 - link

    The SGX543MP2 is three times faster than the SGX540.
    Now you can change the clock speed to further improve things. And then there's the memory: the more memory bandwidth, the faster the GPU (see the performance boost in the Asus Infinity due to its faster RAM). I don't know the numbers, but it's likely that the A5 has more memory bandwidth.

    And finally there's the software. I don't think it's right to say 'performance hit due to Android', because the benchmark is written in native code on Android, so there's no VM overhead. It's more about how they optimize it for the specific operating system. Because the source code is unknown, they can do whatever they want; they don't have to stay fair or invest the same amount of energy optimizing the app for the Tegra 3 or OMAP processors as they did for the Apple SoCs. Such benchmarks, especially when run on totally different and incompatible operating systems, are always difficult to compare. Additionally, different GPUs offer different special features which enhance image quality. But how can you compare those incompatible features across SoCs that use different techniques? So these benchmarks only test raw processing power; games might still look worse on a faster GPU due to the lack of those special features (see comparisons between Tegra 3 and iOS versions of games: the same games for Tegra 3 have water effects, smoke, etc., while the other versions don't).
  • mevensen - Friday, August 3, 2012 - link

    Can't that be said about most Android software vs iOS software as well? iOS presents a much more limited set of configurations that developers need to address for optimal performance. Android developers have to account for a much wider variety, decreasing the likelihood that any one product will be optimized (a possible exception being products under the Tegra banner, which receive special attention/support from NVIDIA). This means the synthetic benchmarks actually mimic the marketplace, in that iOS software is more likely to be optimized than an Android counterpart.
  • shaolin95 - Friday, August 3, 2012 - link

    My international Note, overclocking the GPU to 400MHz, gets 51fps on that last Egypt test offscreen.
    That is the reason I'm holding out for a Note 2, hopefully featuring an upgraded GPU.
