Windows 10 Feature Focus: .NET Native
by Brett Howse on October 2, 2015 9:00 AM EST
Posted in Windows 10
Programming languages seem to be cyclical. Low-level languages give developers the chance to write very fast code with the minimum of instructions necessary, but the closer the code is to the hardware, the more difficult it becomes to write, and the developer needs a good grasp of the hardware to get the best performance. In theory, everything could be written in assembly language, but that has obvious limitations. Over time, programming languages have been abstracted away from the hardware they run on, which gives developers the advantage of not having to micro-manage their code, and the same code can be compiled for different architectures.
In the end, the processor just executes machine code, and the job of moving from a developer’s mind to machine code can be done in several ways. Two of the most common are Ahead-of-Time (AOT) and Just-in-Time (JIT) compilation. Each has its own advantages, but AOT can yield better performance because the CPU is not translating code on the fly. For desktops this has not necessarily been a big issue, since they generally have processing power to spare, but in the mobile space processors are much more limited in resources, especially power.
With .NET Native, Microsoft now allows C# code to be pre-compiled to native code, eliminating the need to fire up the majority of the .NET Framework and runtime, which saves time at app launch. Visual Studio can perform the compile to native code, but Microsoft is implementing quite a different system than Google has with Android’s ART. Rather than having the developer do the compilation and upload the result to the cloud, and rather than having the code compiled on the device, Microsoft will do the compilation itself once the app is uploaded to the store. This lets them use their very well-known C++ compiler, and any tweaks they make to the compiler going forward can be applied to all .NET Native apps in the store without requiring the developer to recompile. Microsoft's method is actually very similar to Apple's latest changes as well, since Apple will also recompile apps on their end if they change their compiler.
They have added some pretty interesting functionality to their toolkit to enable .NET Native. Let’s take a look at the overview.
Starting with C# code, the source is compiled into Intermediate Language (IL) using the same compiler used for any C# code destined to run on the .NET Framework under the JIT. The resulting IL is exactly what you would get if you intended to run the code under the JIT; if you were not going to compile to native, you would stop right here.
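To make this first step concrete, here is a minimal sketch: a trivial C# method and, in the comment, roughly the stack-machine IL the C# compiler produces for it (the class and method names are illustrative; the exact listing can be inspected with a disassembler such as ildasm).

```csharp
using System;

class IlDemo
{
    // The C# compiler lowers this method to stack-machine IL,
    // roughly:
    //   ldarg.0   // push argument a
    //   ldarg.1   // push argument b
    //   add       // pop both, push the sum
    //   ret       // return top of stack
    public static int Add(int a, int b) => a + b;

    static void Main() => Console.WriteLine(Add(2, 3)); // prints 5
}
```

Whether this IL is later JIT-compiled at runtime or fed into the .NET Native toolchain, the input at this stage is identical.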
To move this IL to native code, it is next run through the Intermediate Language Compiler (ILC). Unlike a JIT compiler, which runs on the fly while the code is executing and only ever sees a tiny portion of the program, the ILC can see the entire program and can therefore make much larger optimizations. The ILC also has access to the .NET Framework, so it can pull in the code needed for standard functions. For any WinRT calls, the ILC actually generates C# code so that the framework does not have to be invoked during execution; that generated C# is fed back into the toolchain as part of the overall project so that these calls become static calls in the native code. The ILC then performs any required transforms on the C# code: C# can rely on the framework for certain functions, and these need to be transformed since the framework will not be invoked once the app is compiled as native. The output of the ILC is a single file containing all of the code, all of the optimizations, and whatever parts of the .NET Framework are necessary to run the app. Next, this monolithic output is put through a Tree Shaker, which examines the entire program, determines which code is and is not used, and expunges code that will never be called.
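A minimal sketch of what the Tree Shaker can exploit (the class and method names here are hypothetical): because the whole program is visible at compile time, anything not reachable from a root such as Main can be dropped from the native image.

```csharp
using System;

class ShakerDemo
{
    static void Main() => Console.WriteLine(Greet()); // prints hello

    public static string Greet() => "hello";

    // Never called from Main or anywhere else: a whole-program
    // tree shaker can prove this method (and any framework code
    // that only it uses) is unreachable and omit it from the
    // final native image entirely.
    public static string NeverCalled() => Environment.MachineName;
}
```

A JIT has no equivalent opportunity: it compiles methods one at a time as they are first invoked, so it never gets to reason about the program as a whole.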
The resultant output from ILC.exe is Machine Dependent Intermediate Language (MDIL), which is very close to native code but retains a light layer of abstraction over it. MDIL contains placeholders which must be replaced, via a process called binding, before the code can run.
The binder, called rhbind.exe, turns the MDIL into a final set of outputs: a .exe file and a .dll file. This final output runs as native code, but it still relies on a minimal runtime, which contains the garbage collector.
The process might seem pretty complicated, with a lot of steps, but there is a method to the madness. By keeping the developer’s code as IL and treating it just like non-native code, the debugging process is much faster, since the IDE can run the code in a virtual machine with the JIT and avoid having to recompile everything to native for each debug session.
The final compile will be done in the cloud for any apps going to the store, but the tools will be available locally as well to assist with testing the app before it gets deployed to ensure there are no unseen consequences of moving to native code.
All of this work is done for really one reason: performance. Microsoft’s numbers for .NET Native show up to 60% quicker cold startup times for apps, and 40% quicker launches from a warm startup. Memory usage will also be lower, since the operating system will not need to keep the full runtime active at the same time. All of this is important for any system, but especially for low-powered tablets and phones.
Microsoft has been testing .NET Native with two of their own apps: both Wordament and Fresh Paint have been running as native code on Windows 8.1. This is the case for both the ARM and x86 versions of the apps, and compilation in the cloud ensures that each device downloading an app gets the correct executable for its platform.
.NET Native has been officially released with Visual Studio 2015, and is available for use with the Universal Windows Platform, and therefore apps in the store. At this time it is not available for desktop apps, although that is certainly something that could come in a future update. That is not surprising though, since Microsoft really wants to move developers to the new app platform anyway.
There is a lot more under the covers to move from managed code to native code, but this is a high level overview of the work that has been done to provide developers with a method to get better performance without having to move away from C#.
Comments
Gigaplex - Friday, October 2, 2015
This is definitely a boon for app startup times, but isn't likely to make much of a difference in performance of compute-heavy codepaths. It is extremely difficult to write C# code that is CPU cache friendly, for example; the whole concept of everything being references makes it near impossible for the prefetcher to work. No amount of native compilation will fix that, the data structures themselves need to change.
I hope this comes to desktop applications soon, as the startup times on some of our large in-house C# projects at work are abysmal.
valeriob - Friday, October 2, 2015
On the contrary, C# value types enable exactly that.
Gigaplex - Friday, October 2, 2015
C# value types often don't help much when you're in a loop, because most of the container classes break cache locality. Then there's the boxing/unboxing overhead undoing some of your optimisations. Plus, if you strictly stick to value types, you lose a lot of the value of C#, so you may as well use something better suited for that task.
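A minimal sketch of the boxing cost being discussed (nothing here is specific to .NET Native): each int stored in a non-generic container is boxed into a separate heap object, while a generic List&lt;int&gt; stores the values inline.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    public static int Demo()
    {
        // ArrayList stores object references, so adding an int
        // boxes it: a copy of the value is allocated on the heap.
        var legacy = new ArrayList();
        legacy.Add(42);          // box
        int a = (int)legacy[0];  // unbox (copy the value back out)

        // List<int> keeps the ints inline in its backing array:
        // no boxing, and better cache locality when iterating.
        var generic = new List<int> { 42 };
        int b = generic[0];

        return a + b;
    }

    static void Main() => Console.WriteLine(Demo()); // prints 84
}
```

The boxed ints end up scattered across the heap, which is exactly the prefetcher-unfriendly layout described above.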
valeriob - Saturday, October 3, 2015
Good point, but we're talking about general purpose applications, and it's good enough; if you need more, it's easy to link a C++ component for the heavy lifting :D
HollyDOL - Tuesday, October 6, 2015
With .NET 4.6 you can avoid this hit if you use SIMD/AVX2 compatible looping with RyuJIT. Of course it's not always possible or always easy, but there is a way.
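A minimal sketch of the System.Numerics SIMD support being referred to (this assumes .NET 4.6 with RyuJIT, where these operations compile down to SIMD instructions; on older JITs the same code still runs, just without hardware acceleration):

```csharp
using System;
using System.Numerics;

class SimdDemo
{
    static void Main()
    {
        // Reports whether RyuJIT is vectorizing Vector<T> operations.
        // Lane count depends on hardware: e.g. 4 floats with SSE,
        // 8 with AVX2.
        Console.WriteLine(Vector.IsHardwareAccelerated);

        var a = new Vector<float>(1.0f); // every lane = 1.0
        var b = new Vector<float>(2.0f); // every lane = 2.0
        var sum = a + b;                 // one SIMD add across all lanes
        Console.WriteLine(sum[0]);       // prints 3
    }
}
```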
FieryUP - Friday, October 2, 2015
I don't think CPU cache friendliness is all that important with C#. For performance code you might as well pick C++ over C#, or move the most performance-critical code sections to C++ while keeping the UX and the general majority of the code in C#. VS2015 is very good at that: I have a solution with 3 app projects for 3 different platforms (WP8.1, Win8.1 Store Apps, Win10 UWP), and 2 additional projects that are shared by the 3 app projects. One of the shared projects is C#, the other is C++. It all works very smoothly, no matter what app platform or target ISA I choose.
Cogman - Friday, October 2, 2015
Not every type in C# is a reference type. There are also value types that you can define, although they have different rules from reference types. Further, you are allowed to do stack allocations in C#.
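A minimal sketch of this point (the Point struct is hypothetical): an array of a user-defined value type is a single contiguous block of memory, with no per-element heap allocation, which is the cache-friendly layout the thread is debating.

```csharp
using System;

// A user-defined value type: instances are stored inline, so an
// array of these is one contiguous block of memory -- much
// friendlier to the prefetcher than an array of references to
// objects scattered around the heap.
struct Point
{
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }
}

class ValueTypeDemo
{
    public static int SumOfY()
    {
        var points = new Point[3]; // one allocation, elements inline
        for (int i = 0; i < points.Length; i++)
            points[i] = new Point(i, i * 2);

        int sum = 0;
        foreach (var p in points)
            sum += p.Y;
        return sum;
    }

    static void Main() => Console.WriteLine(SumOfY()); // 0 + 2 + 4: prints 6
}
```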
gamoniac - Monday, October 5, 2015
For compute-intensive tasks, C# does support the use of unsafe blocks, which allow you to access unmanaged memory and use the pointer and address-of operators. While I have not personally benchmarked it, it is said to be faster than invoking C functions directly using DllImport because it does not need to switch out of the .NET environment.
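A minimal sketch of an unsafe block (the summing code is illustrative, and the project must be compiled with the /unsafe switch): fixed pins a managed array so it can be walked with a raw pointer, with no marshaling across a native boundary.

```csharp
using System;

class UnsafeDemo
{
    // Requires compiling with the /unsafe compiler switch.
    public static unsafe int Sum(int[] data)
    {
        int sum = 0;
        // 'fixed' pins the array so the garbage collector cannot
        // move it while we hold a raw pointer into it.
        fixed (int* p = data)
        {
            for (int i = 0; i < data.Length; i++)
                sum += p[i]; // raw pointer access, no bounds checks
        }
        return sum;
    }

    static void Main() => Console.WriteLine(Sum(new[] { 1, 2, 3, 4 })); // prints 10
}
```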
danbob999 - Friday, October 2, 2015
"Microsoft will be doing the compilation themselves once the app is uploaded to the store"
I still don't understand why Google isn't doing the same. Remove the compiler from the devices, and compile in the cloud. There are only 2-3 targets anyways (ARM32, ARM64, x86-64)
Flunk - Friday, October 2, 2015
It's a whole different thing. High-level Android apps are already platform independent because they're written in Java. The whole point of the NDK is platform-specific native code.