After re-reading your question, I think we were momentarily talking past each other. I'll do my best to answer it here.
There are a few distinct advantages to having a "higher half" kernel.
1. User space is a lot cleaner. On 64-bit, any address without the top bit set is user space, and any address with it set is kernel space. Easy delineation (see the sketch just after this list).
2. User space addresses start at 0! This means that 32-bit applications can be supported, since they can't address anything above the 4GB mark. Sure, you could load the code high (as long as it is PIC) and put the data low, but that's just messy. If we KNOW the kernel is in the upper half, we can just load the 32-bit program where it wants to be.
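As an illustration, here's a minimal sketch of that top-bit test (the label name is made up; note that real x86-64 addresses must also be canonical, i.e. bits 63 through 48 are copies of bit 47 on a 48-bit implementation):
Code:
# given a virtual address in %rdi, test which half it is in
btq  $63, %rdi           # copy bit 63 into the carry flag
jc   its_a_kernel_addr   # CF=1: top bit set -> kernel half
                         # CF=0: top bit clear -> user half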
So, we have some good reasons to want the kernel high, but why at -2GB specifically? Well, if you don't, you will run into issues with the code model. gcc (and I assume clang) uses the code model to decide which relocations to emit for the linker. If you specify "large", an address can be "anywhere", so the compiler must emit code which is basically like this:
Code:
movabs $address, %rax   # load the full 64-bit address into a register
callq  *%rax            # then call through the register
Which is terribly inefficient, but will work...
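For contrast, under the smaller code models the compiler can emit a direct call that the linker patches with a single 32-bit relocation (a sketch, with a made-up symbol name):
Code:
callq my_function   # one instruction; the 4-byte displacement is
                    # filled in via a 32-bit PC-relative relocation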
If we specify "small", it'll do things sanely, but you may run into mysterious linker errors like "relocation truncated to fit: R_X86_64_32...", which can be terrible to deal with. The small model assumes everything lives in the low 2GB of the address space, so a higher-half address simply won't fit in a 32-bit relocation. My motivation for this post was that my kernel compiled fine in debug builds, but not in release builds, because gcc was choosing different relocation sizes than I wanted and I was getting errors during linking.
The solution is the "kernel" model, which is what the Linux kernel uses. It tells the compiler to "assume the code is loaded in the top 2GB" (i.e. at -2GB), allowing it to safely use sign-extended 32-bit relocations for all calls. Runtime data (your heap, physical-memory mappings, and so on) can of course still live wherever you like. This makes things with the linker "just work".
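Concretely, that means compiling with -mcmodel=kernel and linking the kernel at 0xFFFFFFFF80000000, which is -2GB in two's complement. A minimal linker script sketch, assuming a GNU ld setup (real scripts also deal with the physical load address, alignment, and so on):
Code:
/* linker.ld (excerpt); compile with gcc -mcmodel=kernel
   (plus the usual freestanding flags like -ffreestanding, -mno-red-zone) */
SECTIONS
{
    . = 0xFFFFFFFF80000000;    /* -2GB: where the kernel model expects us */
    .text   : { *(.text*) }
    .rodata : { *(.rodata*) }
    .data   : { *(.data*) }
    .bss    : { *(.bss*) }
}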
I hope this answers your question.