OSDev.org
https://forum.osdev.org/

Enabling compiler optimizations ruins the kernel
https://forum.osdev.org/viewtopic.php?f=1&t=56499
Page 2 of 5

Author:  devc1 [ Sat Sep 24, 2022 6:37 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

Quote:
The short answer is: there are bugs in your code.

Microsoft says:
Stack-maintenance responsibility: The calling function pops the arguments from the stack.


For a __cdecl call into external assembly code, do I need to pop the arguments myself?

Author:  Octocontrabass [ Sat Sep 24, 2022 9:00 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

devc1 wrote:
Intermediate object files.

The linker hasn't run yet. When you link it into the final binary, the jump should point to the correct address.

devc1 wrote:
I also disable intrinsics, because enabling them and removing my functions gives an "unresolved reference to symbol: memset" error :(
Maybe disabling intrinsics is the reason?

Perhaps you should implement memset so the reference can be resolved. I don't know anything about Microsoft's compiler, but GCC and Clang require you to provide memset, memcpy, memmove, and memcmp.
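For reference, a minimal freestanding set can be as simple as this (a byte-at-a-time sketch; real kernels usually optimize these heavily):
Code:
#include <stddef.h>

void *memset(void *dst, int c, size_t n)
{
    unsigned char *d = dst;
    while (n--) *d++ = (unsigned char)c;
    return dst;
}

void *memcpy(void *restrict dst, const void *restrict src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--) *d++ = *s++;
    return dst;
}

void *memmove(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    if (d < s)
        while (n--) *d++ = *s++;
    else
        while (n--) d[n] = s[n];   /* copy backwards when regions overlap */
    return dst;
}

int memcmp(const void *a, const void *b, size_t n)
{
    const unsigned char *x = a, *y = b;
    for (; n--; x++, y++)
        if (*x != *y) return *x - *y;
    return 0;
}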

devc1 wrote:
Microsoft says:
Stack-maintenance responsibility: The calling function pops the arguments from the stack.

For a __cdecl call into external assembly code, do I need to pop the arguments myself?

Microsoft's compiler silently ignores __cdecl when compiling x64 code. You need to follow the appropriate ABI for your compiler.
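For illustration, here is roughly what the Microsoft x64 convention means in practice (AddCoords is a made-up name for an external assembly routine):
Code:
/* First four integer args go in RCX, RDX, R8, R9; the caller reserves
   32 bytes of "shadow space" on the stack before the call; the result
   comes back in RAX. Neither side pops arguments the __cdecl way; the
   caller simply restores RSP. */
extern long long AddCoords(long long x, long long y);  /* defined in .asm */

long long Demo(void)
{
    /* MSVC emits roughly: sub rsp,40 / mov edx,3 / mov ecx,2 / call AddCoords */
    return AddCoords(2, 3);
}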

Why did you choose Microsoft's compiler, anyway? GCC works perfectly well on Windows (with the help of either MSYS2 or WSL).

Author:  devc1 [ Sun Sep 25, 2022 5:24 am ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

I follow Microsoft's calling convention; that part is fine.

I chose MSVC to generate PE images instead of ELF, so I can use DLLs and relocate easily. I borrowed the concept of my kernel being a DLL (like Windows ntoskrnl.exe) and linking drivers against oskrnlx64.lib (my kernel). When loading a driver, the kernel reads its own export table and fills in the relative addresses of the imported symbols for the driver. In the past I had a huge table containing the addresses of the functions, and for each new function I had to manually append its name and address to that table.

The linker does the job automatically when compilation finishes.
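A rough sketch of that export lookup (simplified; the real IMAGE_EXPORT_DIRECTORY has more fields, and these names are illustrative, not the actual code):
Code:
#include <stdint.h>
#include <string.h>

/* Trimmed-down view of the PE export directory: only the fields the
   lookup needs. All Address* members are RVAs relative to ImageBase. */
typedef struct {
    uint32_t NumberOfNames;
    uint32_t AddressOfFunctions;     /* uint32_t[] of function RVAs     */
    uint32_t AddressOfNames;         /* uint32_t[] of name-string RVAs  */
    uint32_t AddressOfNameOrdinals;  /* uint16_t[] index into the above */
} ExportDir;

static void *ResolveExport(uint8_t *ImageBase, const ExportDir *Exp, const char *Name)
{
    const uint32_t *NameRva = (const uint32_t *)(ImageBase + Exp->AddressOfNames);
    const uint16_t *Ordinal = (const uint16_t *)(ImageBase + Exp->AddressOfNameOrdinals);
    const uint32_t *FuncRva = (const uint32_t *)(ImageBase + Exp->AddressOfFunctions);

    for (uint32_t i = 0; i < Exp->NumberOfNames; i++)
        if (strcmp((const char *)(ImageBase + NameRva[i]), Name) == 0)
            return ImageBase + FuncRva[Ordinal[i]];

    return NULL;  /* symbol not exported */
}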

Implementing memset, memcpy, and the rest while intrinsics are enabled gives the following error: "error: intrinsic function "memset" cannot be redefined."

I am too lazy to wait for WSL each time; I just never really liked the idea from the start.

I think I would have to link against the CRT, which would add a few hundred kilobytes to my kernel :(

Author:  devc1 [ Sun Sep 25, 2022 6:42 am ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

I have fixed the intrinsic problem by including "msvcrt.lib" and "libvcruntime.lib". The memcpy works fine, but the triple fault still occurs (also on real hardware with SSE only) here:
Code:
if(ExtensionLevel == EXTENSION_LEVEL_SSE) {
      while(1);
      return _SSE_ComputeBezier(...);
}


The fault does not happen on QEMU without HAXM (of course); it does happen with HAXM, on the old laptop, on VMware, and on VirtualBox.

The problem is fixed just by removing:
Code:
else if(ExtensionLevel == EXTENSION_LEVEL_AVX) {
   return _AVX_ComputeBezier(beta, NumCordinates, percent);
}

This compiler is really weird. However, I don't need the AVX version right now, because I will implement it separately in the graphics engine library.

Edit: even calling my assembly function and putting an infinite loop ("jmp $") in it triple faults. Microsoft says that general optimizations are deprecated, so I will work without them.
Thanks for your help.

It turns out the problem is with the compiler, not my code.

In the assembly output, intrinsics are working fine: I can see the "rep movsb" instead of the call to memcpy.

Devc1,

Author:  Octocontrabass [ Sun Sep 25, 2022 1:13 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

devc1 wrote:
I am too lazy to wait for WSL each time; I just never really liked the idea from the start.

Huh? I don't understand, why would you need to wait for anything?

GCC (and Clang) can produce PE binaries by abusing the Windows target, if you insist on using PE. (ELF can have relocations too!) That's how I compile my UEFI bootloader.
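For example, something along these lines (flags are indicative; exact spellings vary by toolchain version):
Code:
# Clang cross-compiling C to a PE/COFF object, no Windows toolchain needed:
clang --target=x86_64-unknown-windows -ffreestanding -mno-red-zone -c main.c
# lld linking it as a UEFI application:
lld-link /subsystem:efi_application /entry:efi_main /out:BOOTX64.EFI main.o
# Or the MinGW-w64 route:
x86_64-w64-mingw32-gcc -ffreestanding -nostdlib -c main.c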

Author:  devc1 [ Sun Sep 25, 2022 4:26 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

I remember that enabling optimizations on GCC caused the same problem.

I will need to check my kernel.

Author:  AndrewAPrice [ Sun Sep 25, 2022 6:50 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

Every time something broke from changing optimization options, it ended up being something wrong with my code or linker scripts and never the compiler.

Author:  iansjack [ Sun Sep 25, 2022 11:31 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

AndrewAPrice wrote:
Every time something broke from changing optimization options, it ended up being something wrong with my code or linker scripts and never the compiler.

Agreed; a bad workman and all that.

Optimisation often reveals subtle bugs in seemingly good code. But there are code-checking tools that will also point out such errors.

Author:  devc1 [ Mon Sep 26, 2022 4:44 am ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

Do you know of any good code-checking tools?

Author:  iansjack [ Mon Sep 26, 2022 5:06 am ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

Enabling all warnings is a good start. A quick Google will find other tools.
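For example (a starting set, not an exhaustive list):
Code:
gcc -Wall -Wextra -Wshadow -c kernel.c      # crank up the warnings
gcc -fanalyzer -c kernel.c                  # GCC 10+ static analyzer
clang --analyze kernel.c                    # Clang static analyzer
cppcheck kernel.c                           # standalone checker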

Author:  rdos [ Tue Sep 27, 2022 12:43 am ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

devc1 wrote:
I follow Microsoft's calling convention; that part is fine.

I chose MSVC to generate PE images instead of ELF, so I can use DLLs and relocate easily. I borrowed the concept of my kernel being a DLL (like Windows ntoskrnl.exe) and linking drivers against oskrnlx64.lib (my kernel). When loading a driver, the kernel reads its own export table and fills in the relative addresses of the imported symbols for the driver. In the past I had a huge table containing the addresses of the functions, and for each new function I had to manually append its name and address to that table.

kernel32.dll in Windows really isn't the kernel at all. It's just an interface DLL in userspace that interfaces with the kernel. I don't think Windows supports DLLs in kernel space, but I could be wrong.

I register my kernel and user-accessible functions in a table with names and entry points. Then I create calling prototypes for them, in assembly and C, that contain an invalid instruction. That instruction is later patched into either a syscall, a call gate, or a direct call. I think this is a lot better than dynamic linking using DLLs.
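A hedged sketch of that patching idea (names and stub layout invented here, not rdos's actual code):
Code:
#include <stdint.h>
#include <string.h>

/* Each registered function gets a small stub that initially holds an
   invalid opcode (UD2 = 0F 0B), so calling it before registration
   faults loudly. The loader later rewrites the stub in place; this
   variant patches it into a direct call via "mov rax, imm64; jmp rax". */
static void patch_direct_call(uint8_t *stub, void *target)
{
    uint64_t addr = (uint64_t)(uintptr_t)target;
    stub[0] = 0x48;                 /* mov rax, imm64 */
    stub[1] = 0xB8;
    memcpy(&stub[2], &addr, 8);
    stub[10] = 0xFF;                /* jmp rax */
    stub[11] = 0xE0;
}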

Author:  devc1 [ Tue Sep 27, 2022 1:52 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

What does a DLL have to do with system calls?

A DLL (dynamic-link library) is just like a static library you link into your programs using ld, except that it does not get linked at compile time; it gets linked by the OS at runtime.

How is that beneficial? Well, it solves a lot of problems: security updates, syscall changes, and other changes in the kernel would otherwise make existing apps unusable. Instead of recompiling all the apps, I can just edit and recompile the DLL for my new OS version, and every app will keep working as before.

Notice how the NT kernel changes its tables, system calls, and functions in almost every version. Without this indirection, applications would become unusable on the new version.

Author:  nullplan [ Tue Sep 27, 2022 8:51 pm ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

devc1 wrote:
What does a DLL have to do with system calls?

A DLL (dynamic-link library) is just like a static library you link into your programs using ld, except that it does not get linked at compile time; it gets linked by the OS at runtime.

How is that beneficial? Well, it solves a lot of problems: security updates, syscall changes, and other changes in the kernel would otherwise make existing apps unusable. Instead of recompiling all the apps, I can just edit and recompile the DLL for my new OS version, and every app will keep working as before.

Notice how the NT kernel changes its tables, system calls, and functions in almost every version. Without this indirection, applications would become unusable on the new version.
Of course, Linux achieves the same thing at the syscall level by simply only ever appending to the syscall table. No dynamic linking required. Only one architecture ever experienced an update to the syscall mechanism, and the update remains optional to this day; you just get worse performance without it.
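Which is why a raw syscall by number keeps working across kernel versions. An illustrative userspace snippet:
Code:
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* write() has been syscall 1 on x86-64 since the ABI was defined;
       new syscalls get new numbers, old ones never change meaning. */
    syscall(SYS_write, 1, "hello\n", 6);
    return 0;
}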

Author:  devc1 [ Wed Sep 28, 2022 2:24 am ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

This is sooo inefficient according to my optimization plans.

Why? Because system calls are slow.
How am I fixing it? These DLLs will contain heap management, IPC, and other features implemented in user space, and if a program decides to ruin them, it will only damage its own process, not the OS.

For example, instead of a syscall at every malloc, the DLL will contain the allocation tables (in an isolated area), and only when it needs more pages will it make a syscall. A toy sketch of that fast path follows.
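In this sketch, SysAllocPages is a hypothetical syscall wrapper and there is no free(); it only shows the idea:
Code:
#include <stddef.h>
#include <stdint.h>

extern void *SysAllocPages(size_t count);   /* hypothetical: one syscall */

static uint8_t *arena;
static size_t   arena_left;

void *toy_malloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;             /* keep 16-byte alignment */
    if (n > arena_left) {                   /* slow path: enter the kernel */
        size_t pages = (n + 4095) / 4096 + 16;  /* grab slack pages too */
        arena = SysAllocPages(pages);
        if (!arena) return NULL;
        arena_left = pages * 4096;
    }
    void *p = arena;                        /* fast path: pure user space */
    arena += n;
    arena_left -= n;
    return p;
}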

Why make a syscall at every IPC send/get request when we can just share some pages between processes and communicate without system calls? See the sketch below.
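For the shared-page idea, a single-producer/single-consumer ring is the usual shape (a sketch, assuming both processes have the same page mapped):
Code:
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
    _Atomic uint32_t head;          /* advanced by the sender   */
    _Atomic uint32_t tail;          /* advanced by the receiver */
    uint8_t data[4096 - 8];         /* rest of the shared page  */
} SharedRing;

/* Returns 0 when full; the sender can then fall back to a blocking
   syscall instead of burning CPU. No kernel entry on the happy path. */
int ring_put(SharedRing *r, uint8_t byte)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t next = (head + 1) % sizeof r->data;
    if (next == atomic_load_explicit(&r->tail, memory_order_acquire))
        return 0;
    r->data[head] = byte;
    atomic_store_explicit(&r->head, next, memory_order_release);
    return 1;
}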

These are my plans. Am I a genius? Just kidding...

Author:  iansjack [ Wed Sep 28, 2022 2:33 am ]
Post subject:  Re: Enabling compiler optimizations ruins the kernel

devc1 wrote:
For example, instead of a syscall at every malloc, the DLL will contain the allocation tables (in an isolated area), and only when it needs more pages will it make a syscall.
Does any OS use system calls for a (user-process) malloc? Other than extending the heap when it runs out of space, why would you need a system call for this?
