OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sat Sep 24, 2022 6:37 pm 

Joined: Fri Feb 11, 2022 4:55 am
Posts: 435
Location: behind the keyboard
Quote:
The short answer is : there are bugs in your code.

Microsoft says :
Stack-maintenance responsibility : Calling function pops the arguments from the stack.


In a __cdecl call to external assembly code, do I need to pop the arguments myself?


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sat Sep 24, 2022 9:00 pm 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5099
devc1 wrote:
Intermediate object files.

The linker hasn't run yet. When you link it into the final binary, the jump should point to the correct address.

devc1 wrote:
I also disable intrinsics, because enabling them and removing my functions makes some : "unresolved reference to symbol : memset" :(
Maybe disabling intrinsics is the reason ?

Perhaps you should implement memset so the reference can be resolved. I don't know anything about Microsoft's compiler, but GCC and Clang require you to provide memset, memcpy, memmove, and memcmp.
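If it helps, here is a minimal freestanding sketch of those four routines (byte-at-a-time and deliberately unoptimized; a real kernel usually swaps in faster versions later):
Code:
#include <stddef.h>

/* Naive but correct fallbacks the compiler can call when it emits
   references to the standard memory routines in freestanding code. */
void *memset(void *dst, int value, size_t count)
{
    unsigned char *d = dst;
    while (count--) *d++ = (unsigned char)value;
    return dst;
}

void *memcpy(void *dst, const void *src, size_t count)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (count--) *d++ = *s++;
    return dst;
}

void *memmove(void *dst, const void *src, size_t count)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    if (d < s)
        while (count--) *d++ = *s++;
    else
        while (count--) d[count] = s[count];   /* copy backwards when regions overlap */
    return dst;
}

int memcmp(const void *a, const void *b, size_t count)
{
    const unsigned char *x = a, *y = b;
    for (; count--; x++, y++)
        if (*x != *y) return *x - *y;
    return 0;
}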

devc1 wrote:
Microsoft says :
Stack-maintenance responsibility : Calling function pops the arguments from the stack.

In a __cdecl call to external assembly code, do I need to pop the arguments myself?

Microsoft's compiler silently ignores __cdecl when compiling x64 code. You need to follow the appropriate ABI for your compiler.
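To make that concrete, here is a sketch of what a call into external assembly looks like under the Microsoft x64 convention (AsmAddFour is a hypothetical routine implemented in a separate .asm file):
Code:
#include <stdint.h>

/* Microsoft x64: the first four integer arguments arrive in RCX, RDX, R8, R9.
   The compiler-generated caller also reserves 32 bytes of "shadow space" on
   the stack before the call and releases it afterwards, so the assembly
   callee never pops arguments itself; it just leaves its result in RAX and
   executes ret. */
extern uint64_t AsmAddFour(uint64_t a, uint64_t b, uint64_t c, uint64_t d);

uint64_t Demo(void)
{
    return AsmAddFour(1, 2, 3, 4);   /* no manual argument popping needed here */
}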

Why did you choose Microsoft's compiler, anyway? GCC works perfectly fine on Windows (with the help of either MSYS2 or WSL).


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sun Sep 25, 2022 5:24 am 

Joined: Fri Feb 11, 2022 4:55 am
Posts: 435
Location: behind the keyboard
I follow Microsoft's calling convention; it's fine.

I've chosen MSVC so I can generate PE images instead of ELF, use DLLs, and relocate easily. I've borrowed the concept of the kernel being a DLL (like Windows ntoskrnl.exe) that drivers link against using oskrnlx64.lib (my kernel's import library). The kernel detects it, reads its own export table, and sets the relative addresses of the imported symbols for the driver. In the past I had a huge table containing the addresses of the functions, and for each new function I had to manually append the name and the address to that table.

The linker does that job automatically when compilation finishes.
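A rough sketch of the export-table lookup the kernel does when it resolves a driver's imports, assuming a PE32+ image (the struct and function names are made up, but 0x3C for e_lfanew and +0x88 for the export-directory RVA are the standard PE32+ offsets):
Code:
#include <stdint.h>
#include <string.h>   /* or your own strcmp in a freestanding kernel */

typedef struct {
    uint32_t ExportFlags, TimeStamp;
    uint16_t MajorVer, MinorVer;
    uint32_t NameRva, OrdinalBase, NumFunctions, NumNames;
    uint32_t FunctionsRva, NamesRva, OrdinalsRva;
} ExportDirectory;

static void *FindExport(uint8_t *ImageBase, const char *Symbol)
{
    uint32_t PeOffset   = *(uint32_t *)(ImageBase + 0x3C);          /* e_lfanew */
    uint32_t ExportRva  = *(uint32_t *)(ImageBase + PeOffset + 0x88);
    ExportDirectory *Dir = (ExportDirectory *)(ImageBase + ExportRva);

    uint32_t *Names     = (uint32_t *)(ImageBase + Dir->NamesRva);
    uint16_t *Ordinals  = (uint16_t *)(ImageBase + Dir->OrdinalsRva);
    uint32_t *Functions = (uint32_t *)(ImageBase + Dir->FunctionsRva);

    for (uint32_t i = 0; i < Dir->NumNames; i++) {
        const char *Name = (const char *)(ImageBase + Names[i]);
        if (strcmp(Name, Symbol) == 0)
            return ImageBase + Functions[Ordinals[i]];               /* RVA -> VA */
    }
    return 0;   /* symbol not exported */
}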

Implementing memset, memcpy, etc. when intrinsics are enabled gives the following error: "intrinsic function "memset" cannot be redefined".
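(Edit: apparently MSVC's #pragma function is meant for exactly this case: it makes the compiler emit a real call for the named routine instead of the intrinsic, so a custom definition compiles even with intrinsics enabled. A sketch, untested:)
Code:
#include <stddef.h>

/* Disable the intrinsic expansion of memset in this translation unit so the
   definition below is accepted. */
#pragma function(memset)

void *memset(void *dst, int value, size_t count)
{
    unsigned char *d = dst;
    while (count--) *d++ = (unsigned char)value;
    return dst;
}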

I am too lazy to wait for WSL each time; I just never really liked the idea in the first place.

I think I should link to the CRT, which will add a few hundred kilobytes to my kernel. :(


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sun Sep 25, 2022 6:42 am 

Joined: Fri Feb 11, 2022 4:55 am
Posts: 435
Location: behind the keyboard
I have fixed the intrinsic problem by linking "msvcrt.lib" and "libvcruntime.lib". The memcpy works fine, but the triple fault still occurs (also on real hardware with SSE only) here:
Code:
if(ExtensionLevel == EXTENSION_LEVEL_SSE) {
      while(1);
      return _SSE_ComputeBezier(...);
}


The fault does not happen on QEMU without HAXM (of course); it does happen with HAXM, on the old laptop, on VMware, and on VirtualBox.

The problem is fixed by just removing:
Code:
else if(ExtensionLevel == EXTENSION_LEVEL_AVX) {
   return _AVX_ComputeBezier(beta, NumCordinates, percent);
}

This compiler is really weird. However, I don't need the AVX version of this right now, because I will implement it separately in the graphics engine library.

Edit: even calling my assembly function with an infinite loop ("jmp $") at its start triple faults. Microsoft says that general optimizations are deprecated, so I will work without them.
Thanks for your help.

It turns out the problem is with the compiler, not my code.

In the assembly output, intrinsics are working fine. I can see the "rep movsb" instead of the call to memcpy.

Devc1,


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sun Sep 25, 2022 1:13 pm 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5099
devc1 wrote:
I am too lazy to wait for WSL each time; I just never really liked the idea in the first place.

Huh? I don't understand, why would you need to wait for anything?

GCC (and Clang) can produce PE binaries by abusing the Windows target, if you insist on using PE. (ELF can have relocations too!) That's how I compile my UEFI bootloader.


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sun Sep 25, 2022 4:26 pm 

Joined: Fri Feb 11, 2022 4:55 am
Posts: 435
Location: behind the keyboard
I remember that enabling optimizations on GCC caused the same problem.

I will need to check my kernel.


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sun Sep 25, 2022 6:50 pm 

Joined: Mon Jun 05, 2006 11:00 pm
Posts: 2293
Location: USA (and Australia)
Every time something broke from changing optimization options, it ended up being something wrong with my code or linker scripts and never the compiler.

_________________
My OS is Perception.


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Sun Sep 25, 2022 11:31 pm 

Joined: Sat Mar 31, 2012 3:07 am
Posts: 4591
Location: Chichester, UK
AndrewAPrice wrote:
Every time something broke from changing optimization options, it ended up being something wrong with my code or linker scripts and never the compiler.

Agreed; a bad workman and all that.

Optimisation often reveals subtle bugs in seemingly good code. But there are code-checking tools that will also point out such errors.


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Mon Sep 26, 2022 4:44 am 

Joined: Fri Feb 11, 2022 4:55 am
Posts: 435
Location: behind the keyboard
Do you know of any good code-checking tools?


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Mon Sep 26, 2022 5:06 am 

Joined: Sat Mar 31, 2012 3:07 am
Posts: 4591
Location: Chichester, UK
Enabling all warnings is a good start. A quick Google will find other tools.


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Tue Sep 27, 2022 12:43 am 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
devc1 wrote:
I follow Microsoft's calling convention; it's fine.

I've chosen MSVC so I can generate PE images instead of ELF, use DLLs, and relocate easily. I've borrowed the concept of the kernel being a DLL (like Windows ntoskrnl.exe) that drivers link against using oskrnlx64.lib (my kernel's import library). The kernel detects it, reads its own export table, and sets the relative addresses of the imported symbols for the driver. In the past I had a huge table containing the addresses of the functions, and for each new function I had to manually append the name and the address to that table.



kernel32.dll in Windows really isn't the kernel at all. It's just an interface DLL in userspace that interfaces with the kernel. I don't think Windows supports DLLs in kernel space, but I could be wrong.

I register my kernel and user-accessible functions in a table with names and entry points. Then I create calling prototypes for them, from assembly and C, that contain an invalid instruction. This instruction is then patched to either a syscall, a call gate, or a direct call. I think this is a lot better than dynamic linking using DLLs.
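In outline, something like this (all names invented for illustration):
Code:
#include <string.h>

/* Each entry pairs a symbol name with its entry point. The stub a program
   links against starts out as an invalid instruction and gets patched at
   load time to a syscall, a call gate, or a direct call to EntryPoint. */
typedef struct {
    const char *Name;        /* e.g. "RegisterPort" */
    void       *EntryPoint;  /* address of the real implementation */
} ApiEntry;

static ApiEntry ApiTable[256];
static int      ApiCount;

void RegisterApi(const char *name, void *entry)
{
    ApiTable[ApiCount].Name = name;
    ApiTable[ApiCount].EntryPoint = entry;
    ApiCount++;
}

void *LookupApi(const char *name)    /* used by the loader while patching stubs */
{
    for (int i = 0; i < ApiCount; i++)
        if (strcmp(ApiTable[i].Name, name) == 0)
            return ApiTable[i].EntryPoint;
    return 0;                        /* unknown: leave the invalid instruction */
}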


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Tue Sep 27, 2022 1:52 pm 

Joined: Fri Feb 11, 2022 4:55 am
Posts: 435
Location: behind the keyboard
What does a DLL have to do with system calls?

A DLL (dynamic-link library) is just like a static library you link into your programs using ld, except that it does not get linked at compile time; it gets linked by the OS at runtime.

How is that beneficial? Well, this solves a lot of problems: security updates, syscall changes, and other changes in the kernel would otherwise make existing apps unusable. Instead of recompiling all the apps, I can just edit and recompile the DLL for my new OS version and every app will keep working as before.

Notice how the NT kernel changes its tables, system calls, and functions in almost every version. Without that indirection, applications would become unusable on the new version.
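In sketch form, the indirection looks like this (MyOsWriteFile, DoSyscall, and the syscall number are invented names):
Code:
#include <stdint.h>

#define SYSCALL_WRITE_V2  0x12   /* free to change between OS versions */

/* Small assembly helper that executes the actual syscall instruction. */
extern intptr_t DoSyscall(int number, intptr_t a, intptr_t b, intptr_t c);

/* Exported from the system DLL. Its signature never changes, so apps built
   against an older OS keep working once the DLL itself is updated. */
intptr_t MyOsWriteFile(intptr_t handle, const void *buffer, intptr_t length)
{
    return DoSyscall(SYSCALL_WRITE_V2, handle, (intptr_t)buffer, length);
}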


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Tue Sep 27, 2022 8:51 pm 

Joined: Wed Aug 30, 2017 8:24 am
Posts: 1593
devc1 wrote:
What does a DLL have to do with system calls?

A DLL (dynamic-link library) is just like a static library you link into your programs using ld, except that it does not get linked at compile time; it gets linked by the OS at runtime.

How is that beneficial? Well, this solves a lot of problems: security updates, syscall changes, and other changes in the kernel would otherwise make existing apps unusable. Instead of recompiling all the apps, I can just edit and recompile the DLL for my new OS version and every app will keep working as before.

Notice how the NT kernel changes its tables, system calls, and functions in almost every version. Without that indirection, applications would become unusable on the new version.
Of course, Linux achieves the same thing at the syscall level by simply only ever appending to the syscall table. No dynamic linking required. Only one architecture ever experienced an update to the syscall mechanism, and the update remains optional to this day; you just get worse performance.
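As a tiny illustration, this userspace program talks to the kernel by number through syscall(2) and keeps working across kernel updates precisely because existing numbers are never reassigned:
Code:
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    static const char msg[] = "hello via raw syscall\n";
    /* SYS_write has kept the same number since it was first assigned. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}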

_________________
Carpe diem!


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Wed Sep 28, 2022 2:24 am 

Joined: Fri Feb 11, 2022 4:55 am
Posts: 435
Location: behind the keyboard
This is sooo inefficient according to my optimization plans.

Why? Because system calls are slow.
How am I fixing it? These DLLs will contain heap management, IPC, and other features implemented in user space, and if a program decides to ruin them it will only damage that process and not the OS.

For example, instead of a syscall at every malloc, the DLL will contain the heap tables (in an isolated area) and will only make a syscall when it needs more pages.
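A very rough sketch of that fast path, where sys_alloc_pages() stands in for whatever page-allocation call the OS actually exposes (it is a hypothetical name):
Code:
#include <stddef.h>
#include <stdint.h>

extern void *sys_alloc_pages(size_t bytes);   /* hypothetical syscall wrapper */

static uint8_t *heap_cur, *heap_end;

static void *my_malloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;          /* keep 16-byte alignment */
    if (!heap_cur || (size_t)(heap_end - heap_cur) < size) {
        /* Slow path: ask the kernel for a fresh chunk. (This leaks the tail
           of the old chunk and never frees; it is only a sketch.) */
        size_t chunk = (size + 0xFFFF) & ~(size_t)0xFFFF;   /* 64 KiB granularity */
        heap_cur = sys_alloc_pages(chunk);
        heap_end = heap_cur + chunk;
    }
    void *p = heap_cur;                        /* fast path: no syscall at all */
    heap_cur += size;
    return p;
}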

Why make a syscall for every IPC send/receive request when we can just share some pages between processes and communicate without system calls?
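A rough sketch of that kind of syscall-free IPC: a single-producer/single-consumer ring buffer living in pages mapped into both processes (names and sizes made up):
Code:
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 4096   /* payload bytes in the shared page; power of two */

typedef struct {
    _Atomic uint32_t head;     /* advanced by the sender */
    _Atomic uint32_t tail;     /* advanced by the receiver */
    uint8_t data[RING_SIZE];
} SharedRing;

/* Returns 1 on success, 0 if the ring is full (only then might the sender
   fall back to a blocking syscall or a yield). */
static int ring_send(SharedRing *r, const void *msg, uint32_t len)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (RING_SIZE - (head - tail) < len)
        return 0;
    for (uint32_t i = 0; i < len; i++)
        r->data[(head + i) & (RING_SIZE - 1)] = ((const uint8_t *)msg)[i];
    atomic_store_explicit(&r->head, head + len, memory_order_release);
    return 1;
}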

These are my plans. Am I a genius? Just kidding.


 Post subject: Re: Enabling compiler optimizations ruins the kernel
PostPosted: Wed Sep 28, 2022 2:33 am 

Joined: Sat Mar 31, 2012 3:07 am
Posts: 4591
Location: Chichester, UK
devc1 wrote:
For example, instead of a syscall at every malloc, the DLL will contain the heap tables (in an isolated area) and will only make a syscall when it needs more pages.
Does any OS use system calls for a (user process) malloc? Other than to extend the heap when it runs out of space, why would you need a system call for this?

