Hi,
LtG wrote:
Brendan wrote:
Intel create a CPU in 5 years time that has a minor flaw in the way it implements SYSCALL and you want to avoid that flaw
Intel creating a CPU with a minor flaw in CALL is about as likely.
The CALL instruction is relatively simple. SYSCALL is more complex and has multiple edge cases, where it might be fine most of the time, but on rare occasions several conditions occur at the same time and you end up with something like
this or
this.
LtG wrote:
Difference is SYSCALL can be disabled and then you can presumably (haven't tested) catch the #UD.
Yes, but that would give far worse performance.
LtG wrote:
But is it really good practice to prepare for future bugs?
For software that hopes to be around for a while, it's good practice to design it with some flexibility, so that if things change in future you have a way to deal with those changes (one that doesn't break backward compatibility). It doesn't matter whether things change because you want to add new features to your kernel, because you discover a way to optimise something, because you need to fix a bug in your software, because Intel added something better to the CPU, or because you need to work around a bug in a CPU.
LtG wrote:
I thought about replying to the rest but it just seemed like splitting hairs. So instead I thought I'd mention that I'm thinking about using byte-code ultimately, and as such this is (for me) mostly a moot point. Which brings me to my actual point: it really depends on the circumstances, and I wouldn't worry about these low-level details for now at all (and personally haven't, code-wise, though I've thought about them), so use whichever you prefer and is easier to implement. With a DLL you might save minimal re-compilation time (a lot if you keep changing your syscall lib, but I'd expect that to be rare even during development).
Quote:
forget about normal users because "compile before use" is too painful/annoying/fragile for languages like C).
Nonsense, it's a tooling problem, nothing more. Since it's your OS you can fix that (not that it's OS-specific to begin with); if your OS toolset works perfectly then everyone else has to follow suit (or cease to exist).
In theory it's possible to drive halfway across a bridge, but in practice nobody ever does - they either don't drive across the bridge at all, or they drive all the way to the other side.
In theory it's possible to create tools that fix all the "compile before use is too painful/annoying/fragile for C" problems, but in practice nobody ever will - they either won't be willing to write their own tools at all, or they'll go all the way and abandon C itself (e.g. create their own language).
LtG wrote:
Quote:
I'm not sure that C can be used like that (pre-processing can ruin portability before the compiler does anything). Of course there are other languages that don't have that problem (Java, C#, ...).
Language doesn't have that kind of effect; it's a compiler/tooling problem. A language is just a language and doesn't have an impact on that.
No; "pre-processing to work around portability problems" is a language problem with multiple causes (implementation defined behaviours, lack of standardisation for things like networking and GUI, poor primitive types, multiple versions of the language itself, etc). You can invent a "C like" language that doesn't need a pre-processor, but that language will not be C anymore.
Cheers,
Brendan