OSDev.org

The Place to Start for Operating System Developers
Post subject: Re: Designs of microkernels
PostPosted: Fri Jan 15, 2021 1:14 pm
Author: nexos
bzt wrote:
No. First, mmap does not understand those flags, only MAP_* ones (which are ignored), so setting PROT_* would take a separate mprotect call; and second, POSIX says the MMU protection actually set for those flags is implementation-specific (they might not do anything).

According to the Linux man page, the definition is
Code:
void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);

It also states:
Quote:
The prot argument describes the desired memory protection of the mapping (and must not conflict with the open mode of the file). It is either PROT_NONE or the bitwise OR of one or more of the following flags:

PROT_EXEC
Pages may be executed.

PROT_READ
Pages may be read.

PROT_WRITE
Pages may be written.

PROT_NONE
Pages may not be accessed.
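
For illustration, a minimal sketch of the usual W^X dance with these flags on a POSIX system (the helper name is hypothetical, error handling abbreviated): map the buffer writable, emit code into it, then flip it to executable with the separate mprotect call bzt mentions.
Code:
#include <string.h>
#include <sys/mman.h>

/* Hypothetical helper: copy `len` bytes of generated machine code
   at `code` into a fresh mapping and make it executable. */
void *emit_code(const void *code, size_t len)
{
    /* 1. Map anonymous pages writable, but not executable. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;

    /* 2. Emit the generated code. */
    memcpy(p, code, len);

    /* 3. Drop write, gain execute: a separate mprotect call. */
    if (mprotect(p, len, PROT_READ | PROT_EXEC) != 0)
        return NULL;
    return p;
}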

_________________
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg


Post subject: Re: Designs of microkernels
PostPosted: Fri Jan 15, 2021 3:09 pm
Author: Octocontrabass
bzt wrote:
Because it is highly inefficient?

A monolithic kernel is more efficient than a microkernel, but we still have microkernels.

bzt wrote:
How would that look in practice at all? Web browsers forking two processes per tab (to have two separate address spaces for each tab, one for running js and one for the compiler)?

It's not very different from what they already do. (With two tabs open right now, my browser has seven processes.)


Post subject: Re: Designs of microkernels
PostPosted: Sat Jan 16, 2021 12:40 pm
Author: bzt
nexos wrote:
According to the Linux man page, the definition is
You misunderstood. We are talking about two address spaces mapping the same memory differently, and the relevant part of the man page for that says
Code:
The flags argument determines whether updates to the mapping are visible to other processes mapping the same region
...
MAP_DENYWRITE
    This flag is ignored. (Long ago, it signaled that attempts to write to the underlying file should fail with ETXTBUSY. But this was a source of denial-of-service attacks.)
MAP_EXECUTABLE
    This flag is ignored.
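
(For reference: on Linux, two differently-protected views of the same pages are usually built not from MAP_*/PROT_* flags alone but from a shared file mapping. A minimal sketch, assuming memfd_create(2) is available; both views live in one process here for brevity, though with a shared fd the second view could equally be in another process.)
Code:
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* One set of physical pages... */
    int fd = memfd_create("jit-pages", 0);
    if (fd < 0 || ftruncate(fd, 4096) < 0)
        return 1;

    /* ...mapped twice: a writable view for the compiler side... */
    char *w = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    /* ...and an executable, non-writable view for running the result. */
    char *x = mmap(NULL, 4096, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);

    return (w == MAP_FAILED || x == MAP_FAILED) ? 1 : 0;
}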


Octocontrabass wrote:
It's not very different from what they already do. (With two tabs open right now, my browser has seven processes.)
And that is highly inefficient and wasteful already! Nobody ever solved anything by making things worse :-) The smaller overhead of an interpreter would be a far better choice IMHO, and then you could have one process per tab, and the browser probably wouldn't need 1G of RAM to display a plain, simple 1k HTML file.

Ok, let's have another typical example: a game engine which loads objects on the fly and uses JIT-compiled event handlers in them. There the peak overhead of forking, and of somehow feeding the information about what to map as executable back from the compiler process, is just too much and unaffordable. An interpreter might be slower in general, but it has a smaller, flatter, more constant overhead (therefore calculable, with no peaks). Minetest is a good example: you can compile it with LuaJIT as well as with interpreted Lua, yet the engine ships with interpreted Lua by default.

Finally, although I hadn't thought of this until now, you can write a portable interpreter, but it is not possible to write a portable JIT compiler. This can be a very good reason if your project is supposed to be multiplatform (however, it doesn't really matter for a kernel, which has non-portable Assembly and platform-specific parts anyway).

Cheers,
bzt


Post subject: Re: Designs of microkernels
PostPosted: Sun Jan 17, 2021 3:38 am
Author: moonchild
bzt wrote:
moonchild wrote:
If you implement a safe language with the JIT, you can't have buffer overflows :)
As I've said, possible BOFs unrelated to JIT. BTW I haven't seen any WASM VM that verifies the bytecode and guarantees to generate 100% bullet-proof native code, and even wasmjit runs the compiled code in ring 0 (yeah, what could go wrong, right?). Do you know of any WASM JIT compiler that actually verifies the code and places bounds checks in the generated code? Please let me know if there's any. (I'm not sarcastic, I'm genuinely curious)


I think you're conflating language with implementation.

A programming language is a set of rules that describe the behaviour of programs. These are the semantics. (There are also usually syntactic rules, describing how programs are constructed; but we don't care about those right now.) These rules are generally contained in the specification, if there is one. According to the rules of the c programming language, c programs can have buffer overflows[1]. On the other hand, the rules of java state that if you access outside the bounds of an array, an exception will be thrown and you will not be able to read/write bad memory.

IOW c programs can have buffer overflows, and java programs cannot.

Assembly is also a programming language, and its semantics are described in the intel/amd SDM. A compiler is a tool which understands the rules of two different languages and creates a mapping from one to the other, e.g. from c to assembly. This means: according to the respective rules of c and assembly, the program as written in assembly should have the same semantics as the program written in c[2]. This has nothing to do with the fact that it is actually possible to run the assembly code; it has only to do with the rules that describe the behaviour of c and assembly.

So, there are two potential problems:

  1. Buffer overflows may be part of the semantics of the source language. So, an arbitrary piece of code written in the source language might contain buffer overflows and there is nothing we can do about that. Example: C.
  2. Buffer overflows may be part of the semantics of the destination language. Prima facie, this is not necessarily a problem. If the source language doesn't have buffer overflows, then the compiler should be constrained to avoid producing programs which contain buffer overflows. However, if the compiler has a bug, it may produce output which does not correspond to the input, and which does have buffer overflows. Example: V8 (js -> assembly). (This is usually only a problem if you need to run untrusted code. I don't think there are many examples of security bugs introduced into trusted code by compiler bugs, though the recent gcc-memcpy fiasco does come to mind.)

Problem 1 is solved by using a language which is proven to be safe, like java. Problem 2 is solved by using a compiler which has been proven to have no bugs. Note that these problems are actually orthogonal; for example, compcert is a formally verified compiler from c to assembly, and if your code contains buffer overflows, its output will too.

Wasm the language is safe for running in the kernel. (It doesn't guarantee bounds checking for array accesses, so it can corrupt its own data; but it's only allowed to read/write from specific regions, so it's unable to corrupt its own generated code.) However, WASM VMs can still have bugs in them. The question is not whether the WASM compiler verifies the bytecode (it does, or at least tries to); the question is whether it has bugs. (Spoiler: it does.)

Problem 2 applies to any type of programming language implementation.[3] It even applies to CPUs; there have been some pretty bad CPU bugs. The main problem is not executable memory, it's incorrect semantics applied to untrusted code. You would have (most of) the same problems with a compiler (and, thinking about it more, I think AOT compilation with a signed cache might actually make more sense than JIT).

Getting back to the point: I don't know of anything specifically for wasm, but there are formally verified compilers, yes. I already mentioned compcert, but maybe more interesting is cakeml, a formally verified compiler for ml, which is a safe language. And the coq jit framework I linked earlier can probably implement wasm, which gives a path to a verified wasm jit; though again, I think aot might ultimately be a better path for a project like this one.

1. This is not technically true. If you access outside the bounds of an array, you get undefined behaviour. Which means it is possible to implement bounds checking for c (see, for instance, asan). This doesn't affect the broader point, however; and most code is not compiled with asan.

2. Wherever the source language has undefined behaviour, the compiler has the option of turning that into defined behaviour in the target language. (If the target language has no undefined behaviour, then it will have to choose some behaviour.) IOW, undefined behaviour in the source does not have to stay undefined in the destination.

3. Though, in the specific case where your userspace language is implemented by an interpreter which is itself implemented in a safe compiled language, the odds of that kind of memory bug are pretty much nil, because the interpreter is trusted code, and unlikely to be hit very badly by compiler bugs. However, most interpreters are quite slow...


Post subject: Re: Designs of microkernels
PostPosted: Sun Jan 17, 2021 6:22 pm
Author: bzt
moonchild wrote:
I think you're conflating language with implementation.
No I'm not, I'm talking about something else. I'm saying that in order to allow writable and executable memory for JIT compilers, you might accidentally disable a security feature, which could then be taken advantage of in a totally unrelated part of your system by a malicious attacker. This is totally unrelated to language design and compiler/interpreter implementation; it's about not being careful enough, not restricting the JIT compiler's access correctly. It is a LOT harder to mess up with an interpreter, where you never execute the data (still possible, just a lot harder to mess up).

moonchild wrote:
IOW c programs can have buffer overflows, and java programs cannot.
You couldn't be more wrong about this... Think again! FYI CVE-2020-2803, just to mention the latest from last year (see also CVE-2016-0264; Oracle JRE also had some in 2013 and in 2016 too). There's plenty of room for buffer overflows in any VM or JIT compiler (even in those which verify the bytecode). And the fact that almost all of those BOFs made arbitrary code execution possible means that java is most certainly not handling memory access rights correctly.

Cheers,
bzt


Post subject: Re: Designs of microkernels
PostPosted: Sun Jan 17, 2021 7:23 pm
Author: moonchild
bzt wrote:
moonchild wrote:
I think you're conflating language with implementation.
No I'm not, I'm talking about something else. I'm saying that in order to allow writable and executable memory for JIT compilers, you might accidentally disable a security feature, which could then be taken advantage of in a totally unrelated part of your system by a malicious attacker. This is totally unrelated to language design and compiler/interpreter implementation; it's about not being careful enough, not restricting the JIT compiler's access correctly. It is a LOT harder to mess up with an interpreter, where you never execute the data (still possible, just a lot harder to mess up).


I understand that executable memory is a potential attack vector. However I stand by what I said: I don't think that's the most interesting potential attack vector; as long as you have to compile untrusted code, the fundamental possibility for miscompilation remains regardless of the state of your memory protections at a given time. Interpreters are easier to secure, yes; but usually a nonstarter for performance reasons.

bzt wrote:
moonchild wrote:
IOW c programs can have buffer overflows, and java programs cannot.
You couldn't be more wrong about this... Think again! FYI CVE-2020-2803, just to mention the latest from last year (see also CVE-2016-0264; Oracle JRE also had some in 2013 and in 2016 too). There's plenty of room for buffer overflows in any VM or JIT compiler (even in those which verify the bytecode).


I don't think you read what I said. Please reread it carefully, otherwise we aren't going to get anywhere.

The java language can't have buffer overflows. An implementation of the language absolutely can, however. There were bugs in the JVM. (In that case, some of the bugs also had to do with misuse of sun.misc.unsafe, which is not actually part of the language but an implementation detail of the standard library.) A formally verified language implementation is proven to have no bugs and thus no buffer overflows.


Post subject: Re: Designs of microkernels
PostPosted: Sun Jan 17, 2021 8:13 pm
Author: eekee
@AndrewAPrice: Lisp machines will never die! :mrgreen: I'm saying that because it was Lisp and Lisp machines which developed and promoted garbage collection years ago. Nothing else had it in quite the same way, I don't think. But... with your calculator example, you're talking about something which can easily be handled with manual memory allocation. When a program exits, free all its memory.

Inferno OS is a single-language OS with a JITed bytecode VM, no MMU, no swap. In its day, it had much better "write once, run anywhere" abilities than Java. It just wasn't promoted so well. It's not a microkernel though.

_________________
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie


Post subject: Re: Designs of microkernels
PostPosted: Sun Jan 17, 2021 10:48 pm
Author: nullplan
moonchild wrote:
Buffer overflows may be part of the semantics of the source language. So, an arbitrary piece of code written in the source language might contain buffer overflows and there is nothing we can do about that. Example: C.
Buffer overflows in C are, strictly speaking, undefined behavior. An implementation of the language is free to do whatever it wants in case such a thing occurs and still be in spec. In fact, an implementation could have buffer overflow checks and still be conforming. This is what address sanitizer does, and before it, libmudflap.

So it is wrong to say that buffer overflows are part of the semantics of C, since they are not, in fact, in spec.
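
A tiny illustration (hypothetical snippet; the volatile just keeps the dead store from being optimized away):
Code:
/* oob.c -- an out-of-bounds write: undefined behaviour in C. */
int main(void)
{
    volatile int buf[4];
    buf[4] = 42;    /* writes one element past the end of buf */
    return 0;
}
Compiled plainly this may silently appear to "work"; compiled with gcc -fsanitize=address, the same program aborts with a stack-buffer-overflow report. Both behaviours are conforming, which is the point.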

_________________
Carpe diem!


Post subject: Re: Designs of microkernels
PostPosted: Mon Jan 18, 2021 2:55 am
Author: moonchild
nullplan wrote:
Buffer overflows in C are, strictly speaking, undefined behavior. An implementation of the language is free to do whatever it wants in case such a thing occurs and still be in spec. In fact, an implementation could have buffer overflow checks and still be conforming. This is what address sanitizer does, and before it, libmudflap.

So it is wrong to say that buffer overflows are part of the semantics of C, since they are not, in fact, in spec.


I mentioned this in my footnote :P

moonchild wrote:
This is not technically true. If you access outside the bounds of an array, you get undefined behaviour. Which means it is possible to implement bounds checking for c (see, for instance, asan). This doesn't affect the broader point, however; and most code is not compiled with asan.


Post subject: Re: Designs of microkernels
PostPosted: Mon Jan 18, 2021 3:01 am
Author: moonchild
eekee wrote:
@AndrewAPrice: Lisp machines will never die! :mrgreen: I'm saying that because it was Lisp and Lisp machines which developed and promoted garbage collection years ago. Nothing else had it in quite the same way, I don't think. But... with your calculator example, you're talking about something which can easily be handled with manual memory allocation. When a program exits, free all its memory.


The problem is, what is ‘its’ memory? Which memory ‘belongs’ to a process, when memory can be shared? Determining that isn't simple if you want to allow cheap IPC.

(The answer is, you do some sort of GC. And that's not (very) hard, but you still have to do it.)
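
(A sketch of the simplest such scheme: reference-counting the shared regions. Types and names here are hypothetical; plain refcounting suffices because memory regions, unlike object graphs, can't form cycles.)
Code:
#include <stdlib.h>

/* Hypothetical kernel-side bookkeeping for one shared region. */
struct region {
    void  *pages;   /* backing memory, allocated elsewhere */
    size_t len;
    int    refs;    /* number of address spaces mapping this region */
};

void region_map(struct region *r)
{
    r->refs++;      /* another address space now "owns" it too */
}

void region_unmap(struct region *r)
{
    /* Freed only when the last mapping goes away: there is
       no single owning process, which is the point above. */
    if (--r->refs == 0) {
        free(r->pages);
        free(r);
    }
}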


Post subject: Re: Designs of microkernels
PostPosted: Mon Jan 18, 2021 8:52 am
Author: bzt
moonchild wrote:
I understand that executable memory is a potential attack vector. However I stand by what I said: I don't think that's the most interesting potential attack vector;
Then you're thinking about it wrong. Arbitrary code execution is the most serious attack vector, because it could lead to malicious activities. You might crash a computer with any bug, but in order to steal sensitive data or spread a virus, you'd need arbitrary code execution. Privilege escalation and sandbox escaping have absolutely no use and are totally harmless without arbitrary code execution.

moonchild wrote:
as long as you have to compile untrusted code, the fundamental possibility for miscompilation remains regardless of the state of your memory protections at a given time.
You still don't get it. The issue is not with the verified JIT-compiled code at all. Let me ask you this: can you guarantee that only code compiled from trusted sources will be executed on your machine? Short answer: you can't. That's why it is the job of the kernel to limit access to resources, instead of trusting that user tasks will behave correctly. M$ had a research operating system, Singularity, built around the idea of using only verified code and nothing else for security, but it failed and the project died in 2008.

Simply put: it doesn't matter how well-crafted and secure your JIT implementation is, or that it only generates trusted and verified code, if your kernel otherwise allows executing arbitrary data loaded from a file.

moonchild wrote:
Interpreters are easier to secure, yes; but usually a nonstarter for performance reasons.
I disagree. A properly written interpreter can be a lot faster than a badly written JIT. For example, the bytecode compiler of BugOS basically generated a series of native function calls, one for each instruction, which ran pretty slowly (but worked). On the other hand, are you familiar with how continuation-passing style threaded code was implemented in the '70s Forth interpreters? Do you know why the register-based Dalvik runs much faster than the stack-based JVM (even though they are both bytecode interpreters)? Have you seen the performance measurements of wasm3, using a continuation-passing style, register-based model?

Plus, performance is not everything. Sometimes stability is a lot more important. For example, if I had to pick software for a space probe, I'd surely prefer stability over performance, without hesitation, and I wouldn't regret it. A little reading: have you ever wondered why the ISS was equipped with 80386s instead of the much, much faster Pentium II (the top-of-the-line processor available at the time, in 1998)?

moonchild wrote:
moonchild wrote:
IOW c programs can have buffer overflows, and java programs cannot.
...
I don't think you read what I said. Please reread it carefully, otherwise we aren't going to get anywhere.
Yes, I've read it carefully: you explicitly mentioned "java programs", and not the language.

moonchild wrote:
An implementation of the language absolutely can, however. There were bugs in the JVM.
CVE-2020-2803 wasn't a bug in the JVM, it was a bug in the built-in java ByteBuffer class (which is part of the language btw). It's like having a bug in printf; neither the C code nor the compiler nor the ELF loader nor the ELF interpreter could be held responsible for that.

moonchild wrote:
A formally verified language implementation is proven to have no bugs and thus no buffer overflows.
Ah, okay, now I see where you are mistaken. A formally verified language implementation and proven code do not guarantee no bugs. For that you'd also need a formally verified and proven compiler, libraries, and CPU (or VM) too. Have you ever heard this quote?
Code:
Beware of bugs in the above code; I have only proved it correct, not tried it. /Donald Knuth/
Proving that a certain piece of code is algorithmically correct and proving it bug-free are two totally different things.

Cheers,
bzt


Post subject: Re: Designs of microkernels
PostPosted: Mon Jan 18, 2021 9:53 pm
Author: moonchild
bzt wrote:
moonchild wrote:
I understand that executable memory is a potential attack vector. However I stand by what I said: I don't think that's the most interesting potential attack vector;
Then you're thinking about it wrong. Arbitrary code execution is the most serious attack vector, because it could lead to malicious activities. You might crash a computer with any bug, but in order to steal sensitive data or spread a virus, you'd need arbitrary code execution. Privilege escalation and sandbox escaping have absolutely no use and are totally harmless without arbitrary code execution.


So, I was operating under the assumption that we are allowing untrusted code to run under the JIT. I think that's an interesting use case and should be supported. You need some form of sandboxing, and VMs are very heavyweight. In an SASOS you don't have any other options, as you can't rely on CPU memory protections.

Which means that all an attacker needs in order to get arbitrary code execution is a codegen bug. Not a buffer overflow (though that will do just as well).


bzt wrote:
moonchild wrote:
Interpreters are easier to secure, yes; but usually a nonstarter for performance reasons.
I disagree. A properly written interpreter can be a lot faster than a badly written JIT.

An APL interpreter will beat most other implementations of most other programming languages. Beyond that, for other PL paradigms: I find that difficult to believe. The overhead of dispatch is massive. Look, for instance, at this series, which walks through building a toy JIT for brainfuck. Look in particular at this benchmark:

[Image: benchmark chart from the series, comparing brainfuck implementations by runtime: the optimized interpreter (bright red) against the JIT variants (greens)]

The optimized interpreter (bright red) is handily beaten by the most naive JIT (dark green). And this for brainfuck, a language where you need optimizations to turn INC;INC;INC;INC;INC;INC;INC;INC;INC;INC into ADD 10 (for example).

bzt wrote:
On the other hand, are you familiar with how continuation-passing style threaded code was implemented in the '70s Forth interpreters?


Yes. (Though that technique almost certainly has worse performance than gcc-computed-goto-ish strategies, because it has bad locality.)
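
(For readers following along, a minimal sketch of that computed-goto dispatch style; labels-as-values is a GNU C extension, so this assumes gcc or clang:)
Code:
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

int main(void)
{
    /* Program: push 2, push 3, add, print, halt. */
    static const int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    /* Dispatch table: one label address per opcode (labels-as-values). */
    static void *dispatch[] = { &&op_push, &&op_add, &&op_print, &&op_halt };
    int stack[16];
    int *sp = stack;
    const int *ip = code;

    goto *dispatch[*ip++];          /* enter the dispatch loop */

op_push:
    *sp++ = *ip++;                  /* push immediate operand */
    goto *dispatch[*ip++];
op_add:
    sp--;
    sp[-1] += sp[0];                /* pop two, push their sum */
    goto *dispatch[*ip++];
op_print:
    printf("%d\n", sp[-1]);         /* prints 5 for this program */
    goto *dispatch[*ip++];
op_halt:
    return 0;
}
Each handler jumps straight to the next handler, so there is no central switch; the dispatch overhead moonchild mentions shrinks, and branch prediction and locality improve.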

bzt wrote:
Do you know why the register-based Dalvik runs much faster than the stack-based JVM (even though they are both bytecode interpreters)?


Dalvik and hotspot (which I assume is what you mean by 'the stack-based JVM') are both JIT compilers.

I don't find that link particularly credible. The explanation of stack machines is wrong, and it claims that only register VMs can do CSE (???). Meanwhile wikipedia says dalvik is 2-3x slower than hotspot.

bzt wrote:
Have you seen the performance measurements of wasm3, using a continuation-passing style, register-based model?


I see that it proudly claims to be 4-5x slower than JITs :)

bzt wrote:
Plus, performance is not everything. Sometimes stability is a lot more important. For example, if I had to pick software for a space probe, I'd surely prefer stability over performance, without hesitation, and I wouldn't regret it.


I 100% agree. 100%. Slow interpreters are awesome.

However, you probably want to have the possibility of high-performance, native-efficiency (or close enough to it) code on your OS. Which precludes using solely interpreters.

bzt wrote:
Yes, I've read it carefully: you explicitly mentioned "java programs", and not the language.

Meaning, programs written in java...

bzt wrote:
moonchild wrote:
An implementation of the language absolutely can, however. There were bugs in the JVM.
CVE-2020-2803 wasn't a bug in the JVM, it was a bug in the built-in java ByteBuffer class (which is part of the language btw). It's like having a bug in printf; neither the C code nor the compiler nor the ELF loader nor the ELF interpreter could be held responsible for that.

You are taking my words out of context. I addressed this:

moonchild wrote:
some of the bugs also had to do with misuse of sun.misc.unsafe, which is not actually part of the language but an implementation detail of the standard library


sun.misc.unsafe is a set of explicitly unsafe libraries. They are not supported and not part of the java language proper.

bzt wrote:
A formally verified language implementation and proven code do not guarantee no bugs. For that you'd also need a formally verified and proven compiler, libraries, and CPU (or VM) too


The compiler is the language implementation. The libraries, yes, of course you would have to formally verify. But that seems obvious: any libraries you use as part of your implementation are part of your implementation.

The CPU: yes, the CPU can have bugs. Formal verification is used extensively by CPU designers, which is part of the reason that CPUs tend to have far fewer bugs than most other types of software. But the CPU is an interpreter which expresses asm's semantics in terms of physics, and our physical models are imperfect. However, a CPU bug would be likely to affect non-compiler code just as badly as compiler code, so I don't think it's very relevant here.


Post subject: Re: Designs of microkernels
PostPosted: Tue Jan 19, 2021 4:26 am
Author: bzt
After your last post, sorry, but I must ask (without any offense): are you sure you have the required knowledge?

moonchild wrote:
So, I was operating under the assumption that we are allowing untrusted code to run under the JIT.
And I keep telling you to forget about attacks through the JIT itself. I'm talking about accidentally opening up an attack vector by granting permissions to the JIT that are then taken advantage of somewhere ELSE (unrelated to JIT, outside of JIT, not from JIT-generated code).

moonchild wrote:
The optimized interpreter (bright red) is handily beaten by the most naive JIT (dark green).
No wonder. Its allegedly "optimized" interpreter doesn't use any of the optimizations I've mentioned, neither continuation-passing nor the register-based model. It's a very badly written, admittedly naive interpreter; seriously, what did you expect?

moonchild wrote:
I see that it proudly claims to be 4-5x slower than JITs :)
You only see what you want to see. Now see this:
Code:
∼ 8x faster than other known wasm interpreters
Should your example apply the same optimizations to "optinterp3", that "optinterp-cp-rb" would be on par with (or probably even faster than) "optasmjit".
And, just for the record, calling this monstrosity "optasmjit" is just ridiculous. Check out one of these or one of these for example; even the worst implementation there is far better. BF is a very well studied topic, and the fact that the author is not familiar with the other research is a shame. But if BF performance is your thing, then this is how serious people do it, with a hardware-backed interpreter achieving 50,000,000,000 ips.

moonchild wrote:
I don't find that link particularly credible. The explanation of stack machines is wrong
Or maybe you don't understand the concept and your knowledge of the topic is lacking. I can assure you there's nothing wrong with the explanation; all statements are sound. You definitely should study the topic a bit deeper!

moonchild wrote:
The compiler is the language implementation.
For an Assembler, maybe. But even freestanding C has more requirements than just the compiler. Not to mention "higher" languages; have you ever heard of a runtime (like rt0 and RTTI)? Sorry to ask you this again, but are you sure you have the required knowledge?

Cheers,
bzt


Post subject: Re: Designs of microkernels
PostPosted: Tue Jan 19, 2021 5:43 am
Author: Doctor5555
I am also a little confused by the explanation of stack-based VMs in that link - particularly by the 4 lines of instructions to execute ADD. It would normally be implemented with a single bytecode instruction that implicitly pops and pushes the values. As an explanation of the algorithm, it is OK, but it does slightly misrepresent what an actual stack-based VM usually looks like.


Post subject: Re: Designs of microkernels
PostPosted: Tue Jan 19, 2021 1:24 pm
Author: bzt
Doctor5555 wrote:
I am also a little confused by the explanation of stack-based VMs in that link - particularly by the 4 lines of instructions to execute ADD.
It is pretty straightforward. You store the operands, you use the operator, and you place the result. Maybe the third step should be just "ADD", without the arguments noted (they are clearly there to help understanding, but rather confused you). The image is absolutely clear, though.

Doctor5555 wrote:
It would normally be implemented with a single bytecode instruction that implicitly pops and pushes the values.
Don't confuse the algorithm with implementation details and optimization! Those 4 steps mean you must conceptually POP the operands and conceptually PUSH the result back. Of course you could optimize those 4 steps in an implementation for a particular CPU, but that's not the point; the point is, you must use the stack for both input and output. Read about Reverse Polish notation, or maybe implement an infix to postfix converter and see it in action, and you'll understand.
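
(A tiny RPN evaluator in C makes that concrete; a hypothetical sketch handling single digits and '+' only. Every operator pops its operands from the stack and pushes its result back:)
Code:
#include <stdio.h>

/* Evaluate a postfix (RPN) expression of single digits and '+',
   e.g. "23+4+" => 9. The stack carries all inputs and outputs. */
int rpn_eval(const char *expr)
{
    int stack[64], sp = 0;
    for (; *expr; expr++) {
        if (*expr >= '0' && *expr <= '9') {
            stack[sp++] = *expr - '0';   /* PUSH operand */
        } else if (*expr == '+') {
            int b = stack[--sp];         /* POP  */
            int a = stack[--sp];         /* POP  */
            stack[sp++] = a + b;         /* PUSH result */
        }
    }
    return stack[0];
}

int main(void)
{
    printf("%d\n", rpn_eval("23+4+"));   /* prints 9 */
    return 0;
}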

Here's a pseudo-Assembly example for you:
Code:
; IN: two operands on stack
; OUT: one result on stack
vm_add_v1:
    cpu_pop eax
    cpu_pop ebx
    cpu_add eax, ebx
    cpu_push eax
    ret

; IN: two operands on stack
; OUT: one result on stack
vm_add_v2:
    cpu_pop eax
    cpu_add eax, [esp]
    ret
Of course the second version didn't actually pop two times, and it didn't push at all, yet the inputs and outputs are exactly the same for both functions, so from the VM instruction's point of view they are 100% identical. But the second optimization is only possible if cpu_add supports direct memory addressing, which is typically only available on CISC processors. On RISC processors, which tend to have separate LOAD/STORE instructions, you MUST use cpu_push and cpu_pop to implement vm_add.

Doctor5555 wrote:
As an explanation of the algorithm, it is OK, but it does slightly misrepresent what an actual stack-based VM usually looks like.
Can you elaborate? Misrepresent how? Use anything other than a stack and you'll get a register-based machine before you know it. For example, see Lua's VM; here's a good read.

Cheers,
bzt

