OSDev.org

The Place to Start for Operating System Developers
 Post subject: What do you think about managed code and OSes written in it?
PostPosted: Tue Jan 20, 2015 3:46 am 

Joined: Thu Mar 27, 2014 3:57 am
Posts: 568
Location: Moscow, Russia
I see some pros and cons of it:
1) Possibly slow performance because of opcode interpretation, though JIT compilation and the absence of context switching (on some architectures) may compensate for it.
2) Security and stability.
3) On architectures without contexts (where the VM is just a layer between hardware and software with its own protection system), stability can be a problem: one virtual process (i.e. one executed/interpreted by the VM) can crash the entire system if a bug is present.

Also, what would make performance faster? A small instruction set (faster interpretation), or a complex one (more operations per opcode, and therefore better performance for specific actions, especially if the VM has a specialised purpose)?
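
To illustrate the interpretation cost I'm asking about, here is a minimal sketch in C (a toy, invented instruction set; real VMs are far more elaborate). Every opcode pays for one trip through the dispatch switch, which is exactly the cost a JIT removes by emitting native code once:

Code:
/* Toy stack VM: the switch dispatch is paid on EVERY opcode. */
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

static int64_t run(const uint8_t *code)
{
    int64_t stack[64];
    int sp = 0;
    size_t pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = (int8_t)code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

int main(void)
{
    /* Computes 2 + 3 * 4; a fused "MULADD" opcode would need one
       dispatch where these three simple opcodes need three. */
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_PUSH, 4,
                             OP_MUL, OP_ADD, OP_HALT };
    printf("%lld\n", (long long)run(prog)); /* prints 14 */
    return 0;
}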


I'm working on a general-purpose operating system targeted at 64-bit architectures. Currently I'm implementing the toolchain: the object file tool (partly done), the assembler, the linker and the compiler.

_________________
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
- Alan Kay


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Tue Jan 20, 2015 4:23 am 

Joined: Sun Sep 19, 2010 10:05 pm
Posts: 1074
Pros:
  • Hardware is abstracted away behind simplified interfaces
  • Applications are simpler to design and code, and are generally smaller and more stable
  • Applications can run on multiple platforms (without having to be pre-compiled on each platform)
  • Applications have limited access to hardware and system resources, which limits malware potential

Cons:
  • Applications normally cannot take advantage of platform-specific hardware
  • Applications are limited to features supported by the VM
  • VM design must anticipate various use cases for virtually any future application

_________________
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Tue Jan 20, 2015 4:38 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Roman wrote:
I see some pros and cons of it:
1) Possibly slow performance because of opcode interpretation, though JIT compilation and the absence of context switching (on some architectures) may compensate for it.
2) Security and stability.
3) On architectures without contexts (where the VM is just a layer between hardware and software with its own protection system), stability can be a problem: one virtual process (i.e. one executed/interpreted by the VM) can crash the entire system if a bug is present.


Essentially, for normal OSs the hardware provides security/isolation between processes, and for managed OSs this hardware support is ignored and security/isolation is provided by software (e.g. compiler and run-time) instead. So...

Roman wrote:
Also, what would make performance faster?


In general, using the existing "hardware accelerated" security/isolation (instead of slower security/isolation done in software) would make it faster.

Also note that software-based security/isolation can't defend against lower-level problems, including compiler bugs, RAM faults, hardware-based injection techniques, etc. What this mostly means is that hardware provides superior security/isolation.

Roman wrote:
A small instruction set (faster interpretation), or a complex one (more operations per opcode, and therefore better performance for specific actions, especially if the VM has a specialised purpose)?


This is a mostly unrelated issue. In general (for modern systems) cache and memory access times tend to dominate performance, and this includes memory accesses caused by instruction fetch. For a smaller set of simple instructions ("RISC") you need a larger number of instructions to get anything done, which means more memory accesses caused by instruction fetch, which means performance is worse.

However; if you're forced to have a layer of bloat in the middle (the VM) then the overhead of that layer of bloat can be more significant than "virtual instruction fetch" costs; so a simpler instruction set with worse instruction fetch but less virtualisation overhead may be significantly better than a complex instruction set with better instruction fetch and higher virtualisation overhead.
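
To make that trade-off concrete, here's a hedged sketch (toy opcode names invented for this post): the same computation needs three dispatches on a "simple" virtual ISA and one on a "fused" ISA, at the price of a fatter VM:

Code:
/* One fused "complex" opcode vs. three "simple" ones; in an
   interpreter, each opcode costs one fetch plus one dispatch. */
#include <stdio.h>

enum { S_LOAD_A, S_LOAD_B, S_ADD, S_HALT,  /* simple ISA */
       F_ADD_AB, F_HALT };                 /* fused ISA  */

int main(void)
{
    const int a = 2, b = 3;
    const int simple[] = { S_LOAD_A, S_LOAD_B, S_ADD, S_HALT };
    const int fused[]  = { F_ADD_AB, F_HALT };
    int acc = 0, tmp = 0, n;

    n = 0;
    for (const int *pc = simple; *pc != S_HALT; pc++, n++) {
        if      (*pc == S_LOAD_A) tmp = a;
        else if (*pc == S_LOAD_B) acc = b;
        else if (*pc == S_ADD)    acc += tmp;
    }
    printf("simple ISA: result %d in %d dispatches\n", acc, n);

    n = 0;
    for (const int *pc = fused; *pc != F_HALT; pc++, n++)
        if (*pc == F_ADD_AB) acc = a + b;  /* does all three steps */
    printf("fused ISA:  result %d in %d dispatches\n", acc, n);
    return 0;
}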


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Tue Jan 20, 2015 6:29 am 

Joined: Mon Jun 16, 2014 5:59 am
Posts: 543
Location: Shahpur, Layyah, Pakistan
Why not both native code and managed code? One for usual applications and the other for performance-critical applications such as 3D games.


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Wed Jan 21, 2015 6:25 am 

Joined: Tue May 13, 2014 3:02 am
Posts: 280
Location: Private, UK
muazzam wrote:
Why not both native code and managed code? One for usual applications and the other for performance-critical applications such as 3D games.


Sure, but that's just a conventional, "native" OS. Most managed VMs don't require any special kernel support (except possibly some co-operation from the memory manager to ensure that JIT'd code is executable).
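
For illustration, here's roughly what that co-operation looks like (a minimal sketch assuming Linux/x86-64; real JITs are far more involved): native code is written into ordinary memory, and the OS must then agree to make that memory executable.

Code:
/* Classic JIT pattern: emit native code into writable memory,
   then ask the kernel to flip it to executable before calling it. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42 ; ret */
    const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    /* Without this step the buffer is just data; jumping to it
       would fault on any OS that enforces W^X. */
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

    int (*fn)(void) = (int (*)(void))buf;
    printf("JIT'd code returned %d\n", fn()); /* prints 42 */
    return munmap(buf, 4096);
}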



 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Wed Jan 21, 2015 6:54 am 
Roman wrote:
3) On architectures without contexts (where the VM is just a layer between hardware and software with its own protection system), stability can be a problem

If you decide to mix managed code with "architectures without contexts", then it's up to you to ensure the stability of the system. It seems to me like a case of over-complication.
Roman wrote:
Also, what would make performance faster? A small instruction set (faster interpretation), or a complex one (more operations per opcode, and therefore better performance for specific actions, especially if the VM has a specialised purpose)?

If you mean a hardware instruction set, then there are many ways of optimizing hardware for a particular task, not just the two mentioned in your question. You need to make the choice on a much wider basis.


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Wed Jan 21, 2015 7:18 am 
Brendan wrote:
In general, using the existing "hardware accelerated" security/isolation (instead of slower security/isolation done in software) would make it faster.

What kind of isolation do you mean? If it's about array bounds and null pointer checks, then why should we attribute them to isolation only? They belong to the software reliability area, which is much wider than isolation alone. And if we move such "isolation" to its home area (reliability), then we have no isolation overhead at all, unlike unmanaged solutions (even hardware-accelerated ones). The reliability overhead can then be viewed as a human performance enabler, instead of mere silicon clock counting. Then we get a really visible performance gain in the form of more and better software, which enables us to do more in the same time. For example, the Internet infrastructure is very dependent on Java-based solutions (on the server side, of course), and that is a consequence of the human performance gain in the area of software development. The next gain is Internet users' performance, when they have a more versatile environment for their tasks. In total, people are just much happier with a managed language as the Internet's background. It is a performance case, if we define some metrics for "happiness".
Brendan wrote:
Also note that software-based security/isolation can't defend against lower-level problems, including compiler bugs, RAM faults, hardware-based injection techniques, etc. What this mostly means is that hardware provides superior security/isolation.

But why should we mix isolation with memory fault tolerance? For memory faults we can have an interrupt, supported by the hardware, while having no problem with isolation at all. Such an approach is also suitable for other low-level problems. In short: we should fight each particular problem instead of baking "too general" solutions.
Brendan wrote:
However; if you're forced to have a layer of bloat in the middle (the VM) then the overhead of that layer of bloat can be more significant than "virtual instruction fetch" costs

If a VM compiles bytecode to a native representation (JIT), then we have no more overhead than a pure native solution has.


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Thu Jan 22, 2015 8:04 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

embryo wrote:
Brendan wrote:
In general, using the existing "hardware accelerated" security/isolation (instead of slower security/isolation done in software) would make it faster.

What kind of isolation do you mean?


I mean isolating processes from each other (so one process can't access another process' code or data), and isolating the kernel from processes (so no process can access the kernel's code or data).

embryo wrote:
Brendan wrote:
Also note that software-based security/isolation can't defend against lower-level problems, including compiler bugs, RAM faults, hardware-based injection techniques, etc. What this mostly means is that hardware provides superior security/isolation.

But why should we mix isolation with memory fault tolerance? For memory faults we can have an interrupt, supported by the hardware, while having no problem with isolation at all. Such an approach is also suitable for other low-level problems. In short: we should fight each particular problem instead of baking "too general" solutions.


In theory we could maybe have interrupts telling the OS about all sorts of hardware faults. In practice most of the necessary hardware either doesn't exist, doesn't help or is too expensive; and software developers like us have no way to force hardware to magically appear out of nowhere.

embryo wrote:
Brendan wrote:
However; if you're forced to have a layer of bloat in the middle (the VM) then the overhead of that layer of bloat can be more significant than "virtual instruction fetch" costs

If a VM compiles bytecode to a native representation (JIT), then we have no more overhead than a pure native solution has.


If you ignore the overhead of JIT, then there's no overhead of JIT. Also; my car is a submarine (as long as I ignore all the water that gets inside it when I try to drive under the ocean)!


Cheers,

Brendan



 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Fri Jan 23, 2015 8:11 am 
Brendan wrote:
embryo wrote:
Brendan wrote:
In general, using the existing "hardware accelerated" security/isolation (instead of slower security/isolation done in software) would make it faster.

What kind of isolation do you mean?


I mean isolating processes from each other (so one process can't access another process' code or data), and isolating the kernel from processes (so no process can access the kernel's code or data).

The "hardware accelerated" isolation here just crashes an application, while software solution makes it impossible to crash an application (and ensures the isolation, of course). It's a new quality. And this quality makes faster such things as development and popularity increase speed.

While technically it is quicker to crash an application than to ensure isolation without a crash, our personal demands are not satisfied with such a solution. Usually our demands are a bit wider than squeezing some microseconds out of crash time. More often our goal is reliable software. And if we consider the actual goal, then the only important isolation-related performance is the performance of achieving reliability. And here managed languages are far above their unmanaged counterparts.

So, it is possible to rephrase your words:

In general, using the existing "hardware accelerated" security/isolation (instead of technically slower security/isolation done in software) would make it slower if we consider the actual goal of the security/isolation.
Brendan wrote:
In theory we could maybe have interrupts telling the OS about all sorts of hardware faults. In practice most of the necessary hardware either doesn't exist, doesn't help or is too expensive; and software developers like us have no way to force hardware to magically appear out of nowhere.

Even if we are unable to detect a fault in a manageable manner, we are also unable to manage it with the help of hardware isolation. We just crash an application (as the best possible fault outcome) and pray for the fault to be a once-in-a-decade accident that spontaneously occurs and disappears right after the application crash. But if the "once in a decade" accident happened while some system code was in play, or if the accident is not as rare and self-healing as we expect, then we just have no isolation at all. So, in the context of managed code vs hardware-delivered isolation, we have a one-in-billions probability of a slightly better outcome in the case of hardware protection. But does it make any sense to count on a one-in-billions probability? And to trade such a happy gain for the benefits a managed solution can deliver?
Brendan wrote:
If you ignore the overhead of JIT, then there's no overhead of JIT. Also; my car is a submarine (as long as I ignore all the water that gets inside it when I try to drive under the ocean)!

Yes, the JIT approach has an overhead. But if it is an OS that provides a managed solution, then it is perfectly possible to JIT an application during its installation and forget about any overhead (except the installation time). Here again we see a very small gain for the unmanaged solution, which in no way can be compared with the benefits of a managed solution.


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Fri Jan 23, 2015 8:22 am 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
embryo wrote:
The "hardware accelerated" isolation here just crashes an application, while software solution makes it impossible to crash an application (and ensures the isolation, of course).

False. Hardware isolation only crashes the application when the kernel does that on purpose. You can always do something like handle SIGSEGV and do something else.

Further, software isolation can't detect all possible problems at compile time (unless you use a highly advanced type system in a non-Turing-complete language like Agda or Coq), so there are still runtime failures that have to be handled somehow. And even beyond that, there can be hardware failures that software isolation won't check against.
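
For example, the "do something else" option can look like this (a hedged POSIX sketch; recovering from SIGSEGV in production code takes far more care than this):

Code:
/* Hardware isolation need not mean a crash: catch SIGSEGV and
   jump back to a recovery point instead of dying. */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf recover;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(recover, 1);  /* unwind to the recovery point */
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(recover, 1) == 0) {
        volatile int *p = NULL;
        *p = 1;              /* hardware traps it, kernel signals us */
        puts("never reached");
    } else {
        puts("caught SIGSEGV; the application is still running");
    }
    return 0;
}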

_________________
[www.abubalay.com]


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Fri Jan 23, 2015 11:01 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

embryo wrote:
Brendan wrote:
embryo wrote:
What kind of isolation do you mean?


I mean isolating processes from each other (so one process can't access another process' code or data), and isolating the kernel from processes (so no process can access the kernel's code or data).

The "hardware accelerated" isolation here just crashes an application, while software solution makes it impossible to crash an application (and ensures the isolation, of course). It's a new quality. And this quality makes faster such things as development and popularity increase speed.

While technically it is quicker to crash an application than to ensure isolation without a crash, our personal demands are not satisfied with such a solution. Usually our demands are a bit wider than squeezing some microseconds out of crash time. More often our goal is reliable software. And if we consider the actual goal, then the only important isolation-related performance is the performance of achieving reliability. And here managed languages are far above their unmanaged counterparts.

So, it is possible to rephrase your words:

In general, using the existing "hardware accelerated" security/isolation (instead of technically slower security/isolation done in software) would make it slower if we consider the actual goal of the security/isolation.


You're conflating 2 very different things: security/isolation (e.g. protecting against potentially intentional/deliberate unauthorised access), and correctness (e.g. protecting against accidental programmer mistakes). The first is something OSs need to care about, and affects the design of everything (memory management, file permissions, disk quotas, user login, whatever). It's an OS issue, not a language issue.

For protecting against programmer mistakes (bugs), the spectrum runs from "trivial to detect at compile time" (e.g. syntax errors) all the way to "impossible to detect regardless of how much bloat you add for the sake of incompetent script kiddies" (e.g. "printf("Your name is %d!\n", user_age_in_years);"). Ideally, you want to be able to detect as many problems as possible before the software gets anywhere near the end user; but it's impossible to detect all of them and penalising correct software with pointless/unavoidable overhead won't help. In any case; it's a language issue, not an OS issue.

embryo wrote:
Brendan wrote:
In theory we could maybe have interrupts telling the OS about all sorts of hardware faults. In practice most of the necessary hardware either doesn't exist, doesn't help or is too expensive; and software developers like us have no way to force hardware to magically appear out of nowhere.

Even if we are unable to detect a fault in a manageable manner, we are also unable to manage it with the help of hardware isolation. We just crash an application (as the best possible fault outcome) and pray for the fault to be a once-in-a-decade accident that spontaneously occurs and disappears right after the application crash. But if the "once in a decade" accident happened while some system code was in play, or if the accident is not as rare and self-healing as we expect, then we just have no isolation at all. So, in the context of managed code vs hardware-delivered isolation, we have a one-in-billions probability of a slightly better outcome in the case of hardware protection. But does it make any sense to count on a one-in-billions probability? And to trade such a happy gain for the benefits a managed solution can deliver?


There are no benefits that a managed solution can deliver, and it would be quite foolish to trade anything for the disadvantages that managed solutions cause.

embryo wrote:
Brendan wrote:
If you ignore the overhead of JIT, then there's no overhead of JIT. Also; my car is a submarine (as long as I ignore all the water that gets inside it when I try to drive under the ocean)!

Yes, the JIT approach has an overhead. But if it is an OS that provides a managed solution, then it is perfectly possible to JIT an application during its installation and forget about any overhead (except the installation time). Here again we see a very small gain for the unmanaged solution, which in no way can be compared with the benefits of a managed solution.


It's impossible to JIT an application during its installation, because as soon as you do that it's "AOT" (Ahead Of Time compilation) and not JIT at all.

Note that the problem with managed code is that there are problems that are impossible to detect at compile time that force you to do run-time checking, and that run-time checking adds overhead. It doesn't matter if you compile to native before the end user gets it, or compile to native when the end user installs it, or use JIT; in all cases the run-time checking can't be avoided and adds overhead during execution; and in almost all cases that run-time checking is either unnecessary or ineffective or both.


Cheers,

Brendan



 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Fri Jan 23, 2015 10:09 pm 

Joined: Wed Jan 21, 2015 4:19 pm
Posts: 6
I would just like to point out here that, at the end of the line, native code is generated.

Managed OSes can take advantage of all the hardware features that current OSes use.

Managed OSes can also introduce new features that current OSes don't provide.

Quote:
Essentially, for normal OSs the hardware provides security/isolation between processes, and for managed OSs this hardware support is ignored and security/isolation is provided by software (e.g. compiler and run-time) instead.

^^^ That's just pure nonsense. A properly designed managed OS will use hardware security/isolation and sprinkle its own software extensions on top.

Quote:
Pros:
Hardware is abstracted away behind simplified interfaces
Applications are simpler to design and code, and are generally smaller and more stable
Applications can run on multiple platforms (without having to be pre-compiled on each platform)
Applications have limited access to hardware and system resources, which limits malware potential

Cons:
Applications normally cannot take advantage of platform-specific hardware
Applications are limited to features supported by the VM
VM design must anticipate various use cases for virtually any future application

This sounds more realistic, except that, depending on the design of the OS, the listed cons can be overcome to an extent.

The first con can be overcome by the hardware abstraction listed under the pros.
The second con doesn't really exist: while the VM has limitations, they can be reduced to being unnoticeable with proper abstraction using interfaces.
The third con is valid, but is almost non-existent with modern VMs, since they are very flexible and barebones, meaning that use cases depend on the exposed interfaces rather than the VM itself, and interfaces can easily be added by the OS.

_________________
MOSA Project


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Sat Jan 24, 2015 4:59 am 
Rusky wrote:
embryo wrote:
The "hardware accelerated" isolation here just crashes an application, while software solution makes it impossible to crash an application (and ensures the isolation, of course).

False. Hardware isolation only crashes the application when the kernel does that on purpose. You can always do something like handle SIGSEGV and do something else.

And what is this "something else"?
Rusky wrote:
Further, software isolation can't detect all possible problems at compile time (unless you use a highly advanced type system in a non-Turing-complete language like Agda or Coq), so there are still runtime failures that have to be handled somehow.

But have you considered that while software isolation can't detect all possible problems, it still CAN detect the most frequent ones? Compare that with no detection at all in the hardware case (at least none in a manageable form).
Rusky wrote:
And even beyond that, there can be hardware failures that software isolation won't check against.

Of course, we can hurt ourselves even in our own home, but compare such a manageable situation with something like a war or another serious disaster (i.e. an unmanageable situation). I suppose you would prefer something more manageable.


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Sat Jan 24, 2015 5:51 am 
Brendan wrote:
You're conflating 2 very different things: security/isolation (e.g. protecting against potentially intentional/deliberate unauthorised access), and correctness (e.g. protecting against accidental programmer mistakes). The first is something OSs need to care about

We are talking about a managed OS. Such an OS is supposed to use managed code very extensively. Then we have a system with many of its characteristics defined by the managed code. Such an OS can avoid some problems just because of the nature of managed code, so its developer can write less code and achieve greater development speed. That again leads to the influence of managed code, but now in the form of more features the OS can have. All of this means there is a very tightly coupled bunch of things under the one name: managed OS.

And in particular, improved code correctness leads to the improved security/isolation an OS can have. So, can we really disjoin such things?
Brendan wrote:
Ideally, you want to be able to detect as many problems as possible before the software gets anywhere near the end user; but it's impossible to detect all of them and penalising correct software with pointless/unavoidable overhead won't help.

The "pointless/unavoidable overhead" can be reduced in case of managed code. But unmanaged code just insists on one wrong thing - it supposes that human can make it better. And it's just not true for many problems. The trivial example here is how much errors every compiler catches in a human provided code. So, all such problems just must be handled to the computer. And a managed solution (and a managed OS as it's paramount) shows us a really efficient way of freeing a developer from those boring small problems. Some extra overhead here will pay for it with a lot more stable, secure and reliable solutions. The Java server side success just demonstrates it in a very obvious form.

Here again we should return to the definition of performance. In the case of managed code (a managed OS), performance is about a human's achievements, while in the case of unmanaged code, performance is about a human's involvement in boring details. And while being able to cope with boring details can be viewed as an achievement, the time spent on such an "achievement" leaves the achiever no possibility to extend his achievements much wider.
Brendan wrote:
embryo wrote:
in the context of managed code vs hardware-delivered isolation, we have a one-in-billions probability of a slightly better outcome in the case of hardware protection. But does it make any sense to count on a one-in-billions probability?


There are no benefits that a managed solution can deliver, and it would be quite foolish to trade anything for the disadvantages that managed solutions cause.

Trading an extra-small probability of better isolation for all the effort saved by managed solutions looks like trading a grain of sand for the whole universe.
Brendan wrote:
It's impossible to JIT an application during its installation, because as soon as you do that it's "AOT" (Ahead Of Time compilation) and not JIT at all.

Ok, let's use your term; let it be AOT. But what does the name change about the subject under discussion?
Brendan wrote:
Note that the problem with managed code is that there are problems that are impossible to detect at compile time that force you to do run-time checking, and that run-time checking adds overhead.

Yes, there is an overhead. But its impact is being reduced all the time. And the advantages of a managed solution make such impact practically invisible.

So, I see it is your devotion to unmanaged code's low-level capabilities that prevents you from looking at managed solutions without some animosity. And then I want to point out a simple fact: a managed solution is not an enemy, and if a developer provides it with some suitable hints, it will produce much better code. And such hints can be elaborated up to the same level that unmanaged code provides for low-level features. But unlike unmanaged code, a managed solution just does not require me to think about such low-level details every time; it frees my time, which is a very nice outcome.


 Post subject: Re: What do you think about managed code and OSes written in
PostPosted: Sat Jan 24, 2015 12:20 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

embryo wrote:
Brendan wrote:
You're conflating 2 very different things: security/isolation (e.g. protecting against potentially intentional/deliberate unauthorised access), and correctness (e.g. protecting against accidental programmer mistakes). The first is something OSs need to care about

We are talking about a managed OS. Such an OS is supposed to use managed code very extensively. Then we have a system with many of its characteristics defined by the managed code. Such an OS can avoid some problems just because of the nature of managed code, so its developer can write less code and achieve greater development speed. That again leads to the influence of managed code, but now in the form of more features the OS can have. All of this means there is a very tightly coupled bunch of things under the one name: managed OS.


Let's write a C compiler. Let's write a C compiler that inserts run-time checks everywhere a pointer is used (and sends a SIGSEGV if it detects the pointer wasn't valid). Let's write a C compiler that generates code to track things like array sizes and insert checks to detect "array index out of bounds". Let's write a C compiler that inserts additional "is divisor zero?" checks before every division (and sends SIGFPE).
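
For illustration only (every name below is invented), the generated code for something like "r = a[i] / d;" might end up looking like this:

Code:
/* Sketch of what the hypothetical checking compiler might emit. */
#include <signal.h>
#include <stdlib.h>

int checked_index_div(const int *a, size_t len, size_t i, int d)
{
    if (a == NULL) raise(SIGSEGV); /* inserted pointer check */
    if (i >= len)  raise(SIGSEGV); /* inserted bounds check  */
    if (d == 0)    raise(SIGFPE);  /* inserted divisor check */
    return a[i] / d;               /* the code you actually wrote */
}

int main(void)
{
    int a[3] = { 10, 20, 30 };
    return checked_index_div(a, 3, 1, 2); /* passes every check: 10 */
}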

When software crashes, do you think end users will be glad that the problem was detected by software and not hardware? Do you think developers will be able to write code in C faster with the new C compiler? Do you think the additional overhead would be worthwhile?

What if we used a managed language instead of C; and instead of software crashing because of SIGSEGV it crashed because of a "reference to object was null" exception. Would end users be glad that the software crashed in that case? Would developers be able to write code faster because the error message is different? Do you think the additional overhead would be worthwhile now?

The problem is that we're detecting problems at run-time. What if we actually were able to guarantee that there are no problems left to detect at run-time (e.g. by guaranteeing that all possible problems will be detected during "ahead of time" compiling)? It's impossible; but in that case (at least in theory) developers would be able to develop software faster and there wouldn't be any run-time overhead either; however, there also wouldn't be any difference between "managed" and "unmanaged".

embryo wrote:
And in particular, improved code correctness leads to the improved security/isolation an OS can have. So, can we really disjoin such things?


No. You can use a managed language for both security/isolation and correctness, but security/isolation and correctness are still 2 different things.

embryo wrote:
Brendan wrote:
Ideally, you want to be able to detect as many problems as possible before the software gets anywhere near the end user; but it's impossible to detect all of them and penalising correct software with pointless/unavoidable overhead won't help.

The "pointless/unavoidable overhead" can be reduced in case of managed code. But unmanaged code just insists on one wrong thing - it supposes that human can make it better. And it's just not true for many problems. The trivial example here is how much errors every compiler catches in a human provided code. So, all such problems just must be handled to the computer. And a managed solution (and a managed OS as it's paramount) shows us a really efficient way of freeing a developer from those boring small problems. Some extra overhead here will pay for it with a lot more stable, secure and reliable solutions. The Java server side success just demonstrates it in a very obvious form.


Unmanaged code is "optimistic" - it assumes the code is correct (regardless of whether it is or not). Managed code is "pessimistic" - it assumes code is not correct and then penalises performance (regardless of whether code is correct or not).

embryo wrote:
Here again we should return to the definition of performance. In the case of managed code (a managed OS), performance is about a human's achievements, while in the case of unmanaged code, performance is about a human's involvement in boring details. And while being able to cope with boring details can be viewed as an achievement, the time spent on such an "achievement" leaves the achiever no possibility to extend his achievements much wider.


This is pure nonsense. If you write some code and compile it with 2 different compilers (one that produces native/unmanaged code and another that produces managed code); then the amount of work it took to write the code is identical regardless of which compiler you use and regardless of whether the code ends up as managed or unmanaged.

What you're trying to say is that some languages make detecting bugs easier (regardless of whether the compiler produces managed or unmanaged code) and this can affect "programmer productivity" (but has nothing to do with managed vs. unmanaged).

embryo wrote:
Brendan wrote:
embryo wrote:
in the context of managed code vs hardware-delivered isolation, we have a one-in-billions probability of a slightly better outcome in the case of hardware protection. But does it make any sense to count on a one-in-billions probability?


There are no benefits that a managed solution can deliver, and it would be quite foolish to trade anything for the disadvantages that managed solutions cause.

Trading an extra-small probability of better isolation for all the effort saved by managed solutions looks like trading a grain of sand for the whole universe.


Do you execute all C/C++ code inside a "managed" virtual machine environment like valgrind, where software rather than hardware is used to detect run-time problems?


Cheers,

Brendan


