Brynet-Inc wrote:
You my friend, are the living incarnation of evil...
Thank you.
Quote:
Instead of eliminating superior languages for limited inferior ones, how about we eliminate inferior programmers?
Sounds like a plan
Normally I wouldn't respond to your trolls, but I'd just like to point out that you clearly have no idea how much productivity is lost due to stupid mistakes caused by even decent programmers, never mind inferior ones. The memory management issues that C and C++ force developers to address are a pain in the butt for most userland programming tasks. I tell you this from my 10 years of experience developing commercial software, mostly in C++.
Where exactly do your pointless comments come from, other than an inferiority complex?
Candy wrote:
While I agree that concurrency and isolation need to be solved at the programming language level, I disagree on the context you bring with it.
You may be reading more into that context than I intended to say.
Quote:
There is no need for the programming language to be managed or anything such; the only thing the programming language should and must do is make it easier to use the abstractions.
Drop the word "managed" and replace it with "type-safe", and remember that making it easier to use abstractions involves making it more difficult to use them incorrectly, and I totally agree with you.
Quote:
They should be made composable as you've just stated before. I see the future of language hiding the complexity of isolation more in encapsulation within language constructs (chosen constructs) rather than hiding it in language modifications.
This is a good philosophy when it comes to language design, but this issue is completely orthogonal to what I was talking about. For example, C# is a type-safe language that has language modifications for new abstractions (e.g. -- LINQ), while Scala is a type-safe language that is generic enough to add powerful new features via libraries alone (e.g. -- Erlang-style Actors). In many ways, Scala reminds me of C++ -- not in terms of syntax or the C part, but in terms of the philosophy behind it (multi-paradigm, extensible syntax, powerful libraries). You should check it out.
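The actor idea Scala borrowed from Erlang isn't magic, by the way -- at its core it's just a mailbox plus a dedicated thread. Here's a toy sketch in plain Java (the class and method names are my own invention, not Scala's actual Actor API): all mutable state is touched only by the actor's own thread, so isolation comes from the message queue rather than from locks.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// A toy "actor": its state is touched only by its own worker thread,
// so no locks are needed around the state itself -- isolation comes
// from the mailbox, not from shared-memory synchronization.
class SummingActor {
    private static final Integer STOP = Integer.MIN_VALUE; // poison pill
    private final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
    private final AtomicLong result = new AtomicLong();
    private final Thread worker = new Thread(() -> {
        long sum = 0;
        try {
            while (true) {
                Integer msg = mailbox.take();   // block until a message arrives
                if (msg.equals(STOP)) break;
                sum += msg;                     // private state, single thread
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        result.set(sum);
    });

    void start()       { worker.start(); }
    void send(int msg) { mailbox.add(msg); }    // asynchronous; sender never blocks

    long stopAndJoin() throws InterruptedException {
        mailbox.add(STOP);
        worker.join();
        return result.get();
    }
}

public class ActorDemo {
    public static void main(String[] args) throws InterruptedException {
        SummingActor actor = new SummingActor();
        actor.start();
        for (int i = 1; i <= 100; i++) actor.send(i);
        System.out.println(actor.stopAndJoin()); // prints 5050
    }
}
```

The point is that nothing here required language support -- which is exactly why a sufficiently expressive language can ship this as a library.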
Quote:
The operating system must act as an arbiter for all code, a final barrier in system protection.
Yep, that's what I meant. In a system that uses software isolation, the OS guarantees that isolation by verifying code for type-safety before running it (usually when it's installed, so it doesn't need to be done every time you launch a process).
Quote:
Make it easy to do it right and people will. The managed approach is the antithesis of the C++ base foundation - make stuff quick and as safe as that allows.
In the next few years, I think hardware will progress to the point where a C++ programmer can no longer easily write code that outperforms a sufficiently clever optimizing compiler for a higher-level language. The main reason? Multi-core. In the coming years, the performance of code is going to be defined mostly by its degree of parallelism, scalability, and locality of reference (important for NUMA systems, which I think will become increasingly prevalent). We're not programming simple von Neumann machines any more. So I don't buy this "speed first" argument in favour of C and C++. Secondly, with all the evil malware out there today, I disagree with speed as a priority over safety in the first place.
Quote:
The managed approach is based more on making it safe and then quick. Safety is a good point but it has its merits and I think pointless checks as they are being inserted automatically into managed code are not the way to security.
Are you saying that things like array bounds checking don't achieve security, or that you don't like the sacrifice of performance? If it's the first, I agree that it isn't a full solution, but such type-safe code can at least prevent stupid mistakes like buffer overruns, and if there are run-time checks required to ensure that, I think it's worth it. If it's the second, a good optimizing compiler can actually eliminate a lot of run-time checks from the generated code. In Singularity for example, dynamic loading is not permitted, so all the code that is going to run in a process is available at the time it is verified, compiled to native code, and optimized. This means that the IL-to-native compiler can apply many whole-program optimizations that are much more far-reaching than what a typical modern compiler can achieve (be it C/C++, Java, C#, or whatever). Also, there is a lot of research going on into dependent types, which allow the compiler to prove that certain pre-conditions are met at compile time, and thus drop many run-time checks from the generated code.
I think you're imagining a lot of run-time overhead that doesn't need to be there. With a good enough type system and compiler, I think a higher-level language can outperform C/C++ on nearly all tasks.
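To make the bounds-check point concrete, here's a hedged Java sketch (exactly which checks get eliminated varies by VM and compiler, so treat the comments as the typical case, not a guarantee): when the index is provably within the array's own length, the compiler can hoist or drop the per-access check; when the index comes from outside, the check stays -- and turns a would-be buffer overrun into a clean exception.

```java
public class BoundsCheckDemo {
    // The index i is provably in [0, a.length), so a JIT (or an
    // install-time compiler like Singularity's) can typically drop the
    // per-element bounds check after verifying the loop shape once.
    static long sumAll(int[] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];   // check provably redundant -> usually eliminated
        }
        return sum;
    }

    // Here the indices come from outside, so the check must stay;
    // type safety turns a potential memory-corruption bug into an exception.
    static long sumAt(int[] a, int[] indices) {
        long sum = 0;
        for (int idx : indices) {
            sum += a[idx]; // check retained; throws instead of corrupting memory
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};
        System.out.println(sumAll(data));                 // prints 15
        System.out.println(sumAt(data, new int[]{0, 4})); // prints 6
        try {
            sumAt(data, new int[]{99});                   // out of bounds
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught, memory intact");
        }
    }
}
```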
Crazed123 wrote:
I just think managed-code operating systems (like Oberon from the Development board) are dismal, evil, horrible ideas because they force software developers to retarget their language toolchains to whatever VM architecture the system uses, often adding significant overhead.
I think you're imagining the same phantoms that Candy is. Take Singularity, for example. The only extra run-time overheads (besides GC, which I just don't have time to get into in this post) are some extra array bounds checks and the like, many of which are optimized out before the code runs anyway. There is no "VM" per se, just a type-safe language, compiler, and verifier.
Quote:
After all, a C implementation of "ls" doesn't really need the features provided by say... the .NET framework or CIL VM. Why should it have to target them?
Depends what you mean by "target them". If you wanted to ensure that your "ls" wasn't actually "ls_secretly_hijack_your_computer", you might want to compile it to an intermediate language that could be verified easily, translated to native code, and optimized at install-time. There is way less overhead involved in this than in today's JIT-compiling CLR or JVM.
If by "target them" you mean "use base class libraries", ls should use whatever it needs, and no more. In Singularity for example (sorry if I sound like a broken record, but it is the system of this type that I'm most familiar with), all system libraries are linked statically at install-time, and then the IL-to-native compiler does "tree-shaking" -- basically it eliminates all dead code that is never used while creating the final native executable. In the end, you only pay for what you use, which is the point of C/C++, isn't it?
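Tree-shaking is essentially a reachability computation over the call graph: mark everything transitively callable from the entry point, drop the rest. Here's a toy Java sketch of that idea (the graph and method names are made up for illustration; this is nothing like Singularity's actual compiler internals):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TreeShake {
    // Keep only the methods transitively reachable from the entry point;
    // everything else is dead code and can be dropped from the executable.
    static Set<String> reachable(Map<String, List<String>> callGraph, String entry) {
        Set<String> live = new HashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(entry);
        while (!work.isEmpty()) {
            String m = work.pop();
            if (!live.add(m)) continue;  // already marked live
            for (String callee : callGraph.getOrDefault(m, List.of())) {
                work.push(callee);
            }
        }
        return live;
    }

    public static void main(String[] args) {
        // "ls" statically links the whole base library but uses a sliver of it.
        Map<String, List<String>> graph = Map.of(
            "ls.main",     List.of("fs.readDir", "io.println"),
            "fs.readDir",  List.of("io.syscall"),
            "io.println",  List.of("io.syscall"),
            "net.httpGet", List.of("io.syscall"),  // dead: never called by ls
            "xml.parse",   List.of("io.syscall")   // dead: never called by ls
        );
        Set<String> live = reachable(graph, "ls.main");
        System.out.println(live.contains("net.httpGet")); // prints false
        System.out.println(live.size());                  // prints 4
    }
}
```

So the hypothetical "ls" carries none of the networking or XML code it never calls -- pay for what you use, same as C/C++.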
Quote:
Furthermore, why should programs need to be written in the same language to communicate well or be composable (as these "One Managed Language or VM" proposals seem to go for)? I :heart: Bash script exactly because it doesn't require that.
Now here you've hit the big problem I see with this type of system -- choice. I believe in choice, especially when it comes to languages. I also believe that it's possible to go too far in the direction of designing languages with exotic type systems that can be compiled into very efficient code, but are very difficult for the average programmer to understand (Scala tilts a bit too far in this direction IMO). If someone wants to program in a dynamic language like Ruby on such an OS, they shouldn't be unduly punished with crappy performance. I'm not sure what the right trade-off is here.
However, I still believe that this is a problem for language designers to solve, not OS designers. I think this type of OS architecture is the future, one way or another.