OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Thu Jan 21, 2016 8:23 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
I realize that it has been several months now, and that this is unequivocal thread necromancy, but I was re-reading this and finally realized what it was about the following statement that was bothering me:

Brendan wrote:
I don't just have ranged types, I only have ranged types.


The problem is that this idea carries with it a dangerous set of assumptions, ones which could in fact be subverted. I am not saying it is a bad idea, per se, but rather that there is a subtle trap to it that you need to recognize.

Oh, and @embryo2 should read this too, as what I am about to say regarding static checks applies at least as much to runtime checks as well.

In the absence of other information, I can make one of two assumptions regarding the absolute maximum size of a ranged type in your type system: either the limit is based on the physical limitations of the underlying hardware types, or there is a facility in the compiler to support bignum ranges (ranges exceeding the maximum range magnitude of the relevant system type size, e.g., an integer with a bit width greater than 64 bits on an x86-64 system).

In the latter case, the compiler would need to generate code for such support, and recognize the need to insert it when the absolute range magnitude exceeds that of the underlying maximum system type. It would be problematic, especially when it is needed in conjunction with heap memory, but entirely possible nonetheless. There is in principle a question of how to handle the case of a range magnitude or range boundaries which cannot be represented in the maximum system type, but in practice this is unlikely to occur (I know, I know, famous last words, but in the case of the range magnitude at least, the size of the maximum addressing space is likely to be less than the system type's range magnitude, and range boundaries often need not be represented at runtime at all). Since Thelema is a Lisp, which traditionally has included bigint, big fixnum, and big flonum at the language level, this is more or less the solution I am going with myself, but it isn't one most language designers would choose.
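
To make the size question concrete, here is a rough sketch in plain C of the calculation a compiler might perform when sizing a ranged type (my own illustration - not Thelema, not Brendan's language, and the function name is invented for the example):
Code:
#include <stdint.h>

/* How many bytes are needed to store a value of "range min to max"?
 * Only the range magnitude (max - min) matters; values can be stored
 * biased by min, so the boundaries need not exist at runtime at all. */
static unsigned range_storage_bytes(int64_t min, int64_t max)
{
    uint64_t magnitude = (uint64_t)max - (uint64_t)min;
    unsigned bits = 0;

    do {                    /* count bits in the magnitude; 0 still needs 1 */
        bits++;
        magnitude >>= 1;
    } while (magnitude != 0);

    return (bits + 7) / 8;  /* round up to whole bytes */
}

/* range_storage_bytes(1, 1234567890) == 4; a bignum-capable compiler would
 * apply the same rule with a multi-word magnitude instead of uint64_t. */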

The former case, placing a maximum limit on the range magnitude, sounds simpler, and fits with common existing practices... but it can be subverted fairly easily by a careless or unscrupulous coder, simply by changing the representation to one based on a dynamically resizable array or list type, and implementing (however clumsily) the bignum operations themselves. By chaining the values together, one can kludge together a numeric type that bypasses the range checks entirely.
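
In C-like terms, the kludge would look something like this (a sketch of the subversion itself, not code from any real project; the point is that every individual store stays inside its declared 0..255 range, so no range check would ever fire):
Code:
#include <stdint.h>
#include <stdlib.h>

/* Nothing here is a "number" as far as the type system is concerned, yet
 * it behaves as an arbitrarily large integer. */
typedef struct {
    size_t  nlimbs;
    uint8_t *limbs;     /* little-endian base-256 digits */
} kludge_bignum;

/* a += b; assumes a->nlimbs >= b->nlimbs for brevity (and that realloc
 * succeeds - this is a sketch, not production code) */
void kludge_add(kludge_bignum *a, const kludge_bignum *b)
{
    unsigned carry = 0;

    for (size_t i = 0; i < a->nlimbs; i++) {
        unsigned sum = a->limbs[i] + (i < b->nlimbs ? b->limbs[i] : 0) + carry;
        a->limbs[i] = sum & 0xFF;   /* each store is within 0..255 */
        carry = sum >> 8;
    }
    if (carry != 0) {               /* the value just outgrew its old "range" */
        a->limbs = realloc(a->limbs, a->nlimbs + 1);
        a->limbs[a->nlimbs++] = (uint8_t)carry;
    }
}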

This is not just a technical limitation; it is fundamental, being a direct consequence of mathematical incompleteness. Any language which allows dynamically allocated memory values is vulnerable to this. It could be done regardless of whether the compiler supports big ranges or not, but providing bigint, big fixnum, and bigfloat support systematically makes it much less likely that someone would do so out of poor design (malice is another matter, but there's only so much one can do at that level about code that intentionally subverts security, anyway). The choice then becomes not one of supporting unranged types or not, but of either explicitly disallowing dynamic memory and cutting off a large number of programming possibilities, or acknowledging that you are implicitly allowing unranged types while doing everything to discourage them as a best practice (which in practice means providing language-level support for bignums, IMO).

This is hardly limited to numeric types, either, though most of the other places such subversion could occur would involve providing range support for non-numeric types (e.g., enums and sets), or would involve intrinsically unrangeable types (more or less any dynamically allocated collection type).

Now, as I said, I am not arguing against your respective solutions; since there are no unequivocal solutions, they have to be judged on a case-by-case basis, and my own approach is as flawed as any other. I am merely bringing up something you were probably aware of, but might not have specifically considered in this context.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Fri Jan 22, 2016 3:02 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Schol-R-LEA wrote:
Brendan wrote:
I don't just have ranged types, I only have ranged types.


The problem is that this idea carries with it a dangerous set of assumptions, ones which could in fact be subverted. I am not saying it is a bad idea, per se, but rather that there is a subtle trap to it that you need to recognize.

Oh, and @embryo2 should read this too, as what I am about to say regarding static checks applies at least as much to runtime checks as well.

In the absence of other information, I can make one of two assumptions regarding the absolute maximum size of a ranged type in your type system: either the limit is based on the physical limitations of the underlying hardware types, or there is a facility in the compiler to support bignum ranges (ranges exceeding the maximum range magnitude of the relevant system type size, e.g., an integer with a bit width greater than 64 bits on an x86-64 system).


It's the latter.

Schol-R-LEA wrote:
In the latter case, the compiler would need to generate code for such support, and recognize the need to insert it when the absolute range magnitude exceeds that of the underlying maximum system type. It would be problematic, especially when it is needed in conjunction with heap memory, but entirely possible nonetheless. There is in principle a question of how to handle the case of a range magnitude or range boundaries which cannot be represented in the maximum system type, but in practice this is unlikely to occur (I know, I know, famous last words, but in the case of the range magnitude at least, the size of the maximum addressing space is likely to be less than the system type's range magnitude, and range boundaries often need not be represented at runtime at all). Since Thelema is a Lisp, which traditionally has included bigint, big fixnum, and big flonum at the language level, this is more or less the solution I am going with myself, but it isn't one most language designers would choose.


My types (and variables) have "properties". For example, if you do "typedef myType = range 1 to 1234567890" then you can do "myType.size" to find out how many bytes it will consume (and use "myType.max", "myType.min"). This makes allocating heap space relatively simple (no different from "malloc( sizeof(something) );" in C).

The other problem is temporary storage for intermediate values within expressions, and for (larger than supported) multiplication and division; but for this the compiler can just use stack space.

Note that there was also an upper limit of 256 bits for integers and floating-point significands; but this doesn't apply to intermediate values within expressions, and you could still do "x = (x * x) % x.max" where "x" is a 256-bit variable, and where the intermediate result ("x * x") would be 512 bits.
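
The same idea one level down, at a scale C can already express (just an illustration - "unsigned __int128" is a GCC/Clang extension on 64-bit targets, not my language and not standard C):
Code:
#include <stdint.h>

/* Analogue of "x = (x * x) % x.max" one level down: 64-bit operands, a
 * 128-bit intermediate product, and a result that fits in 64 bits again.
 * The wide intermediate only needs temporary (stack/register) storage. */
uint64_t square_mod_max(uint64_t x)
{
    unsigned __int128 product = (unsigned __int128)x * x;   /* up to 128 bits */
    return (uint64_t)(product % UINT64_MAX);                /* < x.max, fits */
}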

Schol-R-LEA wrote:
The former case, placing a maximum limit on the range magnitude, sounds simpler, and fits with common existing practices...


The former case breaks any hope of truly portable code.

Schol-R-LEA wrote:
This is not just a technical limitation; it is fundamental, being a direct consequence of mathematical incompleteness. Any language which allows dynamically allocated memory values is vulnerable to this. It could be done regardless of whether the compiler supports big ranges or not, but providing bigint, big fixnum, and bigfloat support systematically makes it much less likely that someone would do so out of poor design (malice is another matter, but there's only so much one can do at that level about code that intentionally subverts security, anyway). The choice then becomes not one of supporting unranged types or not, but of either explicitly disallowing dynamic memory and cutting off a large number of programming possibilities, or acknowledging that you are implicitly allowing unranged types while doing everything to discourage them as a best practice (which in practice means providing language-level support for bignums, IMO).


I don't see why you think there's a problem. If I want (e.g.) a 73-bit integer, why should I care if the CPU is an 8-bit CPU that uses ten 8-bit values, or a 16-bit CPU that uses five 16-bit values, or a 128-bit CPU that uses one 128-bit value? I can still do "malloc( myType.size )" and "myPointer++;" regardless.

Now what if I do this:
Code:
struct myFreakyBigNum {
    u111 digit1
    range 0 to 12345 digit2
    u77 digit3
}

myFreakyBigNum *addFreaky( myFreakyBigNum *v1, myFreakyBigNum *v2) {
    myFreakyBigNum *result
    u1 oldCarry
    u1 carry

    result = malloc(myFreakyBigNum.size)

    carry = (v1->digit1 + v2->digit1) >> 111
    result->digit1 = (v1->digit1 + v2->digit1) % (v1->digit1.max + 1)
    oldCarry = carry
    carry = (v1->digit2 + v2->digit2 + oldCarry) / (v1->digit2.max + 1)
    result->digit2 = (v1->digit2 + v2->digit2 + oldCarry) % (v1->digit2.max + 1)
    result->digit3 = v1->digit3 + v2->digit3 + carry

    return result
}


There's still no problem - the compiler checks every assignment and knows everything fits and nothing overflows; except for "result->digit3 = v1->digit3 + v2->digit3 + carry" where the compiler knows it can overflow and generates an error.

It's not like you can do "x = myLookupTable[myStructure]" or "u9 x = myArrayOfDigits;" to trick the compiler into using your big number (you'd just get a type checking error before the compiler bothers checking ranges).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Fri Jan 22, 2016 8:40 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Brendan wrote:
Schol-R-LEA wrote:
The former case, placing a maximum limit on the range magnitude, sounds simpler, and fits with common existing practices...


The former case breaks any hope of truly portable code.


Quite so. This on its own is a good enough reason to take the approach of supporting bignums at the language level.

Brendan wrote:
Schol-R-LEA wrote:
This is not just a technical limitation; it is fundamental, being a direct consequence of mathematical incompleteness. Any language which allows dynamically allocated memory values is vulnerable to this. It could be done regardless of whether the compiler supports big ranges or not, but providing bigint, big fixnum, and bigfloat support systematically makes it much less likely that someone would do so out of poor design (malice is another matter, but there's only so much one can do at that level about code that intentionally subverts security, anyway). The choice then becomes not one of supporting unranged types or not, but of either explicitly disallowing dynamic memory and cutting off a large number of programming possibilities, or acknowledging that you are implicitly allowing unranged types while doing everything to discourage them as a best practice (which in practice means providing language-level support for bignums, IMO).


I don't see why you think there's a problem. If I want (e.g.) a 73-bit integer, why should I care if the CPU is an 8-bit CPU that uses ten 8-bit values, or a 16-bit CPU that uses five 16-bit values, or a 128-bit CPU that uses one 128-bit value?


In itself, it isn't a problem at all, especially if you are providing the support for it directly in the language. The potential problem comes when you have developers writing their own, potentially buggy, inefficient, and insecure versions of bignums that are outside of the numeric type hierarchy. If a client-programmer creates a 'type' that consists of a collection of structs (possibly entirely implicitly, without wrapping it in an actual type declaration), the compiler (in most cases) won't see how it relates to the other numeric types, so the whole system of ranged type protections goes out the window. In the absence of language support, the client-programmers would be forced to do this if they need very large numeric values - and you can bet that there would be not one but several such implementations, each incompatible with the others, and each with its own bugs and security flaws.

Again, I'm just mentioning the potential issue, not because I think that this is a particularly serious flaw, but because it applies to any kind of type security measures that don't outright limit the language's expressive power. I know you are already aware of this, so it is just a reminder for you; however, I suspect that some others may not understand this, so I am also using this as a teachable moment for OS-Devvers as a whole (are you reading this now, Embryo?).

Similar tradeoffs regarding what to support, and how, come up all the time. To quote myself from a different forum:

Schol-R-LEA wrote:
I remember having a conversation with Paul Graham about why not including an OOP framework in Arc was a bad idea; what I told him (which I'm sure he'd heard from many, many others) was that the problem was that any Lisp worth its salt would make it trivial to implement an OO framework, and instead of banning OOP from his brainchild, he'd end up with a few thousand incompatible OO frameworks clogging up the library space. This isn't speculation: nearly every Scheme implementation has its own special-snowflake OO framework, and none of them work alike or interoperate to any sane degree. Scheme can get away with that, because it is a teaching language and meant to show you exactly that sort of thing; Arc was intended to be industrial-strength, and that sort of thing would be disastrous for it.


This is more or less the same issue here; the real problem comes as much from having multiple, competing implementations of a basic language facility as anything else. You can't prevent someone from writing their own implementation of such a facility, but you can forestall the need for it in many cases. The tradeoff comes from the need to keep the implementations of the language and libraries manageably small, while still giving enough support to limit the cases where the client-programmers have to reinvent the wheel in incompatible and risky ways.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Fri Jan 22, 2016 8:13 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Schol-R-LEA wrote:
Brendan wrote:
I don't see why you think there's a problem. If I want (e.g.) a 73-bit integer, why should I care if the CPU is an 8-bit CPU that uses ten 8-bit values, or a 16-bit CPU that uses five 16-bit values, or a 128-bit CPU that uses one 128-bit value?


In itself, it isn't a problem at all, especially if you are providing the support for it directly in the language. The potential problem comes when you have developers writing their own, potentially buggy, inefficient, and insecure versions of bignums that are outside of the numeric type hierarchy. If a client-programmer creates a 'type' that consists of a collection of structs (possibly entirely implicitly, without wrapping it in an actual type declaration), the compiler (in most cases) won't see how it relates to the other numeric types, so the whole system of ranged type protections goes out the window.


In this case (where the programmer has constructed their own numerical types), the compiler won't see how it relates to the other numeric types; however, the compiler has no reason to care, because the "programmer constructed" numerical types can't be directly used for anything anyway, and the compiler's range checking still works because it checks that each piece is within the range for that piece.

To understand this; imagine the compiler only supports/understands 8-bit variables, and that you're combining 4 of these variables and using them as a "meta-number". You want to store the number 0x1234567890, so you use some code that ends up doing something like this:
Code:
    byte1 = 0x1234567890 & 0xFF
    byte2 = (0x1234567890 >> 8) & 0xFF
    byte3 = (0x1234567890 >> 16) & 0xFF
    byte4 = 0x1234567890 >> 24

In this case the compiler has no idea that the full "meta-number" is 0x1234567890; but it complains that the result of "0x1234567890 >> 24" doesn't fit in "byte4"; and still prevents you from storing 0x1234567890 in your "meta-number".

If you implement your own operators (addition, subtraction, multiplication, etc) for your "group of four 8-bit variables" system; then the compiler will still prevent overflows. The main difference is the error messages - e.g. you might get the error message "something doesn't fit in byte4" (or maybe "index=4 is out of range for array of 4 bytes") and won't get an error message like "result of add/sub/mul/div doesn't fit in the meta-number".

Alternatively; you could write code that does this:
Code:
    byte1 = 0x1234567890 & 0xFF
    byte2 = (0x1234567890 >> 8) & 0xFF
    byte3 = (0x1234567890 >> 16) & 0xFF
    byte4 = (0x1234567890 >> 24) & 0xFF

Now the compiler can't check; but you also can't assume that the programmer didn't intentionally want "wrapping". It's the same as "uint32_t myNumber = 0x1234567890;" (very likely a bug) and "uint32_t myNumber = 0x1234567890 & 0xFFFFFFFF;" (very likely intentional).

Schol-R-LEA wrote:
In the absence of language support, the client-programmers would be forced to do this if they need very large numeric values - and you can bet that there would be not one but several such implementations, each incompatible with the others, and each with its own bugs and security flaws.


Yes. I was planning support for up to 256-bit integers, and "256-bit significand with 256-bit exponent" floating point; and I believe this is more than enough for all typical code. The only likely case where you'd need something larger (that I can think of) is cryptography, but for that there are often other restrictions (e.g. ensuring it takes the same amount of time regardless of the values used) and even if the compiler supported large enough (e.g. 2048-bit) integers you'd still want to implement your own.
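
As an example of those other restrictions (a deliberately simplified sketch in C): comparing two secrets should take the same amount of time no matter where they differ, which a generic bignum "==" that bails out at the first mismatching word won't give you:
Code:
#include <stddef.h>
#include <stdint.h>

/* Returns 1 if the two buffers are equal; always touches every byte so
 * the running time doesn't leak the position of the first difference. */
int secret_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;

    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];

    return diff == 0;
}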

Note that I am only talking about "fixed size" numbers (and that's all the compiler will support). The OS will also have a "maths service" that does arbitrary precision rational numbers; partly because the overhead of messaging is negligible when you start working with huge numbers, partly because I imagine the maths service will be used for a lot of things including the compiler itself; and partly because for really huge numbers (e.g. multiple GiB numbers) I'd want the ability to use a different computer (or at least a different virtual address space).
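
As a sketch of the shape such a request might take (these field names are placeholders for illustration, not a final design):
Code:
#include <stdint.h>

/* Hypothetical wire format for one request to the maths service. Operands
 * of any size reduce to (length, bytes) pairs in the message body, which
 * is what lets the service live in a different address space or on a
 * different computer entirely. */
typedef struct {
    uint32_t operation;        /* e.g. MATHS_ADD, MATHS_MUL, MATHS_DIV */
    uint32_t flags;            /* rounding mode, precision requests, ... */
    uint64_t operand1_length;  /* bytes of operand 1 following this header */
    uint64_t operand2_length;  /* bytes of operand 2 following operand 1 */
} maths_request;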


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Os and implementing good gpu drivers(nvidia)
PostPosted: Tue Jan 26, 2016 5:43 am 

Joined: Wed Jun 03, 2015 5:03 am
Posts: 397
A few thoughts have come to mind about ranged variables. I see the benefit, but Brendan's stance (imposing very hard restrictions) is too unpleasant for me. I prefer to have only those restrictions that I want, so maybe it would be better for the compiler just to highlight the assignments where overflow is possible and let the developer decide whether that overflow is dangerous or not. It frees the developer from having to be too microscopic when reading the code, but it also leaves room for a smart compiler to insert run-time range checks.

The latter is absolutely unacceptable for people like Brendan, but the truth is that most developers won't use even the overflow-highlighting feature (and will just uninstall an IDE that throws errors for every overflow). It means there should be options, varying from (no restrictions) silently inserting run-time checks, to (hard restrictions) showing an error on every overflow. In between there should be an option of highlighting overflows and letting the developer either mark them as safe or change the code (if, after the compiler's hint, the developer really sees the danger there).
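
For example, the "silently insert run time checks" option could compile a plain "a + b" into something like this (a sketch using the GCC/Clang __builtin_add_overflow intrinsic; the abort is just a placeholder for whatever the language does on overflow):
Code:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* What a compiler might emit for "a + b" when it cannot prove at compile
 * time that the sum stays in range. */
int32_t checked_add(int32_t a, int32_t b)
{
    int32_t sum;

    if (__builtin_add_overflow(a, b, &sum)) {
        fprintf(stderr, "range check failed: %ld + %ld overflows int32\n",
                (long)a, (long)b);
        abort();    /* or throw the language's overflow exception */
    }
    return sum;
}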

And about the range limit (128, 256, however many bits): I think there should be no limit, but the hardware-supported types must still be visible. So there should be a set of "primitive" types (hardware supported) and a number of extensible types with unlimited bit count (like BigInteger and BigDecimal in Java).
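
In C, the nearest thing to that second tier is a library type, for example GMP (just a usage sketch; link with -lgmp):
Code:
#include <gmp.h>
#include <stdio.h>

/* A number far past any hardware register width, handled as an ordinary
 * library type - the C analogue of Java's BigInteger. */
int main(void)
{
    mpz_t x;

    mpz_init_set_ui(x, 2);
    mpz_pow_ui(x, x, 1000);          /* x = 2^1000 */
    gmp_printf("2^1000 = %Zd\n", x);
    mpz_clear(x);
    return 0;
}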

Also, as another range-checking option, there could be built-in types with a required overflow check at compile time. For such types there should be no way to disable the compiler's errors on overflow detection. It's just another piece of flexibility for a language whose options vary from silently inserting a run-time check (for beginners and busy coders) to the most demanding option (for Brendan and the like) of failing compilation on every possible overflow.

@Schol-R-LEA

Honestly, I just haven't got the point you are asking me to discuss. Maybe I've skimmed the discussion too quickly :( But maybe the thoughts on overflow above can help a bit.

_________________
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)

