Hi,
Schol-R-LEA wrote:
Brendan wrote:
I don't just have ranged types, I only have ranged types.
The problem is that this idea carries with it a dangerous set of assumptions, ones which could in fact be subverted. I am not saying it is a bad idea, per se, but rather that there is a subtle trap to it that you need to recognize.
Oh, and @embryo2 should read this too, as what I am about to say regarding static checks applies at least as much to runtime checks.
In absence of other information, I can make one of two assumptions regarding the absolute maximum size of a ranged type in your type system: either the limit is based on the physical limitations of the underlying hardware types, or there is a facility in the compiler to support bignum ranges (ranges exceeding the maximum range magnitude of the relevant system type size, e.g., an integer with a bit width greater than 64 bits on an x86-64 system).
It's the latter.
Schol-R-LEA wrote:
In the latter case, then the compiler would need to generate code for such support, and recognize the need to insert it when the absolute range magnitude exceeds that of the underlying maximum system type. It would be problematic, especially when it is needed in conjunction with heap memory, but entirely possible nonetheless. There is in principle a question of how to handle the case of a range magnitude or range boundaries which cannot be represented in the maximum system type, but in practice this is unlikely to occur (I know, I know, famous last words, but in the case of the range magnitude at least, the size of the maximum addressing space is likely to be less than the system type's range magnitude, and range boundaries often need not be represented at runtime at all). Since Thelema is a Lisp, which traditionally has included bigint, big fixnum, and big flonum at the language level, this is more or less the solution I am going with myself, but it isn't one most language designers would choose.
My types (and variables) have "properties". For example, if you do "typedef myType = range 1 to 1234567890" then you can do "myType.size" to find out how many bytes it will consume (and use "myType.max" and "myType.min"). This makes allocating heap space relatively simple (no different to "malloc( sizeof(something) );" in C).
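As a rough sketch of how a property like "myType.size" could fall out of the declared range (this is my guess at the rule, not Brendan's actual compiler; it assumes non-negative ranges whose values are stored directly rather than as offsets from the minimum):

```python
def range_props(lo, hi):
    # Hypothetical model of a ranged type's properties: a type declared
    # as "range lo to hi" exposes .min, .max and .size (the bytes needed
    # to hold the largest value). Assumes 0 <= lo <= hi.
    assert 0 <= lo <= hi
    bits = max(1, hi.bit_length())          # bits needed for the largest value
    return {"min": lo, "max": hi, "size": (bits + 7) // 8}

# "typedef myType = range 1 to 1234567890"
myType = range_props(1, 1234567890)
print(myType["size"])   # 1234567890 fits in 31 bits -> 4 bytes
```

With that rule, "malloc( myType.size )" works the same way "malloc( sizeof(something) )" does in C, without the programmer caring how many bits the range actually needs.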
The other problem is temporary storage for intermediate values within expressions, and for (larger than supported) multiplication and division; but for this the compiler can just use stack space.
Note that there was also an upper limit of 256 bits for integers and floating point significands; but this doesn't apply to intermediate values within expressions, so you could still do "x = (x * x) % x.max" where "x" is a 256-bit variable and the intermediate result ("x * x") would be 512 bits.
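A quick sanity check of that "x = (x * x) % x.max" example, using Python's arbitrary-precision ints to stand in for the compiler's stack-allocated intermediates:

```python
# "x" is a 256-bit variable; its square needs up to 512 bits of
# temporary storage, but reducing modulo x.max brings the result
# back into the 256-bit range.
x_max = 2**256 - 1           # x.max for a 256-bit unsigned variable
x = x_max - 1
square = x * x               # intermediate value: needs 512 bits
print(square.bit_length())   # 512
result = square % x_max      # reduced result fits the variable again
assert 0 <= result <= x_max
```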
Schol-R-LEA wrote:
The former case, placing a maximum limit on the range magnitude, sounds simpler, and fits with common existing practices...
The former case breaks any hope of truly portable code.
Schol-R-LEA wrote:
This is not just a technical limitation; it is fundamental, being a direct consequence of mathematical incompleteness. Any language which allows dynamically allocated memory values is vulnerable to this. It could be done regardless of whether the compiler supports big ranges or not, but providing bigint, big fixnum, and bigfloat support systematically makes it much less likely that someone would do so out of poor design (malice is another matter, but there's only so much one can do at that level about code that intentionally subverts security, anyway). The choice then becomes not one of supporting unranged types or not, but of either explicitly disallowing dynamic memory and cutting off a large number of programming possibilities, or acknowledging that you are implicitly allowing unranged types while doing everything to discourage them as a best practice (which in practice means providing language-level support for bignums, IMO).
I don't see why you think there's a problem. If I want (e.g.) a 73-bit integer, why should I care if the CPU is an 8-bit CPU that uses ten 8-bit values, or a 16-bit CPU that uses five 16-bit values, or a 128-bit CPU that uses one 128-bit value? I can still do "malloc( myType.size )" and "myPointer++;" regardless.
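The word counts above are just ceiling division of the type's width by the CPU's word width; a trivial sketch:

```python
def cells(bits, word_bits):
    # ceil(bits / word_bits): how many machine words a "bits"-wide
    # ranged type occupies on a CPU with word_bits-wide words.
    return -(-bits // word_bits)

for word in (8, 16, 128):
    print(word, cells(73, word))   # 10, 5 and 1, matching the examples
```

Note that "myType.size" can then legitimately differ per target (10 bytes on the 8-bit and 16-bit CPUs, 16 bytes on the 128-bit CPU), which is exactly why writing "malloc( myType.size )" rather than a hard-coded byte count keeps the code portable.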
Now what if I do this:
Code:
struct myFreakyBigNum {
    u111 digit1
    range 0 to 12345 digit2
    u77 digit3
}

myFreakyBigNum *addFreaky( myFreakyBigNum *v1, myFreakyBigNum *v2) {
    myFreakyBigNum *result
    u1 oldCarry
    u1 carry

    result = malloc(myFreakyBigNum.size)
    carry = (v1->digit1 + v2->digit1) >> 111
    result->digit1 = (v1->digit1 + v2->digit1) % v1->digit1.max
    oldCarry = carry
    carry = (v1->digit2 + v2->digit2 + oldCarry) & 1
    result->digit2 = (v1->digit2 + v2->digit2 + oldCarry) % v1->digit2.max
    result->digit3 = v1->digit3 + v2->digit3 + carry
    return result;
}
There's still no problem - the compiler checks every assignment and knows everything fits and nothing overflows; except for "result->digit3 = v1->digit3 + v2->digit3 + carry", where the compiler knows it can overflow and generates an error.
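The overflow the compiler catches here can be modelled with simple interval arithmetic; this is my own toy illustration of that style of check, not the actual checker:

```python
# Track each expression's [min, max] interval and reject an assignment
# whose interval exceeds the destination type's range.
U77_MAX = 2**77 - 1

def add(a, b):
    # Interval sum: the extremes of x + y given intervals for x and y.
    return (a[0] + b[0], a[1] + b[1])

def fits(interval, type_max):
    return interval[0] >= 0 and interval[1] <= type_max

digit3 = (0, U77_MAX)    # any u77 value
carry = (0, 1)           # any u1 value
rhs = add(add(digit3, digit3), carry)
print(fits(rhs, U77_MAX))   # False: "v1->digit3 + v2->digit3 + carry" can overflow
```

Because the interval of the right-hand side ((0, 2*U77_MAX + 1)) exceeds the u77 range, the assignment is rejected at compile time rather than wrapping silently at runtime.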
It's not like you can do "x = myLookupTable[myStructure]" or "u9 x = myArrayOfDigits;" to trick the compiler into using your big number (you'd just get a type checking error before the compiler bothers checking ranges).
Cheers,
Brendan