That is essentially what the function I posted does. The part that converts a 'single digit' to a 'single character' is done by
[tt]xlat_table[[y]][/tt]
As was pointed out by CodeSlasher, this can also be done by an offset from the ASCII value of '0' (48, or 0x30), which would be,
[tt]y + 0x30[/tt] The table lookup version is more general (it will work with other character encoding formats than ASCII with only some minor changes), but the addition version is faster and smaller.
These approaches can be used just like that, but they only work correctly if you know ahead of time that the value comes out to a one decimal digit (that is to say, it is a value between 0 and 9, inclusive).
The problem is that binary (base 2) values don't map evenly onto decimal (base 10) digits. Three bits represent the values 0 through 7, which is too few to cover a full decimal digit, while four bits represent 0 through 15, which is too many. If you try to convert a four-bit value and it is 0x0A or higher, it requires two decimal digits, not one. This obviously won't work with the conversion schemes above.
The algorithm given earlier is designed to isolate the low digits of the number one at a time, so that each can be converted correctly. Since it produces the low digits first, you then need to reorder the result so that the last digit found becomes the first one in the string. In languages with string libraries that handle concatenation and automatic resizing, this is most easily done by recursing through the value, so that the result of each pass is built onto the result of the pass that follows it; otherwise, you have to do some array juggling like I did in the iterative version.
Urk. I hope that that explanation won't need another explanation, but somehow I think it might... perhaps I'll need to find a clearer way to describe this.