Gotta love how the consensus these days seems to be "because it allows you to resize integers" (though on big endian you can achieve the same by applying an offset to the address as needed)…
eekee wrote:
There's another argument for little-endian, but it doesn't apply to many OSs: In arbitrary-precision calculations, adding or subtracting numbers needs to start with the low digits for the carry to propagate correctly. (Yes, even though you borrow in the other direction when doing subtraction longhand. 2s complement is awesome like that.) I don't know about multiplication and division.
…but I believe this is the correct answer, at least originally. Don't forget that the difference between little and big endian dates back to the earliest CPUs, and was especially critical on the 8-bit ones (e.g. on the 6502,
anything larger than 8 bits was in the realm of arbitrary precision). Even for CPUs that could work on larger data, it opened the possibility of starting the calculation before the whole number had been fetched (though I'm not sure how common that actually was).
Not so important nowadays, but it stuck, and x86 was originally designed back when this kind of stuff was still relevant. There really isn't much of a difference other than big endian seeming easier to read (and only because of how we lay out bytes in writing; shouldn't they really be right to left and bottom to top? We do say that higher memory addresses are at the "top" of memory, after all…)