FelixBoop wrote:
Given all this, do you think it would be worth developing a modern, 16 bit OS? Obviously, it wouldn't be a super computer, but I think that a system that could use modern file formats, and maybe even do basic networking, could be quite useful, not to mention cheap.
If you are talking about stock PC hardware, then this part is incorrect, for a reason most people wouldn't consider: volume. Up until around 2008, each successive generation of PCs sold roughly twice as many units as the previous generation. Furthermore, as time has passed, most of the older hardware has been destroyed, either disposed of as obsolete, or lost to wear and tear, power surges and short circuits, accidental damage, and so forth. The practical result is that it is far easier to find hardware from after, say, 2005, than to find hardware from before 2000. Since 32-bit 80386 and 80486 systems were solidly established as the mainstream by around 1992, an XT-class system is going to be slightly more valuable than any system from between 1995 and 2005 (though given that those are generally going at scrap value today, it hardly matters). You would be hard pressed to find an actual 8088 system today, or even an 80286 system, so there's more or less no gain in availability from writing for them.
Outside of the PC world, the story is somewhat different: 16-bit and even 8-bit microcontrollers are still fairly common, though even those are quickly being phased out in favor of ARM and other 32-bit RISC designs, which are both easier to code for and often use less power thanks to more modern designs. For custom hardware, 16-bit is at least reasonable, though even there a 32-bit processor is likely to make more sense. Unless you intend to build a custom CPU from TTL, I would not recommend using 16-bit hardware.
This leaves the question of whether it is worthwhile from a software perspective. The answer, simply put, is not just no but
HELL NO!, especially if the target is an x86 PC system. The original x86 architecture and instruction set stinks on ice, and while a lot of the flaws were corrected (or at least compensated for) by the 32-bit extensions, it is still a truly wretched design that even Intel wants to see the back of - it only persists because no one wants to re-write every single piece of Windows software people want to keep using. Most of the other microprocessors prior to the 68K were pretty bad, but none were quite as bad as the one that ended up on top. Since most processors designed after the 68K were 32-bit RISCs (the 68K itself was a good but baroque CISC design in the vein of the VAX minicomputer) designed to run compiled code well, they are all much cleaner, less ugly, and ironically enough, easier to write assembly code for (though writing efficient assembly code for them by hand is still a damnable pain in the rear).
[EDIT: I just added my Historical Notes on CISC and RISC essay to the wiki, for anyone who is interested.]
Even if this weren't the case, the benefits of having a linear address space of 4GiB or more (with memory protection and demand paging to manage it) are more than reason enough to target a 32-bit or later system.
So, if you mean to have non-stock hardware, from a coding perspective, you would in most cases be better off with an ARM- or MIPS-based SBC like the Raspberry Pi, the Beagleboard, or the MIPS Creator, if you can. Keep in mind, however, that used stock hardware will be much cheaper (and can often be gotten for free) for most desktop purposes.