nullplan wrote:
Different standards for different people, I suppose. The datasheet quite explicitly shows only twenty address lines, so whatever would you expect to happen on access to FFFF:0010?
That's the thing, though: the best answer is "I don't know". You could test and find out that it wraps, and that is evidently what some people did, but it is undefined behaviour in the purest sense of the term. You wouldn't rely on undefined behaviour from the C standard, right? Even if your compiler does something neat when you do, your code isn't correct C any more. Programming to a standard is, IMO, a better way to go than programming to a specific implementation, unless you personally build the platform.
nullplan wrote:
That is a workable definition, but it only works as long as you can imagine a difference between an environment and its implementation. So, less abstractly, it only works if you have some idea of what a PC is, that is independent of the IBM 5150. People didn't have that when they wrote programs for the 8086.
This, however, is a reasonable point: at the time, no one could reasonably have expected the PC to become what it is now.
I suppose that, in hindsight, attaching blame to any party is a fairly useless exercise. Everything they did was perfectly reasonable given what they knew at the time. Maybe, though, it would have been better in the long term for IBM to rip the band-aid off rather than overload the keyboard controller? Either way, it isn't really fair to blame them for something that didn't become an issue until about ten years later.
nullplan wrote:
It isn't defined anywhere. I only have it on word of mouth from the manufacturer, but it isn't actually written down anywhere. It has, however, remained constant for now seven hardware revisions. As for the application being defective: Well, I'm only working around deficiencies in the OS (in this case, the lack of a monotonic timer). So I have a "rock and a hard place" situation: I can either depend on something that is unlikely to change before the code needs an overhaul, anyway (because new hardware), or I can use the realtime clock, and hope to god no-one changes the clock while the timeout is running. What would you do?
I did say that sometimes it is unavoidable to use implementation-specific behaviour. It should be a last resort, though, and even then I would first ask whether the functionality justifies binding the application to one specific hardware implementation. Consider this: the manufacturer finds a cheaper way to build the boards using a different oscillator. They could easily start doing that without telling anyone, and just internally call it revision 1.1. Motherboard manufacturers do this fairly often.
nullplan wrote:
That highly depends on what you consider an environment to be. Just because it might theoretically run my code, doesn't mean it is my environment. Or else we'd have to write all our floppy disk boot sectors such that they are runnable on x86, Amiga, PowerPC... Or at the least we'd have to detect 8086, 286, 386, and 486 before we could use CPUID. No-one does that.
See, in this case I think it is rather clear cut. Before all those other platforms faded into obscurity, floppies were labelled "IBM format", "Amiga format", "PC-DOS", and so on. If you say your OS requires a PC compatible, then yes, you should check the CPU model before using CPUID. But if you say it requires, for example, a 486 or better, then you are coding to that standard, and the capabilities of the 8086 are irrelevant.