running 32-bit code in LM64

Octocontrabass
Member
Posts: 5492
Joined: Mon Mar 25, 2013 7:01 pm

Re: running 32-bit code in LM64

Post by Octocontrabass »

kerravon wrote:I want a simple 64-bit processor.
Then you don't want x86.
kerravon wrote:But I do expect pure non-segmented 16-bit code (ie tiny memory model) code to run in PM32.
I'm pretty sure running 16-bit code in 32-bit mode is impossible. The instruction encoding is too different.
kerravon wrote:MIPS didn't. The transition from S360/67 to z/Arch didn't.
Why would you expect the 64-bit extension to 32-bit x86 to look like those architectures and not like the 32-bit extension to 16-bit x86?
kerravon wrote:Basically this is relying on hardware engineers to create a special mode because the software can't cope with registers and addressing increasing in size. So basically two CPUs in one. Because no-one could figure out how to do it with just one.
Intel already tried that. Have you ever noticed that every 32-bit instruction is valid in 16-bit mode? But no one writes 32-bit programs for 16-bit mode; 32-bit programs written for 32-bit mode are smaller and faster. AMD chose to avoid getting stuck with yet another x86 feature no one uses.
kerravon wrote:Ok, what about reserving just two opcodes? One says "what follows is a new 32-bit instruction" and the other says "what follows is a new 64-bit instruction"? Or, let's say x'40' was chosen as the 32-bit opcode. Two x'40' in a row says 64-bit, 3 says 128-bit etc.
In 16-bit mode, opcode 0x66 is a prefix that means the instruction has a 32-bit operand, and opcode 0x67 is a prefix that means the instruction has a 32-bit address.
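As a toy illustration of that point (not a full x86 decoder), the same opcode/ModRM pair decodes to a different register width depending on the mode's default operand size and the presence of the 0x66 prefix. The byte pair `89 C8` is `mov ax, cx` in 16-bit mode; prefixing it with `66` gives `mov eax, ecx`, and in 32-bit mode the meaning of the prefix flips:

```python
# Toy model of the 0x66 operand-size prefix for a simple register-to-register
# MOV. In 16-bit mode, 89 C8 is "mov ax, cx" and 66 89 C8 is "mov eax, ecx";
# in 32-bit mode the prefix selects the 16-bit form instead.

def operand_size(code, default_bits=16):
    """Return the operand size in bits for a MOV r,r instruction encoding."""
    has_prefix = code[0] == 0x66
    if default_bits == 16:
        return 32 if has_prefix else 16
    # 32-bit mode: the prefix selects the 16-bit operand form
    return 16 if has_prefix else 32

assert operand_size(bytes([0x89, 0xC8]), 16) == 16        # mov ax, cx
assert operand_size(bytes([0x66, 0x89, 0xC8]), 16) == 32  # mov eax, ecx
assert operand_size(bytes([0x89, 0xC8]), 32) == 32        # mov eax, ecx
assert operand_size(bytes([0x66, 0x89, 0xC8]), 32) == 16  # mov ax, cx
```

This is also why 32-bit code written for 16-bit mode is larger: every 32-bit operation pays for the extra prefix byte.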
kerravon wrote:And the DPMI call is supported even on an 8086, so that's not an issue, right?
No, DPMI requires a 286 at minimum.
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

Octocontrabass wrote:
kerravon wrote:MIPS didn't. The transition from S360/67 to z/Arch didn't.
Why would you expect the 64-bit extension to 32-bit x86 to look like those architectures and not like the 32-bit extension to 16-bit x86?
I (tentatively) expect binary mode upward compatibility to be a fundamental part of CPU design. At least by default. If there is a good reason to break that (seemingly logical thing to do), so be it.

I also expect something like the A20 line for every jump. But it would be an A8, A16, A32 and A64 line. So that you don't need to enable virtual memory to cope with negative indexes. And they are all enabled by default and the OS is responsible for doing a call to disable what they need for further processing. There is a similar issue for floating point. It needs an "fninit" instruction at some point.
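The hypothetical A8/A16/A32/A64 lines proposed above would generalize the real A20 line: with line A(n) disabled, address bits at and above bit n wrap to zero, so a negative index added to a small pointer wraps around rather than landing in high memory. A minimal sketch of that masking behaviour (the A8/A16/A32/A64 lines are the poster's hypothetical, not real hardware):

```python
# Sketch of the hypothetical A8/A16/A32/A64 "address line" masking,
# generalizing the real A20 line: with lines >= top_line disabled,
# the address wraps modulo 2**top_line.

def wrap(addr, top_line):
    """Mask an address as if address lines >= top_line were disabled."""
    return addr & ((1 << top_line) - 1)

# Classic A20 behaviour: 0xFFFF0 + 0x20 wraps back into the first 1 MiB.
assert wrap(0xFFFF0 + 0x20, 20) == 0x10
# Hypothetical A16 line: pointer 0x0010 plus index -0x20 (0xFFE0 as
# unsigned 16-bit) wraps within 64 KiB instead of escaping upward.
assert wrap(0x0010 + 0xFFE0, 16) == 0xFFF0
```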
kerravon wrote:Basically this is relying on hardware engineers to create a special mode because the software can't cope with registers and addressing increasing in size. So basically two CPUs in one. Because no-one could figure out how to do it with just one.
Intel already tried that. Have you ever noticed that every 32-bit instruction is valid in 16-bit mode? But no one writes 32-bit programs for 16-bit mode; 32-bit programs written for 32-bit mode are smaller and faster. AMD chose to avoid getting stuck with yet another x86 feature no one uses.
I think I didn't word this properly.

I assume you are saying that:

mov eax, ecx

is a valid instruction in 8086, ie:

mov ax, cx

But what I am referring to is that I expect:

mov a,c
mov ax,cx
mov eax,ecx

to all exist, and the original opcodes used to still be valid.
kerravon wrote:Ok, what about reserving just two opcodes? One says "what follows is a new 32-bit instruction" and the other says "what follows is a new 64-bit instruction"? Or, let's say x'40' was chosen as the 32-bit opcode. Two x'40' in a row says 64-bit, 3 says 128-bit etc.
In 16-bit mode, opcode 0x66 is a prefix that means the instruction has a 32-bit operand, and opcode 0x67 is a prefix that means the instruction has a 32-bit address.
Actually in hindsight this might be good enough. Run everything in PM16 - never create PM32 - and simply require those prefixes plastered everywhere. Ditto don't create LM64 - just put a prefix on everything that needs to be 64-bit. In fact, even 16-bit code would need a prefix to say 16-bit, because we start with the 8080. And so there is really only PM8. Although it's not really PM8 - it's just flat memory like the mainframe.

And note that I am ignoring segment registers. Those are special, and I don't expect them to be part of the upward transition of binary compatibility from 8-bit to 64-bit. If you wish to use segment registers you will need a separate compilation.

Although I need it to be "forced 32-bit", not "the other one". Because the argument is going to be that it is faster to strip off all those 32-bit prefixes and just have an "implied 32-bit" (ie PM32). So yes, you can have PM32 after all, but programs that were written to run under either PM16 or PM32 can retain their prefixes and still work. So you cover the people who wish to strip off prefixes and hope that the source code still exists when LM64 is invented, and you also cover people who want to produce a 32-bit binary, in say 1977, even though they can't even run it until 1986 when the 80386 comes out. And it will survive the transition to 64-bit (which is ultimately what I'm after).

Sorry for the delayed response - needed to pause for unrelated reasons.
Octocontrabass
Member
Posts: 5492
Joined: Mon Mar 25, 2013 7:01 pm

Re: running 32-bit code in LM64

Post by Octocontrabass »

kerravon wrote:I (tentatively) expect binary mode upward compatibility to be a fundamental part of CPU design.
It is. Compatibility mode is a fundamental part of the x86 architecture. Any applications written to run in protected mode will run unmodified in compatibility mode.
kerravon wrote:I also expect something like the A20 line for every jump. But it would be an A8, A16, A32 and A64 line. So that you don't need to enable virtual memory to cope with negative indexes. And they are all enabled by default and the OS is responsible for doing a call to disable what they need for further processing.
Again, compatibility mode.
kerravon wrote:There is a similar issue for floating point. It needs an "fninit" instruction at some point.
I don't see how that's a problem.
kerravon wrote:I assume you are saying that:

mov eax, ecx

is a valid instruction in 8086, ie:

mov ax, cx
No, I mean there is at least one sequence of bytes that a 32-bit or 64-bit CPU running in 16-bit mode will interpret as "mov eax, ecx".
kerravon wrote:Actually in hindsight this might be good enough. Run everything in PM16 - never create PM32 - and simply require those prefixes plastered everywhere.
You can basically already do that (although there's no way to extend the stack pointer or instruction pointer to 32 bits in 16-bit mode, so it's not exactly the same). GCC and Clang have the "-m16" option if you want to compile code to run this way.
kerravon wrote:And it will survive the transition to 64-bit (which is ultimately what I'm after).
A modern ARM CPU can emulate an entire i386 PC in Javascript in a web browser. Binary compatibility really isn't important for software written in 1977.
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

Octocontrabass wrote:
kerravon wrote:I (tentatively) expect binary mode upward compatibility to be a fundamental part of CPU design.
It is. Compatibility mode is a fundamental part of the x86 architecture. Any applications written to run in protected mode will run unmodified in compatibility mode.
That's not my expectation. I look upon mode-switching as a fundamental failure. And because of that fundamental failure, I woke up one day to find that a perfectly fine tiny-memory-model 16-bit program that had been working fine for 20 years suddenly started saying "sorry, this executable doesn't work anymore, ask the vendor for a new version" or somesuch. And yes, there was indeed an OS upgrade the day before. XP or something it was called. Can't remember. I assumed that a major company like Microsoft combined with a major company like Intel would be able to arrange for a tiny memory model program to run.

Perhaps if the CM switch was able to be done in user-mode (as is the case with the mainframe AMODEs), that would be ok. Then you could switch to 32-bit mode, run some 32-bit-dense instructions, then switch to 16-bit mode and run some 16-bit dense instructions. And at the beginning of your program this would be the first instruction you execute (or else have a flag in the executable so that the OS knows what mode to enter in - which is what the mainframe does).
kerravon wrote:And it will survive the transition to 64-bit (which is ultimately what I'm after).
A modern ARM CPU can emulate an entire i386 PC in Javascript in a web browser. Binary compatibility really isn't important for software written in 1977.
I consider even considering resorting to emulation to be a fundamental failure too.

Regardless, I have another (20/20 hindsight) proposal.

Even the 8080 instructions should have had a prefix to say "force 8-bit interpretation of the next instruction".

So at the time you write the software, the opcodes are all in place. If you wish to create a special build at that time, with all the prefixes stripped out, such that it will only work on an 8080, not an 8086 or 80386 or x64, that's fine. And when the 8086 comes out you'll hopefully still have the source code, so you can recompile if you want, perhaps using newly-created instructions. Or you can use the other binary, unchanged, with the prefixes still in place - especially if the source code is now lost.

Ditto for any other situation, e.g. a 32-bit-minimum program. Two versions: one tied to the 80386, the other able to survive LM64 (but still run fine on the 80386).
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

kerravon wrote:Perhaps if the CM switch was able to be done in user-mode (as is the case with the mainframe AMODEs), that would be ok. Then you could switch to 32-bit mode, run some 32-bit-dense instructions, then switch to 16-bit mode and run some 16-bit dense instructions. And at the beginning of your program this would be the first instruction you execute (or else have a flag in the executable so that the OS knows what mode to enter in - which is what the mainframe does).
Combine this (user-mode CM) with overrides for when you have an exceptional instruction? And make it a fundamental part of chip design circa 1950 or 1960.
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

kerravon wrote:
kerravon wrote:Perhaps if the CM switch was able to be done in user-mode (as is the case with the mainframe AMODEs), that would be ok. Then you could switch to 32-bit mode, run some 32-bit-dense instructions, then switch to 16-bit mode and run some 16-bit dense instructions. And at the beginning of your program this would be the first instruction you execute (or else have a flag in the executable so that the OS knows what mode to enter in - which is what the mainframe does).
Combine this (user-mode CM) with overrides for when you have an exceptional instruction? And make it a fundamental part of chip design circa 1950 or 1960.
And maybe it's not too late? An OS (e.g. PDOS) can provide an interrupt to switch modes. No particular reason why it needs to be a CPU instruction (*).

So what's missing is CM8 to get back to the 8080.

And the A8/16/32/64 line disabling is covered by CM.

(*) Except speed - but a future processor can provide a suitable instruction for the interrupt to execute.
Octocontrabass
Member
Posts: 5492
Joined: Mon Mar 25, 2013 7:01 pm

Re: running 32-bit code in LM64

Post by Octocontrabass »

kerravon wrote:I look upon mode-switching as a fundamental failure.
You might be wasting your time with x86.
kerravon wrote:I assumed that a major company like Microsoft combined with a major company like Intel would be able to organize a tiny mode program to run.
They can. The reason they don't is money. Paying engineers to keep ancient software running is more expensive than dropping support and losing you as a customer.
kerravon wrote:or else have a flag in the executable so that the OS knows what mode to enter in
That's how they show you an error message instead of trying to run the program.
kerravon wrote:I consider even considering resorting to emulation to be a fundamental failure too.
Why?
rdos
Member
Posts: 3265
Joined: Wed Oct 01, 2008 1:55 pm

Re: running 32-bit code in LM64

Post by rdos »

kerravon wrote:
Octocontrabass wrote:
kerravon wrote:I (tentatively) expect binary mode upward compatibility to be a fundamental part of CPU design.
It is. Compatibility mode is a fundamental part of the x86 architecture. Any applications written to run in protected mode will run unmodified in compatibility mode.
That's not my expectation. I look upon mode-switching as a fundamental failure. And because of that fundamental failure, I woke up one day to find that a perfectly fine tiny-memory-model 16-bit program that had been working fine for 20 years suddenly started saying "sorry, this executable doesn't work anymore, ask the vendor for a new version" or somesuch. And yes, there was indeed an OS upgrade the day before. XP or something it was called. Can't remember. I assumed that a major company like Microsoft combined with a major company like Intel would be able to arrange for a tiny memory model program to run.
Microsoft never cared about anything other than their own profit, so it's no wonder that they don't want to make an OS that can run a lot of old software without problems. They want you to buy new software from them. :-)

Linux is not much better either. There are 32-bit and 64-bit versions, and you cannot combine them. Some software can still run in emulators, but that's not very useful.

It actually is perfectly possible to build a multimode OS that can run everything from old MSDOS applications to 64-bit long mode. The trick is to switch processor mode. Either between long mode and protected mode (using V86 mode for MSDOS), or by switching all the way back to real mode. Of course, this requires a kernel that can operate in all these modes, something that certainly is possible. However, this doesn't fit into the old & outdated concepts that Windows and Linux are built around.
Octocontrabass
Member
Posts: 5492
Joined: Mon Mar 25, 2013 7:01 pm

Re: running 32-bit code in LM64

Post by Octocontrabass »

rdos wrote:There are 32-bit and 64-bit versions, and you cannot combine them.
What do you mean? I've had no trouble running 32-bit software on 64-bit Linux.
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

Octocontrabass wrote:
kerravon wrote:I look upon mode-switching as a fundamental failure.
You might be wasting your time with x86.
It's not that simple. I'm interested in what specifically the x86 got wrong, and why, and whether it is possible to recover from that situation, and when. So I need to know a lot of background details.
kerravon wrote:I assumed that a major company like Microsoft combined with a major company like Intel would be able to organize a tiny mode program to run.
They can. The reason they don't is money. Paying engineers to keep ancient software running is more expensive than dropping support and losing you as a customer.
Yeah, and this probably exactly answers the question when people ask me why I wrote PDOS. I don't want to be at the mercy of someone who has very different goals.
kerravon wrote:I consider even considering resorting to emulation to be a fundamental failure too.
Why?
Complexity and overhead that shouldn't be necessary. Or rather - someone can explain to me why it is necessary, without "bigger profits for Intel/Microsoft" as part of the explanation, as that has zero value to me.

Anyway, I have made progress. Which is one of the reasons for the long delay - again.

I have managed to find out some of the internals of AHINCR. It's actually an offset. Addresses get resolved at runtime and this is considered to be the offset portion of a 16:16 address.

With an NE format executable, there is a class of "relocation corrections" that are "offset only", and this is where that is put.

And I have demonstrated (crudely) on PDOS/86 that I can load an NE executable that contains that relocation type.

And I just found out today that MSDOS 4.0 supported loading NE executables too. I think that is RM16 and 640k still.

So. I am thinking of switching to NE executables for PDOS/86 and making it sort of a mini-clone of my understanding of MSDOS 4.0.

There's no particular reason why I need to stick with MZ. Although I can probably make MZ work for my purposes too. I would need to make sure that no segment crossed a 64k boundary (and pad with NULs if necessary), and have the AHINCR offsets all populated suitable for the 8086 (ie 0x1000), and then have an MZ extension to zap all those locations, and AHSHIFT, when the module is loaded on a non-8086 OS (ie PDOS/286 mainly).
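The arithmetic behind those AHINCR/AHSHIFT values can be sketched as follows (the `huge_add` helper is hypothetical, mirroring what a huge-memory-model runtime does, not any specific compiler's code). On the 8086, AHINCR is 0x1000 because adding 0x1000 to a segment advances the linear address by exactly 64 KiB (0x1000 << 4 == 0x10000); in 16-bit protected mode, selectors for consecutive 64 KiB windows typically differ by 8 instead:

```python
# Sketch of huge-pointer arithmetic using AHINCR: advance a seg:off pair
# by an arbitrary byte count, bumping the segment (or selector) once per
# 64 KiB boundary crossed and keeping the offset within 16 bits.

def huge_add(seg, off, delta, ahincr=0x1000):
    """Advance a seg:off huge pointer by delta bytes, normalizing the offset."""
    total = off + delta
    seg += (total >> 16) * ahincr  # one AHINCR per 64 KiB boundary crossed
    return seg, total & 0xFFFF

# Real-mode 8086: crossing a 64 KiB boundary bumps the segment by 0x1000.
assert huge_add(0x2000, 0xFFF0, 0x20) == (0x3000, 0x0010)
# 16-bit protected mode: the same crossing bumps the selector by 8.
assert huge_add(0x0087, 0xFFF0, 0x20, ahincr=8) == (0x008F, 0x0010)
```

This is why the loader has to patch the AHINCR value into the executable: the same binary arithmetic works in both modes, but only if the constant matches the mode it is running in.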

That way the MZ executables would still run on standard MSDOS 6.22.

But I don't particularly need that.

And anyway I'm planning on changing the interface away from INT 21H and into the PDOS-generic interface instead (where the app receives the kernel's C library plus additional kernel functions like mkdir()). So I don't really need to maintain compatibility with anything except the hardware.

I don't have a particularly sensible goal anyway. I'm sort of belatedly trying to compete with OS/2 1.0. But keeping the MSDOS API as much as possible. But with a cleaner C interface like OS/2 provides.
Octocontrabass
Member
Posts: 5492
Joined: Mon Mar 25, 2013 7:01 pm

Re: running 32-bit code in LM64

Post by Octocontrabass »

kerravon wrote:And I just found out today that MSDOS 4.0 supported loading NE executables too.
Only multitasking MS-DOS 4.0 could do that, and multitasking MS-DOS is not the MS-DOS you're familiar with.
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

Octocontrabass wrote:
kerravon wrote:And I just found out today that MSDOS 4.0 supported loading NE executables too.
Only multitasking MS-DOS 4.0 could do that, and multitasking MS-DOS is not the MS-DOS you're familiar with.
It's true that I've never used it. But for what reason is that an issue? It presumably still supports all the existing (MSDOS 3.x) API. And now those instructions can be in an NE module instead of MZ. That prior art may be useful.

Although it's largely dependent on having a sensible goal. :-)

Basically my vague goal is to compile 16-bit programs following sensible rules that someone belatedly wrote, even though I've only got access to an XT, and then magically have access to 16 MiB when either the Turbo 186 or the 80286 comes out, and magically much more than that when the 80386 comes out, with zero change to my application binary (e.g. micro-emacs, which only allowed me to edit small files on an XT) - which is what I fully expect.

But I don't know whether I should persevere with MZ, including developing changes, or switch to NE where I have existing tool support. I should probably do both, and probably start with NE.

Microsoft C 6.0 is still stuck in Philippines Customs at the moment.
Octocontrabass
Member
Posts: 5492
Joined: Mon Mar 25, 2013 7:01 pm

Re: running 32-bit code in LM64

Post by Octocontrabass »

kerravon wrote:But for what reason is that an issue?
I don't know! It's not the MS-DOS I'm familiar with either.
kerravon wrote:Microsoft C 6.0 is still stuck in Philippines Customs at the moment.
I think downloading a copy would be faster...
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

Octocontrabass wrote:
kerravon wrote:But for what reason is that an issue?
I don't know! It's not the MS-DOS I'm familiar with either.
Ok, so no definite barrier then.
kerravon wrote:Microsoft C 6.0 is still stuck in Philippines Customs at the moment.
I think downloading a copy would be faster...
Given the (at least potentially) sensitive nature of what I'm up to, I don't want to be seen using pirated software. So I have spent something like US$15,000 on various software and hardware associated with PDOS development. I don't do things like overseas holidays. And for most of my life I didn't even have a car. So this is something I consider better use of my money. It's a relatively small price compared to the man-years of effort spent on PDOS.
kerravon
Member
Posts: 277
Joined: Fri Nov 17, 2006 5:26 am

Re: running 32-bit code in LM64

Post by kerravon »

Octocontrabass wrote: Microsoft never added a huge pointer API because they ended up supporting DPMI instead, and DPMI has a function you can call to get AHINCR. A huge pointer function call only makes sense if you're writing 8086 DOS software with no knowledge of the future 286.
I've made a lot of progress (PDPCLIB now supports OS/2 1.x with Microsoft C 6.0 and Watcom, and I have a lot more plans), but now I'm back to this. Here is the DPMI function to get AHINCR:

https://www.ctyme.com/intr/rb-5805.htm

I didn't see one to get AHSHIFT.

As you noted, 8086 doesn't support DPMI, but that's fine - I'm happy for this function call to fail and then I use a hardcoded default (1000H).

However, for the Turbo 186 (where the segment is shifted 8 bits instead of 4), I need this (or some other call) to return a different value.
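The relationship between the hardware's segment shift and the AHINCR/AHSHIFT values a kernel would have to report can be sketched like this (the `huge_params` helper is an assumption about how a PDOS-style call could compute them, not documented DPMI behaviour). A segment register shifted left by N bits covers 64 KiB every (0x10000 >> N) segment values:

```python
# Sketch: AHINCR/AHSHIFT as a function of the hardware's segment shift.
# A segment shifted left by N bits means 64 KiB of linear address space
# corresponds to a segment delta of 0x10000 >> N.

def huge_params(segment_shift):
    """Return (AHINCR, AHSHIFT) for a given hardware segment shift."""
    ahincr = 0x10000 >> segment_shift
    ahshift = 16 - segment_shift  # so that ahincr == 1 << ahshift
    return ahincr, ahshift

assert huge_params(4) == (0x1000, 12)  # 8086: segment << 4
assert huge_params(8) == (0x0100, 8)   # Turbo 186: segment << 8
```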

The thing is - I am expecting to stay in RM16, not switch to PM16.

So what I need is for this DPMI function to return an AHINCR that is appropriate to RM16, not a future switch to PM16.

But is that the totally wrong thing for DPMI to be used for? It doesn't give you values for the current (RM16) situation, it only ever gives values for a future shift to PM16?

PDOS/86 or PDOS-generic-16 or whatever will basically just have its own DPMI function (or a new INT 21H function - whatever is appropriate) to let the app know where it *currently* stands.

If there is prior art (ie DPMI) to explain the *current* situation, I'm happy to reproduce that (small) functionality in PDOS-16-whatever.

But if that's not what it does, it is better for me to use a different interrupt that is under my control (albeit non-standard).

Any suggestions?

Thanks. Paul.