OSDev.org

The Place to Start for Operating System Developers

All times are UTC - 6 hours




 Post subject: Re: I want to know stories about 80286
PostPosted: Tue May 02, 2017 7:19 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
tom9876543 wrote:
Intel was way behind thanks to its obsession with the failing IAPX432.


In a sense, yes, but you have to remember that Intel wasn't aiming for - and didn't even want - the home computer market. They were expecting to have to compete with the LispMs, the Xerox Star, and the MicroVAX for the high-end workstation market - and all of them had extremely heavyweight CPU designs, with the MicroVAX being the least elaborate among them (which is saying a lot, considering that the VAX ISA was one of the most complex of any production machine). They - and nearly everyone else - thought that both tagged architecture and capability addressing would be absolutely essential technologies for workstations by 1985, and the idea that something like the 68K would be powerful enough for a workstation system seemed ludicrous.

Then things started to change drastically. The 432 ran into problems with its complex memory subsystem - the assumption that hardware-based capabilities would somehow magically become more efficient in a silicon implementation than they had been in TTL proved to be wishful thinking - and the complexity of the overall design was proving to be too much. The chip took several times more transistors than expected, and at then-current transistor densities, fitting the whole thing onto one die made the physical size of the chip impractically large, causing the failure rate to skyrocket.

Meanwhile, LMI and Symbolics were both so enamored of their own design skills that they kept refining their designs rather than developing a practical production line (a problem shared by several companies at the time, with Foonly being another stark example). They were essentially selling wire-wrapped prototypes built in TTL logic as finished products, and made few moves either to formalize the designs for an assembly line or to re-implement them as single-chip CPUs (which probably wouldn't have worked, for the same reasons the 432 didn't), though TI tried the latter later on.

The Xerox Star (and the Xerox Dorado, their attempt to enter the LispM market) was also an expensive TTL system that was never really re-designed for mass production. In any case, upper management was never really enthusiastic about the PARC ideas - they correctly saw them as a threat to the company's core business, and assumed that if they buried them, no one else would rediscover the ideas, unaware of the amount of press coverage they had already gotten. Soon afterwards, Xerox made a deal with Apple to let them use a version of the PARC GUI, with much the same assumption Intel had - that home computers were a dead end anyway. They figured it would shut those damn hippies in Palo Alto up by giving them a practical lesson in what would and wouldn't work, because the Lisa was obviously underpowered as a workstation and you'd have to be crazy to think people really wanted this 'Macintosh' thing in their homes...

Then, just as it looked like all these things might work out after all, a bombshell hit: the results of the Berkeley RISC and Stanford MIPS projects were published. No one saw this coming, and there was a huge fight in both academia and the industry over what this really meant.

And in the background, where no one was watching, companies like SUN (originally a specialty company formed to support the Stanford University Network for the CS department) and Apollo started releasing workstations in the $10K to $30K range - well below the price of the LispMs and the Star - that were just barely powerful enough to run a version of System V Unix. They lacked all the fancy design and optimizations of the existing workstations, but the fact that they were fabricated on a single silicon chip rather than built from TTL meant that the difference in performance wasn't as huge as conventional wisdom suggested, and the gap was rapidly closing. Suddenly, everyone who had been buying the high-end workstations decided that half a loaf today was better than a whole loaf tomorrow, and flocked to these machines, which could do most of what they wanted at a lower price.

Intel was caught at a crossroads in the design of computers, and bet on the wrong path - or at least, a path that wasn't feasible yet. It is possible that capability addressing and tagged memory could be made to work now, given Moore's Law and other improvements in die processes, but there hasn't been much interest in it until recently.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: I want to know stories about 80286
PostPosted: Tue May 02, 2017 3:01 pm 

Joined: Wed Jul 18, 2007 5:51 am
Posts: 170
Thank you Schol-R-LEA for the interesting history of what happened.

Intel failed to predict that the IBM PC would be a huge success, and so had the wrong strategy.
This caused Intel to develop the brain-dead 286 processor.

Intel blindly assumed that mainframe-like CPUs would be where it could make the most money.
If Intel had been smart, it would have predicted the growth of the $10K workstation market (SUN etc.) and designed a CPU for Unix workstations (with 32-bit flat addressing and paged memory).


 Post subject: Re: I want to know stories about 80286
PostPosted: Wed May 03, 2017 12:14 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
tom9876543 wrote:
If Intel had been smart, it would have predicted the growth of the $10K workstation market (SUN etc.) and designed a CPU for Unix workstations (with 32-bit flat addressing and paged memory).


They did: the i860. It flopped badly, because by the time it came out in 1989, the market was already saturated with RISC processors for that niche, and the situations which led to RISC being advantageous were fading in importance. Interestingly, the earlier i960, which was also a RISC but aimed at embedded systems, was considerably more successful, but the designs of the two were radically different (no one seems to know what happened with the naming there, but presumably the 860 project was started first and then was delayed past the roll-out of the 960).

The RISC vs CISC issue is often badly misunderstood. The main reasons RISC is often seen as an improvement over CISC are:

  • Eliminating rarely used instructions (which were originally intended to make assembly programming easier, but which complicate compiler optimization) reduces design cost, saves silicon real estate (the chip can use fewer transistors, or spend the same number on things like caches), and reduces or eliminates the need for microcode in instruction decoding and execution.
  • Using only simple instructions (a separate issue from the one above) makes for faster instruction throughput (since everything fits into one cycle without making the cycle excessively long), reduces energy consumption, and reduces design and production overhead.
  • Fixed-size instruction formats mean that the instruction decoder always knows exactly how large the instruction it needs to pre-fetch is (at the cost of less efficient memory usage for common instructions, hence the later retrofitting of the 16-bit Thumb instruction set onto the ARM).
  • Elimination of multiple addressing modes for data operations means that each instruction always accesses data in the same way, with most instructions only operating on registers (with value offsets - often used for indexing - being handled via constant fields in the instructions themselves).
  • Very large register files, with all or almost all registers being general-purpose, simplify compiler implementation and optimization. The instruction pointer in a RISC is usually special-purpose, though exceptions exist even for that. Things like the stack and frame pointers are usually set by convention and assembly-language naming rather than hardwired.
  • The use of register-based conventions (or register windows) for handling procedure call returns, and the passing of procedure arguments and return values, allows procedures to use the in-memory stack only as needed, rather than requiring it in every procedure call. This makes optimizing calls easier for a compiler writer.
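
To make the fixed-format point concrete, here's a minimal C sketch of a decoder for a made-up 32-bit fixed-width encoding (the field layout here is invented purely for illustration and doesn't match any real ISA):

```c
#include <stdint.h>

/* Hypothetical fixed 32-bit encoding (invented for illustration):
 *   bits 31-26: opcode   bits 25-21: rd   bits 20-16: rs1
 *   bits 15-0:  16-bit signed immediate
 */
typedef struct {
    uint32_t opcode, rd, rs1;
    int32_t  imm;
} insn_t;

insn_t decode(uint32_t word)
{
    insn_t i;
    i.opcode = (word >> 26) & 0x3F;       /* 6-bit opcode              */
    i.rd     = (word >> 21) & 0x1F;       /* 5-bit destination register */
    i.rs1    = (word >> 16) & 0x1F;       /* 5-bit source register      */
    i.imm    = (int16_t)(word & 0xFFFF);  /* sign-extended immediate    */
    return i;
}

/* With fixed-size instructions the next fetch address is always pc + 4 -
 * no need to decode the current instruction just to find its length.    */
uint32_t next_pc(uint32_t pc)
{
    return pc + 4;
}
```

Compare that to a variable-length CISC encoding, where the fetch unit can't even find the start of the next instruction until it has at least partially decoded the current one.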

In addition, many CISCy designs - including the 8086 - were just poorer designs, period. They often had cripplingly small register files (or were even accumulator-based systems, like the 6502, with only a single general register that was the implicit argument of most instructions), were often filled with oddball exceptions and special cases, and had various kludges in how memory was handled (not just segmentation; for example, the 'Page Zero' memory and fixed 'Page 1' stack on the 6502). This was by no means universal, however; the VAX and the Motorola 68000, two designs which epitomized CISC, had very cleanly designed ISAs with consistent addressing formats and large (for the time) register sets.

At first, RISCs also had the advantage that even a heavily optimized design could fit onto a single-chip CPU, whereas many optimizations used in CISC systems - such as out-of-order execution and the use of large, multi-level associative memory caches - wouldn't, given the transistor densities of the 1980s and early to mid 1990s. While the other advantages were independent of the CPU implementation, this was an issue with any CPU designed then, and made RISC a very appealing option in that period.

However, as Moore's Law ground on, the advantages in silicon real estate melted away, as existing CISC optimizations were mated to ones specific to IC designs. Modern superscalar CISC designs get most of the same advantages as RISCs, but do so by throwing hardware and electricity at the problem (by the use of complex pipelining, instruction re-ordering, various forms of instruction merging and splitting, and especially, caching).

Finally, a single-chip design will always have an advantage over a multi-chip or TTL one simply due to the smaller speed-of-light delay. However, early on this had to be balanced against the ability of larger systems to parallelize their instruction pipelines. Also, a multi-chip system using dozens or even hundreds of smaller, special-purpose CPUs such as DSPs - or even general-purpose ones, as in the Connection Machine and the Hypercube - could take this even further, as seen in pretty much every supercomputer today, which might combine a thousand or more Xeon processors to crunch a single computation.

The single-chip vs. TTL issue ceased to be a factor in the early 1990s as it became possible to put increasingly complex architectures onto a single chip - today, no one except retrocomputing hobbyists builds individual CPUs out of multiple chips.

Today, RISC vs CISC is less about raw computing power (though RISC advocates, including myself, often argue that it has the potential for more if the same amount of design effort were applied to it) than about the costs of new chip development and design improvements (Intel pours billions into each new chip generation), retail cost per unit, and the drastic differences in energy consumption (dynamic out-of-order execution in particular is immensely energy-intensive, and accounts for around 90% of the wattage needed by current x86 chips).

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: I want to know stories about 80286
PostPosted: Fri May 05, 2017 2:27 am 

Joined: Wed Jul 18, 2007 5:51 am
Posts: 170
Schol-R-LEA wrote:
tom9876543 wrote:
If Intel had been smart, it would have predicted the growth of the $10K workstation market (SUN etc.) and designed a CPU for Unix workstations (with 32-bit flat addressing and paged memory).


They did: the i860. It flopped badly


We are talking about the 80286, which was released in 1982. The 860 isn't relevant because it was released 7 years later.

Intel should have taken the 8086 design and added protected-mode paging and 32-bit flat addressing. This could have been released in 1982.
It would have been possible to target both the IBM PC and the emerging Unix workstation market.
Instead, the 286 was developed with protected-mode segmentation, which is clumsy and caused major headaches for OS developers in the early 80s.
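
To illustrate what the complaint is about, here's a C sketch of how the two schemes form a linear address (heavily simplified - real 286 descriptors also carry access rights and type bits, which are ignored here):

```c
#include <stdint.h>

/* 8086 real mode: linear = segment * 16 + offset, wrapped at 1 MiB
 * (the 8086 had only 20 address lines).                             */
uint32_t real_mode_linear(uint16_t seg, uint16_t off)
{
    return (((uint32_t)seg << 4) + off) & 0xFFFFF;
}

/* 286 protected mode, heavily simplified: the segment register holds a
 * selector into a descriptor table, and the descriptor supplies a 24-bit
 * base and a 16-bit limit. Every access is checked against the limit.   */
typedef struct {
    uint32_t base;   /* 24-bit segment base address  */
    uint16_t limit;  /* highest valid offset         */
} descriptor_t;

int prot_mode_linear(const descriptor_t *d, uint16_t off, uint32_t *linear)
{
    if (off > d->limit)
        return -1;            /* hardware would raise a protection fault */
    *linear = d->base + off;  /* note: offsets are still only 16 bits    */
    return 0;
}
```

Even in protected mode, offsets are still 16 bits, so any object larger than 64 KiB means juggling segment registers - exactly the headache flat 32-bit addressing avoids. The real-mode arithmetic also produces the famous wrap-around at 1 MiB: 0xFFFF:0x0010 comes out as linear address 0 on an 8086, and the 286's extra address lines breaking that wrap is what made the A20 gate hack necessary.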


 Post subject: Re: I want to know stories about 80286
PostPosted: Fri May 05, 2017 3:22 am 

Joined: Tue May 13, 2014 3:02 am
Posts: 280
Location: Private, UK
It's quite obvious that when Intel developed the 80286 in the years up to 1982, they weren't thinking of the IBM PC. The PC had been on the market for less than a year at that point and was just one of several computer designs based around the 8086 and 8088. While it was popular, Intel had no reason to expect that it would go on to the heights that it did.

(Note that the 80186 was not designed for use in general-purpose computers, but as a version of the 8086 that was easier to use in embedded systems. Very few PC compatible computers used it and those that did were mostly portables.)

The lack of a built-in way to exit protected mode is clear evidence that the idea of running older software alongside 286-specific applications wasn't even considered - the only way back to real mode was a full CPU reset, which is why the IBM AT wired up the keyboard controller so software could pulse the reset line. The fact that it supported real mode at all probably had more to do with being able to market it to computer vendors as a faster 8086 and letting them transition to 286-specific features later.

The 80386, on the other hand, with its V86 mode and easy switching into and out of protected mode, was clearly designed in response to demand for a new CPU for IBM-compatible PCs that could continue to run existing applications, especially in multi-tasking environments. Unfortunately, it took until around 1990 for it to gain enough market share for commodity software vendors to seriously consider targeting it.

Sure, if Intel had a crystal ball, they might have developed something more "PC friendly" in 1982 and the 80286 might never have happened. However, what they actually did wasn't unreasonable given what they knew at the time and there's no use getting angry about it 30 years later.



 Post subject: Re: I want to know stories about 80286
PostPosted: Fri May 05, 2017 9:50 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Mind you, also, that they were working on a workstation-class CPU: the 432. True, they had been working on it for seven years by 1982, but that's not a long time in terms of CPU design for a novel architecture - their projections were to have the first version released in 1981, which they (just barely) did. There were serious problems with the design, but they thought those were production problems to be ironed out in the next production run (which they continued to think as late as 1985).

This was not an uncommon thing, and other companies - most notably Zilog with the Z8000 and NatSemi with the NS32000 - had similar problems which they tried to brute-force their way through, too (with the same results). And why not? That approach had worked for several other projects at the time, including the Motorola 68000 (which also had a series of production run failures early on); IC fabrication is still often a process that requires a lot of shake-down in new designs, and this was even more true then than it is today. It only looks like a bad idea in hindsight - at the time it seemed like a reasonable path to take.

In 1980, when they started working on the 80286, the 432 project was going swimmingly. The problem they ran into at that point that led them to extend the 8086 was in another project (one which was later canceled, but which was connected to the 8051, I think), and was still in the same embedded space they were targeting with that family all along.

Just to drive a point home about Intel's goals here: development of the 80186 started at around the same time as that of the 80286, and the two models were released on the same day, IIUC. The 80286 was not a successor to the 80186; they were both branches off a single tree, going in different directions, and neither of them was aimed at either home computers or workstations.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.





Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group