Let's look at the assertions in the first post with a bit of a critical eye, shall we? I don't want to be a wet blanket, or attack your ideas or you personally, but, well, I see a lot of problems with what you are saying, and I think you need to reconsider how you are explaining your ideas, even if the ideas themselves prove sound (which has yet to happen IMAO). I will try to keep this as constructive as possible, but, well, that may prove difficult.
Dawn operating system is a revolutionary technology, the first high-level operating system built for URISC computers.
This isn't an assertion, it is marketing. We can safely ignore it until and unless something is stated to justify the claim.
Other operating systems are designed to run on extremely complex hardware environments, but Dawn operating system is designed for the SUBLEQ architecture, which have no traditional instruction set. This allows a Dawn compatible processor and computer to be designed from a millionth costs than a traditional CPU.
This demonstrates a genuine lack of knowledge about hardware design, and about the costs and effort associated with it. The complexity of the instruction set has almost no bearing at all on the cost and time needed to implement the ISA; the real costs are in making it run efficiently. Even a highly CISC ISA such as the VAX can be implemented in a standard FPGA costing $25 USD in about a week of work, but such an implementation will have terrible performance.
The fact that undergraduate Computer Architecture courses (such as the one I took at CSU-East Bay) can teach novices to construct a logic-emulated MIPS or DLX processor equivalent to any implementation of that ISA built in the mid-1980s is proof that implementing an ISA, regardless of complexity, is not a significant issue in CPU design at the professional level. Actually, the ISA itself was not really a factor in the course I took at all, which was mostly focused on the logical components needed by any CPU implementation (adders, barrel shifters, etc. - even an OISC needs at least an adder, a comparator, an instruction counter, and a memory interface, right?). Yes, we needed some hardware for decoding the instructions, which wouldn't be needed in SUBLEQ, but that was almost an afterthought and done mainly in a PLA.
The hard work in developing a new generation of CPU is mostly in the development and debugging of the die process - improving the transistor density is not a small task, and not an automatic one despite the impression Moore's Law might give people. The x86 ISA? 90% of that was worked out in 1978; despite the heroic (and fundamentally futile, something even the companies involved are aware of) efforts Intel and AMD have made to extend its life, the basic ISA hasn't really changed all that much compared to things like the memory addressing, register file size, register width, caching, instruction pipelining, branch prediction (especially branch prediction!) and MMU - none of which are part of the ISA, even if they led to some of the changes in it. The ARM and MIPS designs have undergone even fewer changes; in effect, the ISA itself is a done deal.
As an aside, if memory serves, about half the die of the Kaby Lake design is taken up by cache, and about 10% each by the pipeline, instruction re-ordering, instruction simplification (the modern equivalent of microcode), and branch prediction logic. Actually implementing the ISA? Probably less than 5% of the die, even on CPUs with 6 or more cores.
The statement also shows an all-too-common misunderstanding of load/store architecture. The reason 'RISC' is more performant (in principle, though many of the advantages disappear or become less distinct once things like caching, register renaming, multi-path branch prediction, and so on are used) isn't because the ISA is small - several so-called 'Reduced Instruction Set' designs actually have pretty big ISAs - but because all of the instructions can be implemented without microcode; the use of load/store discipline reduces the frequency of memory accesses for data; regularizing the instruction set makes it easier for compilers to target it and optimize the generated code; and eliminating rarely-used instructions means that the whole can fit onto a smaller die, leading to less propagation delay. The term RISC is really a very misleading and unfortunate one, and the idea that a 'URISC' would somehow be inherently even better is a gross misunderstanding of the reasoning behind load/store discipline and the elimination of low-usage-frequency instructions. OISC, by its very nature, is not actually a RISC design at all, because it isn't load/store - the single instruction is actually more CISCy than any of the 56 instructions in the MIPS R2000 ISA.
In practice, an efficient OISC implementation would need an incredibly hairy multi-branch-predicting code/data pipeline that would make Kaby Lake's instruction decoding look like the RCA 1802's. The die layout would be much larger and more complex than that of even the current Intel designs.
And, oh yeah, the fact that the code and data are deeply intertwingled means that conventional approaches to memory protection can't be applied. I have no problem with this - I am a Lisp fanatic, after all, so mixing code and data seems natural to me, and as a fan of the Synthesis kernel design I find the usual approaches to memory protection more an inconvenience than a benefit - but you do need to be aware of the trade-off.
The goal of the Dawn operating system is to bring back the hardware and software development to the people.
What is this even supposed to mean? Especially since, well, 'the people' never had them in the first place. Seriously, I am all for new operating systems, for empowering users, and for making programming more accessible, but... come on, even if the Alto/Star had been a smashing success and we were all using Pilot OS today, probably less than 2% of users would ever learn Smalltalk, and even fewer would use it to any real extent - and that's on a system that bends over backwards to make it easy for users to write code! My own intended UX design (not OS design - design issues in kernels, ABIs, driver models, system libraries, and user interfaces are completely orthogonal to each other, even when they are all being built to work together as a whole, so stop confusing them!) follows similar principles, but I am well aware that most people will simply use the tools that are on hand, because programming isn't their goal - using programs to further their activities is.
Currently, the market is dominated by x86/arm, where a CPU that is capable of blicking the cursor on the screen needs 20 billion usd investment to create, and 30 years of work by 30 corporations and 1 million developers worldwide.
Are you trying to say that the effort made in developing them was wasted somehow? I beg to differ; even the work on improving x86 so that people can put off losing most of the code base for one more design cycle has garnered a wealth of knowledge on CPU implementation (for Intel and AMD, anyway, though I am always pleasantly surprised at how much of it isn't kept as trade secrets).
In any case, the first assertion of this paragraph is, well, not wrong, but misleading, because it fails to consider why those two designs are dominant. Here's a hint: the ISA has nothing to do with it. As I have said many times before, everyone - Intel and Microsoft included - has been praying for an opportunity to ditch the x86 ISA almost since it was released (the 8086 was intended as a stopgap until the ill-fated iAPX 432 was working - which never really happened - and Intel has tried on at least two more occasions to cook up a replacement for it, only to see the replacement get ignored by the customers). The reason they haven't has nothing to do with CPUs and everything to do with the installed codebase, and specifically with the way far too many software vendors have written their code to depend on x86 in ways even Intel disapproves of (e.g., using undocumented instructions).
ARM, conversely, is widely used because a) the owners of the IP have given a lot of people inexpensive licensing for the architecture, and b) it lends itself to both low power consumption and low implementation cost. The same is true of several other CPU designs, most notably MIPS, but ARM came out on top mainly due to being readily available when the early PDAs were being built (the owners of the MIPS IP were reluctant to license it at the time, and neither SPARC nor Alpha were as suited to the task despite being load/store designs).
Traditional IT corporations created a technologic singularity, which was not able to show up any innovation in the last 20 years.
Well... no. Software developers did that. A stable software base requires a stable hardware platform (or at least a stable way of transitioning between them, something that is less dependent on the hardware and more on the way the software itself is written). The hardware manufacturers themselves are the ones trapped by it.
We arrived to a point, where painting a colored texture on the screen requires 1600 line of initialization and weeks of debugging - and if you are not want to do that, you can only access child toys, design apps, half gbyte sized libraries, and they are still crashing at initialization in most cases.
That has absolutely nothing to do with the hardware. This is a fault of the OSes and the languages used.
The internet, global forum of the free speech - actually controlled by the ISP-s and governments.
You do know that free speech was never part of the design goals of the Internet, right? Well, not that it really was designed at all - the various protocols were, but the 'Internet' as a whole, no. Anyway, my point is that 'free speech' on the Internet was due entirely to the way it blindsided the people who came up with it, and to its neglect by those who might have had an interest in monitoring it. Back when 'the Internet' was just a way for academic researchers to send messages between themselves by piggybacking on what was in reality the medium of the US military's C3 infrastructure, no one thought it would matter what people were saying, so none of those involved put in any checks and balances for either monitoring it or, conversely, for ensuring privacy. This attitude carried over after it was released for public use, and persisted far longer than one would have expected, because the people who would want to clamp down on it were ignorant of what it could do, and too busy flinging poo (and impeachments) at each other to notice what was going on.
In other words, had the creators of ARPAnet, NSFnet, MILNET, and so forth known what it would lead to, the projects would have been buried and the records of the proposals destroyed. Free speech isn't a feature, it is a bug, though one which we fortunately can (to some extent) still exploit.
A wifi device is driven by 5 million of source code lines, and nobody actually fully understants, how they work, a TCP stack is 500.000 lines of code. There are no experts at this area any more - even professionals are just typing random things in consoles to get it working if something is broken, hoping that it will randomly cure itself, because they cant debug 30 and 40 million code lines that is responsible for sending a bit on the cable.
All of this is true - but the CPU designs have nothing at all to do with it, nor do the operating system designs. You seem to think that because the OISC itself is simple, and the OS running on it is simple, the software will be simple - which is a non sequitur, because the complexity of wifi has nothing at all to do with either of those things. If anything, OISC would worsen the problem, because of the heroic amount of effort needed to make it do anything!
Dawn operating system is different. Emulating the cpu itself is 6 source code line in C, understanding the hardware set is very simple.
The simplicity of the emulated CPU says nothing at all about the complexity of the hardware underlying the emulation layer. That wifi router would still need the 5 MLOC, plus the emulator: about 150 KLOC are needed to talk to the hardware, and another 4.85 MLOC to implement the remote web interface. Both of these hold with or without the emulation layer - the former would have to be part of the implementation of the emulator itself (or else it wouldn't be able to access the hardware), while the latter would be true regardless of the hardware (or hardware emulation) it is running on.
Dawn operating system itself does not supports any technology, that is enemy of the freedom, while it still offers a nice graphics user-friendly graphics interface with the most common elements.
I really want to snark about the placement of the first comma in that first sentence, as it seems hilariously appropriate. However, I will point out that you seem to be mixing up your assertions - and design goals - here.
It is easy to create hardware and software for Dawn - the hardware design is well documented, and simple.
Create hardware... to support... an operating system?
Dawn have a built-in C compiler that also offers connectivity to the Dawn platform to create textured sprites, texts, play music, get data from the joystics, from webcamera, or manage the files of the computer.
So, it is about as capable as, say, GEOS circa 1985, then? OK, that was uncalled for, sorry.
However, I do need to say that, first off, that has nothing to do with the fact that it is built on an OISC interpreter, as that just pushes the complexity down into the hardware emulation; the emulator becomes just another sort of abstraction layer. Second, you aren't saying anything about how it is supposed to be easier to write code for (presumably in C, given the prominence you gave the compiler - which means that a lot of the things that make GUI design simple, such as a widget class hierarchy and an event system, won't be readily available...).