OSDev.org

The Place to Start for Operating System Developers




PostPosted: Fri Mar 02, 2007 8:52 pm
Joined: Wed Dec 20, 2006 7:56 pm
Posts: 237
Quote:
What do you need to run a web server? AN OPERATING SYSTEM!
Well then, I'll just make a web server that runs on the webOS that's hosted on the server on the webOS ](*,)

But I agree fully - all this web junk is junk: sites all in Flash, and MySpace...!


PostPosted: Fri Mar 02, 2007 10:54 pm
Joined: Sun Jan 14, 2007 9:15 pm
Posts: 2566
Location: Sydney, Australia (I come from a land down under!)
Let's face it, who cares if you can code a really great website? I mean, all it takes is a little bit of design sense and a small understanding of the server-client connection.

I think what takes real skill is OSdev, or database system design, or something else that actually takes time and effort. I can put together a fully functional, AJAX-enabled website with all the trimmings in a couple of hours. But I'm still only just getting my OS started, and that's 3 months of work...

_________________
Pedigree | GitHub | Twitter | LinkedIn


PostPosted: Sat Mar 03, 2007 10:37 am
Joined: Fri Jan 27, 2006 12:00 am
Posts: 1444
I think (other than Alboin and Candy) some of you may have missed my point.

1. I did not say you do not need an OS; I said OSes will be valueless (in money terms).
You say you need an OS to run your server - yes, but there are many free or cheap servers, even servers on a single chip: http://d116.com/ace/

2. Instead of sticking your heads in the sand, you could use this with your OSes.
An example: to load a file you need the right driver for the file system, but what if you loaded a file from here: http://www.4shared.com/
(the free version ;)) - what file system do they use?
Well, it does not matter to you, does it? NOTE: when most of the money is made from services, vendors (including M$) will find it in their own favour to be compatible with as many devices as possible. Also remember, we are not talking just PCs here: your TV will stream video from a network, your radio will stream radio from the net, etc.
Examples:
http://www.babilim.co.uk/blog/2006/09/a ... in-uk.html
http://www.acoustic-energy.co.uk/Produc ... o/WiFi.asp
Here are some I use with my OS:
http://www.maplin.co.uk/module.aspx?det ... #more_info

3. One thing we all agree on is that Flash is s**t, and to date most people who get into web OSes are people who would like to code a normal OS but know no low-level stuff.
This is where we come in; I believe that we are in the best position to make this work.

E.g.: most of us do not like Flash etc., so we should be coming up with something better. Remember, all the most successful standards are based on very simple code, e.g. TCP/IP.

If you're still not convinced, read this - he put it in a more elegant way:
http://blog.hangerhead.com/2006/12/defi ... s-api.html

PS: Not a day goes by without more proof ;) http://developers.slashdot.org/develope ... 0233.shtml


PostPosted: Sat Mar 03, 2007 9:36 pm
Joined: Tue Nov 07, 2006 7:37 am
Posts: 514
Location: York, England
@Dex:

I'm quite disturbed that as I read your last post all I could think of was Brynet...

I don't disagree that this is where the Internet is going, and that eventually it will cause a decrease in the Web's use by developers and other people who use Internet bandwidth for useful purposes. I simply wish you would not mix up the idea of an operating system and a web page that imitates a desktop.


PostPosted: Sat Mar 03, 2007 10:29 pm
Joined: Tue Oct 17, 2006 9:29 pm
Posts: 2426
Location: Canada
Tyler wrote:
I'm quite disturbed that as I read your last post all I could think of was Brynet...


What's that supposed to mean? :?

_________________
Twitter: @canadianbryan. Award by smcerm, I stole it. Original was larger.


PostPosted: Sun Mar 04, 2007 4:18 am
Joined: Wed Feb 07, 2007 1:45 pm
Posts: 1401
Location: Eugene, OR, US
Brendan wrote:
32-bit 80x86 Apple computers use EFI already.


Well, then either it's going to be a standard, or Apple has jumped the gun again and screwed itself again. I haven't looked through the EFI spec completely, but it uses too much space for partitions, etc., and I'm really scared that it will do a really stupid job of not passing me enough hardware-specific data about the machine. And it may not support dual-booting (multi-booting) and alternative OSes properly or prettily.

Brendan wrote:
The main problem isn't competition between manufacturers, but the addition of new features.


Exactly. So you always have to pick a minimum level of chip to support, and then deny support of new features after that point. You picked the 1st gen of 64-bit chips as your minimum level -- I picked the P6 generation of the older series. But neither one of us is going to be supporting the NEXT set of new features after this. So it's the same deal, except you support one design level higher than me. If you keep trying to support the newest features, your OS will never be finalized.

Brendan wrote:
In the last 7 years I've progressed a lot - I learnt how crappy my OS was ....


I think it would be extremely educational if you were to create a posting on "advanced kernel design" that specifies which issues you found in your 1st version were crappy and needed a redo. I saw your post on the other thread about your IPC message buffers being too big, for one thing.

Brendan wrote:
I doubt anyone buying a new 80x86 desktop/server will get a 32-bit CPU.


LOL! E-Machines is a huge moneymaker, selling 32-bit machines for $400 each. I would bet money that 75 to 80% of machines sold in '07 will be cheap 32-bit, until 64-bit and multi-core machines come down below $600.

Brendan wrote:
How many people actually use 10 year old computers now?


My second machine is a 6-year-old K2/266 that I use for getting stock quotes, BitTorrent, MP3 downloads, and CD burning. I don't see any reason that I'll stop using it in the next 4 years - especially if I switch my main PC to my own OS, since I'll probably want to use Windoze sometimes without shutting my main PC down, I suppose.

Brendan wrote:
When an "old" file that is being actively modified is allocated new clusters from the 8K cluster pool, is the entire file copied into new unfragmented space from the 8k cluster pool, or is the file fragmented?


"Fragmentation" is not a black and white concept, usually. It is (of course) possible to have a perfectly contiguous file -- but that is not my intention. I think it is easiest to measure fragmentation as the number of breaks in contiguity divided by the total number of sectors. My original example was of a file that had 2048 sectors, and 11 contiguitiy breaks, for a .54% fragmentation ratio.
You suggested appending 2K -- which would be allocated as a single new 8K cluster, as I said (without a file rewrite). So the file would now be 2052 sectors, with 12 contiguity breaks -- a .58% fragmentation ratio. Then the optimizer would rewrite the file as a single 1M cluster, plus a 2K cluster, with only 1 contiguity break -- a .05% fragmentation ratio.
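For concreteness, here is that arithmetic as a tiny C program (an illustration only - the numbers are the ones from the example above):

Code:
#include <stdio.h>

/* fragmentation ratio = contiguity breaks / total sectors */
static double frag_ratio(unsigned breaks, unsigned sectors)
{
    return 100.0 * breaks / sectors;
}

int main(void)
{
    printf("original file:   %.2f%%\n", frag_ratio(11, 2048)); /* ~0.54% */
    printf("after append:    %.2f%%\n", frag_ratio(12, 2052)); /* ~0.58% */
    printf("after optimizer: %.2f%%\n", frag_ratio(1, 2052));  /* ~0.05% */
    return 0;
}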

The point of my filesystem is not that files are perfectly contiguous -- it is that it is completely impossible for them to get so messily fragmented that the disk needs defragging.

Brendan wrote:
... imagine you've got a sound card driver that occasionally trashes the disk driver's shared memory, and your file systems and/or swap space are occasionally being corrupted.


Yes, we will see how much I can easily prevent. The priority is protecting user info on the hard disk, of course. The filesystem will be walled off. But a driver trampling on other drivers is worthy of a big fat error message, to let the user know they have installed an incompatible piece of equipment -- which must be immediately removed. I don't see a need to be anal about protecting the system from misbehaving drivers for anything more than very short periods of time.

RE: web based computers

Yes, most computery things that most people do will be possible online, using handhelds with a minimal amount of solid state storage, and minimal hard-coded OSes. But UNIX earned AT&T billions, MacOS earned Apple billions, Windoze earned M$ billions, and Linux is in the early stages of earning lots for RedHat, etc. Clearly, the economic value of OSes will eventually fade away (most likely when we have mini-AI systems that respond to voice commands, and don't need a well defined OS interface) -- that time is not nearly here YET.


PostPosted: Sun Mar 04, 2007 10:22 am
Joined: Fri Jan 27, 2006 12:00 am
Posts: 1444
Tyler wrote:
@Dex:

I'm quite disturbed that as I read your last post all I could think of was Brynet...

I don't disagree that this is where the Internet is going, and that eventually it will cause a decrease in the Web's use by developers and other people who use Internet bandwidth for useful purposes. I simply wish you would not mix up the idea of an operating system and a web page that imitates a desktop.

@Tyler, please read this, as my point is not getting through:

I agree that so-called web OSes are not real OSes, and that they are made by coders who do not, in the main, understand low-level stuff.
But that is where we OS devs come in; we need to make a true web OS, from the ground up ;).
For example, we need to make a bootable web browser that uses servers (even for local stuff on the same PC). You will probably say "but I can easily just make a Linux distro" - sure you can, but if you code your own, you can tailor it to a web OS more.
This web OS could be good for OS devs. E.g.: right now you could make a great OS, but if there are no user programs, nobody will use it :( - but if you code a good browser, you will have a long list of user programs :).

You will even have Adobe Photoshop:
http://uk.news.yahoo.com/01032007/152/a ... nline.html


PostPosted: Sun Mar 04, 2007 10:36 am
Joined: Tue Nov 07, 2006 7:37 am
Posts: 514
Location: York, England
This reminds me of my very first operating system design. About 6 years ago I was given a huge lecture by this guy who was very big on SETI@home, about how future operating systems would have all applications distributed over the net. Of course I realised even then that computing power would always increase faster than Internet bandwidth, making the speed loss impractical when people can afford perfectly good local hard drives.

Perhaps though, Dex, an extension of the idea of a web browser would be appropriate? My operating system supports an extended HTTP protocol I hope to implement in at least my own servers. It is a session-based HTTP protocol, with extensions that take the session beyond the idea of a one-to-one website and user. I do not quite know how to support the idea of a general session with the Internet yet, but there ya go. It also supports bytecode HTML.


Last edited by Tyler on Sun Mar 04, 2007 11:54 am, edited 1 time in total.

PostPosted: Sun Mar 04, 2007 11:13 am
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

bewing wrote:
Brendan wrote:
32-bit 80x86 Apple computers use EFI already.


Well, then either it's going to be a standard, or Apple has jumped the gun again and screwed itself again. I haven't looked through the EFI spec completely, but it uses too much space for partitions, etc., and I'm really scared that it will do a really stupid job of not passing me enough hardware-specific data about the machine. And it may not support dual-booting (multi-booting) and alternative OSes properly or prettily.


EFI is a standard, but it's a complex one - it's almost like there's a "mini-OS" being used to boot the real OS, with a full (optional) command line interface, its own device drivers, its own bootable applications, etc. Part of the idea is that people can write OSs in high level languages and recompile them for different platforms with minimal changes to the boot code, etc. (where "different platforms" means some 80x86 machines and all Itanium machines). Support for dual-booting isn't a problem though...

bewing wrote:
Brendan wrote:
The main problem isn't competition between manufacturers, but the addition of new features.


Exactly. So you always have to pick a minimum level of chip to support, and then deny support of new features after that point.


If you say the minimum requirement is a P6, you can't assume a P6 (and all the features "P6" includes) is present - you have to test and make sure, because otherwise someone will try to run it on a Pentium and complain that it crashed. This isn't necessarily as easy as it sounds - e.g. you can get the CPU family from CPUID and make sure it's 6 or higher, but the CPU could still be anything from a dodgy Cyrix chip (which lacks some of the features you'd expect in an Intel 80686) to the latest Core Duo or Opteron.
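A minimal sketch of that kind of per-feature check (this uses GCC's <cpuid.h> helper and the documented CPUID leaf-1 EDX flag bits; it's an illustration, not code from any OS discussed here):

Code:
/* Rough sketch: test individual feature flags instead of trusting the
   family number. Bit positions are the CPUID leaf 1 EDX feature flags. */
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper - x86 targets only */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;    /* CPUID leaf 1 not supported at all */

    printf("family: %u (6+ alone proves nothing)\n", (eax >> 8) & 0xF);
    printf("PSE:    %s\n", (edx & (1u << 3))  ? "yes" : "no");
    printf("PAE:    %s\n", (edx & (1u << 6))  ? "yes" : "no");
    printf("MMX:    %s\n", (edx & (1u << 23)) ? "yes" : "no");
    printf("SSE:    %s\n", (edx & (1u << 25)) ? "yes" : "no");
    return 0;
}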

From a "commercial product" perspective it then becomes difficult to get people to understand exactly what is required. If you claim the OS works on any Intel Pentium Pro or later CPU, then does that include a Cyrix MediaGX or AMD K6? If you explicitly state each feature that the OS requires (PAE, PSE, MMX/SSE1, etc) then people will just get confused (ask the average computer user what sort of CPU they've got and they'll probably say something like "I don't know, but it says 'Dell' on the front" - this is the main reason for Microsoft's "Vista Ready" marketting you'll see if you're looking to buy a new computer).

I guess this mostly depends on the target market for your OS and the OS's design. From my perspective, anyone trying to get into the desktop/server market is going to have a hard time - it'd be much easier to get your OS used in (for e.g.) embedded systems, where 80486 is still being used. Because of this I set 80486 as my minimum requirement and hope that my code will be usable for both embedded systems and desktop/server, so that it could gain users in embedded and then grow into desktop/server from there (rather than starting with direct competition with Microsoft and Linux - a strategy I'd call "suicide").

So, with 80486 as my minimum requirement can I realistically expect a desktop or server user to run an OS that doesn't support SMP, PAE, SSE, write-combining and other newer features? Quite frankly, no (after many years of work I wouldn't be able to recommend my own OS without being dishonest).

With this in mind you end up setting the minimum CPU required *and* the maximum CPU supported properly. For example, an OS could be designed for P6 to Pentium 4, where it'll take advantage of new features introduced in the Pentium 4 (and will actually run on a brand new Core Duo, but won't take advantage of new features introduced with the Core Duo).

In general, limiting the range of properly supported CPUs is similar to limiting the quality of the OS. It makes the OS easier to design and implement, but makes it harder to find anyone willing to actually use the OS (i.e. it limits the target market); and to be perfectly honest, except for educational purposes, there's no point wasting your time writing an OS if you don't at least hope that someone might actually use it one day. To me this means that once you go beyond "educational purposes" you really can't afford to limit the OS if it's not necessary, and therefore can't afford to deny support of new features if it's not necessary.

Now if you understand my rambling (even if you don't necessarily agree), then it's time for what I'd consider the single most important thing I've learnt about developing OSs in the last ten years or more: the code is worthless, the design is everything.

What I mean by this is that it's possible to create an OS design that is capable of being extended, and that failing to allow for extending the OS is the worst possible thing you can do.

Basically the OS must be designed to handle everything conceivable (including new CPU features, new hardware, new ways of doing things, different ways of booting, etc) while the OS's implementation will only support a limited subset of these things at any given point in time. The opposite would be an OS that is designed for a limited subset of things that will probably need to be (at least partially) redesigned and rewritten every time you need to support something new.

bewing wrote:
You picked the 1st gen of 64-bit chips as your minimum level -- I picked the P6 generation of the older series. But neither one of us is going to be supporting the NEXT set of new features after this. So it's the same deal, except you support one design level higher than me. If you keep trying to support the newest features, your OS will never be finalized.


No.

My OS is (hopefully) designed to handle everything conceivable, and I'm supporting everything from the 80486 to stuff that hasn't been invented yet. There are exceptions to this - inserting and removing hot-plug CPUs, and the removal (but not insertion) of hot-plug RAM. My justification for this is that the OS is designed for redundancy, so turning a computer off to change CPUs or remove RAM shouldn't be a problem (even for "mission critical" servers).

If I keep trying to support the newest features then my OS may never be finalized, but if I fail to try there's no point finalizing it (it'll be worthless/obsolete anyway), unless I start again from scratch or do major redesign & changes (which takes longer than trying to support the newest features).

bewing wrote:
Brendan wrote:
In the last 7 years I've progressed a lot - I learnt how crappy my OS was ....


I think it would be extremely educational if you were to create a posting on "advanced kernel design" that specifies which issues you found in your 1st version were crappy and needed a redo. I saw your post on the other thread about your IPC message buffers being too big, for one thing.


My biggest mistake would be taking too long to realise that the OS must be designed to handle everything conceivable.

My initial designs were mostly "self-education", but the last in the series could be described as "single CPU, single user, designed for 80386 only". I decided I needed to support PAE and the code wasn't extendable enough and there were other things I didn't quite like, so I rewrote.

The next was "single CPU, single user, designed for 80386 to Pentium II". I decided I needed to support multi-user and the code wasn't extendable enough and there were other things I didn't quite like, so I rewrote.

The next was "single CPU, multi-user, designed for 80386 to Pentium III". I decided I needed to support SMP and the code wasn't extendable enough and there were other things I didn't quite like, so I rewrote.

The next was "multi-CPU, multi-user, designed for 80386 to (32-bit) Pentium 4". I decided I needed to change the scheduler and the code wasn't extendable enough and there were other things I didn't quite like, so I rewrote.

The next was "modular, multi-CPU, multi-user, designed for 80486 to (64-bit) Pentium 4". It was mostly an experiment that wasn't anywhere near as complete as the others, but I decided I needed to support headless systems and diskless systems and the (small) amount of code I had wasn't extendable enough and there were other things I didn't quite like, so I rewrote.

Of course there were a lot of smaller things I didn't like (but they all could've been fixed if the OS was designed to be extendable).

One of them was "variable frequency scheduling". The idea is that all tasks get the same length time slices, but higher priority tasks get more of them - for example, a task might get one 2 ms time slice per scheduling cycle, and another task with twice as much priority might get two 2 ms time slices. This worked very well for my earlier OSs and gave very smooth and fair scheduling, but the problem is deciding who gets the CPU next. The best algorithm I could come up with (worst case) involved scanning a table of "ready to run" tasks twice - an "O(2*N)" algorithm, which sucked when there's a large number of tasks that are ready to run. I still really like "variable frequency scheduling" though (but I won't be using it again either).
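A toy reconstruction of the idea (my reading of the description above, not Brendan's actual code) - same-length slices, priority = slices per cycle, with the worst-case double scan of the ready table:

Code:
#include <stdio.h>

#define NTASKS 3
static int priority[NTASKS] = { 1, 2, 4 };  /* slices per cycle */
static int credit[NTASKS];                  /* slices left this cycle */
static int current = -1;

static int pick_next(void)
{
    /* Worst case: one full scan that finds everyone out of credit,
       a refill, then a second scan - the "O(2*N)" behaviour. */
    for (int pass = 0; pass < 2; pass++) {
        for (int i = 0; i < NTASKS; i++) {
            int t = (current + 1 + i) % NTASKS;  /* round-robin order */
            if (credit[t] > 0) {
                credit[t]--;
                return current = t;
            }
        }
        for (int t = 0; t < NTASKS; t++)
            credit[t] = priority[t];             /* start a new cycle */
    }
    return -1;  /* unreachable while any priority is nonzero */
}

int main(void)
{
    for (int slice = 0; slice < 14; slice++)     /* 14 = two full cycles */
        printf("%d ", pick_next());
    printf("\n");  /* task 2 runs 4x as often as task 0, 2x as task 1 */
    return 0;
}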

Another was letting user-level code access (read-only) data structures in kernel space. It sounds nice and easy - e.g. an application calls the kernel API and the kernel returns a pointer to FOO, or the OS is designed so that an application can directly read FOO without calling the kernel API (saves some cycles). It's a bad idea though - as soon as you change the address of FOO, or try to get 32-bit applications to run on a 64-bit kernel, it breaks.

Assuming that the OS can always have direct access to video and keyboard is another bad idea. Previous versions of my OS had full graphical menu driven boot code where (during installation) the OS expected the user to make decisions before the kernel was started. If the OS is running on a machine with a touch screen and no keyboard, or it's a headless machine (no video and keyboard), then you end up with an OS that can't boot.

Supporting SMP is another one. Now and then I hear people say that it's easy to add SMP to an existing (single-CPU) OS - don't believe them (they haven't tried it). What you end up with is huge re-entrancy problems (deadlocks) and/or bad performance due to lock contention, and you'll need to redesign just about everything to fix it.
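A sketch of why the "easy" retrofit hurts, assuming the naive approach of wrapping the whole kernel in one giant lock (C11 atomics; illustration only):

Code:
#include <stdio.h>
#include <stdatomic.h>

static atomic_flag giant_lock = ATOMIC_FLAG_INIT;

static void lock(void)
{
    while (atomic_flag_test_and_set_explicit(&giant_lock, memory_order_acquire))
        ;  /* every waiting CPU spins (and ping-pongs the cache line) here */
}

static void unlock(void)
{
    atomic_flag_clear_explicit(&giant_lock, memory_order_release);
}

static void syscall_entry(void)
{
    lock();
    /* ...the entire old single-CPU kernel runs here, unchanged... */
    unlock();
}

int main(void)
{
    syscall_entry();  /* correct, but on 8 CPUs: ~single-CPU throughput.
                         Splitting the lock per subsystem is where the
                         re-entrancy/deadlock fun starts. */
    puts("ok");
    return 0;
}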

Then there's little annoyances - some memory management code that could be optimized a bit, a scheduler that doesn't handle something just right, some messy and badly documented boot code, etc. By themselves they're all fixable, but if you don't fix them as soon as possible they start to add up. Before long "Yay, my nice clean OS" starts to become "Hmm, that OS thing again" and you start looking closer and start finding more little annoyances. That's when you realise there's a larger problem somewhere, or when Intel and AMD introduce some new feature (like virtualization extensions).

Anyway, my newest design is similar to the last, just more modular - e.g. at least 7 different/separate modules before it leaves real mode (boot loader, boot manager, user interface, CPU detection, physical address space manager, domain manager, kernel setup code), a micro-kernel made from at least 7 different/separate modules (CPU manager, physical memory manager, linear memory manager, scheduler, IPC, I/O manager, critical error handler), with explicit "specifications" for the interfaces between each module and no hidden dependencies that'll be forgotten about a month later (e.g. IPC code that directly changes page table entries, so everything breaks when you change something in the linear memory manager).

I guess you could say I've finally learnt to make the OS extendable. The idea is that anyone will be able to replace any module without looking at any of the code for any other modules, just by using the specifications and documentation.
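As a sketch of what "replaceable module behind a specification" can look like in C (names here are invented for illustration - they're not BCOS's actual interfaces):

Code:
#include <stdio.h>

/* The written "specification", expressed as an interface: */
struct scheduler_ops {
    const char *name;
    void (*init)(void);
    int  (*pick_next)(void);
};

/* One module that honours the contract: */
static void rr_init(void) { }
static int  rr_pick(void) { static int t; return t++ % 4; }
static const struct scheduler_ops round_robin = { "round-robin", rr_init, rr_pick };

/* A replacement can live in another module entirely - its author only
   needs the spec above, never the rest of the kernel's code:
   extern const struct scheduler_ops my_experimental_sched; */

int main(void)
{
    const struct scheduler_ops *sched = &round_robin;  /* picked at "boot" */
    sched->init();
    printf("%s picked task %d\n", sched->name, sched->pick_next());
    return 0;
}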

For example, currently there are 3 different boot loaders that can be used (one for booting from floppy, one for booting from a network via PXE, one for booting from ROM), where none of the other code cares which is used. Then there are 2 separate user interface modules (one for keyboard/video and one for serial/dumb terminal). There's also a "pretend" boot manager that actually decompresses the boot image and starts the real boot manager, so that I can (recursively) have compressed boot images where none of the other code cares if the boot image is/was compressed or not.

If someone wants to write a new user interface module that uses the sound card (speakers and microphone) and speech recognition, then that won't be a problem. If someone wants to boot the OS from their home-made PCI card, that's easy enough too. They will need to write the code for it, but they won't need to learn all the messy details of every other piece of code before they do.

This will continue - I'll have several linear memory managers, several schedulers, several IPC modules, etc. If someone writes a much more efficient IPC module or wants to do research on different scheduling algorithms, then they can just put their module into a standard boot image.

More importantly, if I stuff something up or find out something has performance problems (or discover a better/faster way to do something) I can replace that part without caring about the rest of the OS, if AMD invent a new way of doing paging in long mode it'll be easy for me to support it, and when Intel release "super extended hyper-media enhancement technology" I'll be detecting it and using it before most people figure out what it actually is.

bewing wrote:
Brendan wrote:
I doubt anyone buying a new 80x86 desktop/server will get a 32-bit CPU.


LOL! E-Machines is a huge moneymaker, selling 32-bit machines for $400 each. I would bet money that 75 to 80% of machines sold in '07 will be cheap 32-bit, until 64-bit and multi-core machines come down below $600.


You might want to double check that.

For example, have a look at this $500 computer from eMachines that comes with an Intel Celeron D 356 (which is actually a 65nm "Pentium 4" class CPU with a 512 KB L2 cache, running at 3.33 GHz with a 533 MHz front side bus, that supports 64-bit and "No eXecute")....

bewing wrote:
RE: web based computers


Once upon a time (a long time ago) people used mainframes with dumb terminals. They hated it, but it was economical because computers were expensive and took up several rooms. Since then every 10 years or so some smart twit decides to re-invent something similar (IIRC the last twit was Sun trying to push "thin-clients", although to be fair it's a little different when you own both the servers and the clients).

In general it doesn't work, just like convincing people to use public transport doesn't work - it's not "theirs". As the price of hardware keeps dropping the viability of a "network OS" for the general public will keep decreasing (excluding temporary fads).

The technology to do it has been around for about 30 years - you only need an X client (or VNC client) on each computer, large enough server/s and enough bandwidth. Of course the idea of using a large bloated layer of HTML is new, but I can't see how worse technology will make a bad idea better (excluding temporary fads).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


PostPosted: Sun Mar 04, 2007 11:44 am
Joined: Tue Oct 17, 2006 9:29 pm
Posts: 2426
Location: Canada
Dex wrote:
I agree that so-called web OSes are not real OSes, and that they are made by coders who do not, in the main, understand low-level stuff.
But that is where we OS devs come in; we need to make a true web OS, from the ground up ;).
For example, we need to make a bootable web browser that uses servers (even for local stuff on the same PC). You will probably say "but I can easily just make a Linux distro" - sure you can, but if you code your own, you can tailor it to a web OS more.
This web OS could be good for OS devs. E.g.: right now you could make a great OS, but if there are no user programs, nobody will use it :( - but if you code a good browser, you will have a long list of user programs :).

You will even have Adobe Photoshop:
http://uk.news.yahoo.com/01032007/152/a ... nline.html


This is really a flawed outlook on things. First of all, a light BSD/Linux installation with a web browser is far better than writing a so-called "bootable" browser.

Writing an entire rendering engine for your web browser will be extremely complicated.. (even more so if you plan to do it in Assembly..).

Adobe's Photoshop will obviously!! use Adobe Flash.. and I seriously doubt you'll be able to write your own Flash renderer.. There are also other annoyances, like Java and JavaScript..

The entire "Web OS" theory is lame.. I'd stick to booting BSD or Linux and using a browser with an existing core rendering engine (Gecko or KHTML..)..

I laugh at the people who write those lame "Web OSs" in JavaScript or Flash.. They usually include a little extra ASP or PHP scripting for any random dynamic content.

:roll: grow up..

_________________
Twitter: @canadianbryan. Award by smcerm, I stole it. Original was larger.


PostPosted: Sun Mar 04, 2007 1:36 pm
Joined: Tue Oct 17, 2006 6:06 pm
Posts: 1437
Location: Vancouver, BC, Canada
On Web OSes:
I agree that Web "OSes" are lame... I think they are really a reaction to a growing realization that too many people's files and day-to-day work environments are completely owned and dictated by one company (M$) and can't be moved from place to place easily. I think it's obvious to most of us that there are better technical solutions to this problem.

Personally, I find the idea of web-based productivity apps insane. Am I really going to edit spreadsheets in my browser and save my personal financial information on Google's servers? Gimme a break.


On Brendan's many redesigns:
With all due respect Brendan, I think your many rewrites have something to do with the fact that you always write your kernels in 100% assembler.

Before the foaming-at-the-mouth asm hackers jump down my throat, please allow me to explain myself.

I have nothing against asm, and if people like to implement everything in asm, then more power to them. My point is that when using assembler, you have to be extra careful to keep your code modular. There are no abstractions to help you that you don't invent yourself. Brendan, your modules idea is exactly that -- a way to impose modularity on your chosen implementation language.

"Code is nothing, design is everything" -- here I agree mostly, but my point about asm above is that if you use a higher-level language, it is easier to reuse your old code in new contexts.

About needing to design for every possible eventuality -- I think this is impossible in general, but it certainly is a good idea to think through all the possibilities. Here's an example of something I think is very difficult to anticipate in OS design: Which will become more popular over the next ten years, symmetric or asymmetric chip multi-processing? What I mean is, will we all have 1000-core chips that all have the same instruction set, or will it be more like the Cell where there is one type of core for controlling other, more specialized cores? IMO it's too difficult to design an OS today that can handle who-knows-what tomorrow.

A more practical approach is to design your OS for a certain set of high-level assumptions (e.g. -- NUMA, symmetric multi-core with support for other core types via drivers, up to 1000 cores in a single system, etc.) and then design it to encapsulate the hardware-specific details (32- or 64-bit, virtualization or not, etc.). Otherwise you'll be rewriting forever and go completely mad.

_________________
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!


PostPosted: Sun Mar 04, 2007 4:15 pm
Joined: Sat Oct 23, 2004 11:00 pm
Posts: 1223
Location: Sweden
Why would it be harder to reuse code in asm? Think about it.. :roll:


PostPosted: Sun Mar 04, 2007 4:41 pm
Joined: Wed Oct 18, 2006 3:45 am
Posts: 9301
Location: On the balcony, where I can actually keep 1½m distance
ASM
You can easily reuse lots of asm code if it was properly written in the first place.
My current rewrite reuses lots of code from my previous kernel. (Actually, so far only new features are written from scratch.)
But then again, you must be pretty competent to write your code in such a fashion in such a low-level language. C simplifies that a bit; object-oriented languages make it easily achievable at n00b level. Not to mention functional languages.

Web OS
HTTP isn't everything, and neither is just having Firefox or MyOwnBrowserNamedXXXX; one could consider it fundamentally flawed for the purpose. Which is why there exist more protocols than just HTTP - many people have used RTSP (or the M$ clone called MMS) without realizing the WWW consortium has nothing to do with it.
Not to speak of secure shells and the things you could pull off with those.

Besides, we should be careful it doesn't become an invitation for a new generation of cracker wars - reading George Bush's secret agenda and that sort of thing, because it suddenly happens to be on some server that's too late with the security patches. :evil:

BCOS
It's indeed technically impossible to support every invention. Still, a good design makes the changes needed to support something non-trivial and unexpected far less difficult, and I have the idea that's what Brendan's opting for.

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]


PostPosted: Sun Mar 04, 2007 7:22 pm
Joined: Tue Oct 17, 2006 6:06 pm
Posts: 1437
Location: Vancouver, BC, Canada
bubach wrote:
Why would it be harder to reuse code in asm? Think about it.. :roll:


It's not that it's harder to reuse code -- it's harder to write reusable code in the first place. The reason is simple -- in asm, you are responsible for all your own abstractions. In a high-level language, the language itself provides abstractions for you to use that make code more reusable.

Here's an exaggerated example just to show what I mean about the structure of asm. In asm, you can manage the stack however you like. In C, you're stuck with C's calling convention, but basically you don't have to worry about it unless you're integrating with some asm code. Imagine if you went nuts in your asm code and used different calling conventions everywhere (not that experts would do this, but n00bs might). How reusable is code written with an arbitrary calling convention? This is not meant to be an example of a typical situation -- I'm just pointing out that nearly everything in asm above the individual instructions is established by convention rather than by the rules of the language itself.
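To make that concrete, here's a small sketch assuming 32-bit x86 and GCC (regparm is an i386-only attribute, so compile with -m32): C pins down one convention per prototype, while in asm the convention lives only in comments.

Code:
#include <stdio.h>

/* Default i386 convention: arguments pushed on the stack. */
static int add_stack(int a, int b) { return a + b; }

/* A different convention: first two arguments in EAX/EDX instead. */
static int __attribute__((regparm(2))) add_regs(int a, int b) { return a + b; }

int main(void)
{
    /* Fine, because every call site sees the right prototype and the
       compiler emits the matching call sequence for each: */
    printf("%d %d\n", add_stack(2, 3), add_regs(2, 3));

    /* In asm there is no prototype: call a stack-convention routine as
       if it were the register-convention one and it silently reads
       garbage arguments. The only guard is a comment. */
    return 0;
}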

I guess what I'm trying to say is that writing reusable asm requires far more self-discipline. Any development process requiring that much self-discipline is inherently not scalable.

_________________
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!


PostPosted: Sun Mar 04, 2007 7:37 pm
Joined: Sat Oct 23, 2004 11:00 pm
Posts: 1223
Location: Sweden
Even if I were to reuse code with different calling conventions, it's not really much of a problem in asm - just use the method the comments for that function tell you to.

