OSDev.org

The Place to Start for Operating System Developers

All times are UTC - 6 hours




PostPosted: Sun Sep 30, 2007 1:35 pm
Joined: Thu Oct 21, 2004 11:00 pm
Posts: 248
Avarok wrote:
1) It gets mapped by the memory driver; because only ring 0 has the privilege level to maintain page tables.

Well duh, the memory driver. But how does the memory driver know what to map and where? I thought you weren't going to use a system call for this purpose, just pass a pointer and a length to the target process directly.

Quote:
2) In a 64-bit implementation with 16 exabytes of addressable space, if you can't align array allocations you're doing something wrong.

So basically your OS will arbitrarily enforce page-alignment on the user for its own convenience.

Quote:
3) You map it somewhere else. You're passing a { void* ptr; ulong length; } anyways. I said so already.

Right, but you're passing it program-to-program instead of through a kernel message-passing system call. So how the hell does the kernel get a hold of it in the first place?

Quote:
I was hoping you'd ask the more challenging questions, like how do you handle calls that don't return, or how do you account CPU time across these boundaries.

Always lovely to see fools condescending to the wise.

Unlike you, I actually designed and built a portal system. I ran into numerous problems and decided to abandon the effort in favour of a special case that user-land could build up into a more complete, expressive system.

You, on the other hand, have walked in from nowhere and declared that you will build this thing that passes a memory buffer via a pointer-and-length directly from process to process *without going through a system call* and yet your "memory driver" (as though such a thing existed; you mean memory manager) will map everything through for you.

Now of course, this sounds like a special case of the kind of portal system I implemented, but you have not spoken of any concept remotely related to portals or their implementation -- including important problems such as "where do portals come from in the first place", "who can call this portal", and "how are portals named".

Please, try considering how your castles built in air will actually look in stone, brick and mortar before telling anyone they should have asked harder questions.

The questions you think difficult are in fact easy. One doesn't handle portals that don't return. The risk of not returning is inherent in the most meagre subroutine call in any Turing-complete language, so I don't see why the operating system should protect against that risk on inter-process calls.

Second of all, one accounts for CPU time spent on inter-process portal calls the same way one accounts for it on normal subroutine calls. During a portal call, the current (running) thread (or continuation, in such a system) does not actually change. Instead, it merely moves between address spaces (or whatever other isolation domains you use).
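To make that concrete, here is a minimal C++ sketch of what I mean by the accounting; the Thread and AddressSpace structures and every name in it are invented for illustration, not taken from any real kernel:

Code:
    #include <cstdint>
    #include <cstdio>

    // Hypothetical kernel structures; layout and names are illustrative only.
    struct AddressSpace { std::uint64_t page_table_root; };

    struct Thread {
        std::uint64_t cpu_ticks;   // accounting stays with the thread...
        AddressSpace* space;       // ...while the space it runs in changes
    };

    // A portal call migrates the current thread into the callee's address
    // space. No new thread is created, so the scheduler keeps charging the
    // same accounting record before and after the call.
    void portal_enter(Thread& t, AddressSpace& callee) {
        t.space = &callee;   // a real kernel would reload the page-table base here
    }

    void tick(Thread& t) { ++t.cpu_ticks; }   // charged regardless of current space

    int main() {
        AddressSpace caller{0x1000}, callee{0x2000};
        Thread t{0, &caller};
        tick(t);                    // time spent running the caller's code
        portal_enter(t, callee);
        tick(t);                    // time spent in the callee, same thread
        std::printf("ticks charged to this thread: %llu\n",
                    static_cast<unsigned long long>(t.cpu_ticks));
    }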


PostPosted: Fri Oct 05, 2007 5:18 am
Joined: Thu Aug 30, 2007 9:09 pm
Posts: 102
Quote:
Well duh, the memory driver. But how does the memory driver know what to map and where? I thought you weren't going to use a system call for this purpose, just pass a pointer and a length to the target process directly.


Haven't put a huge amount of thought into this yet. Ultimately, for security reasons, the caller needs to explicitly allow the mapping before control is transferred, and the callee then accesses (and thereby implicitly accepts) the shared memory.
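Roughly what I have in mind, sketched below; the Grant structure and the allow_map/has_grant names are made up for illustration, and the real page-table work would of course happen in ring 0:

Code:
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One outstanding permission to map a caller's buffer into a callee.
    struct Grant {
        std::uint64_t target_pid;  // who is allowed to map this region
        const void*   base;        // page-aligned start of the shared buffer
        std::size_t   length;      // length in bytes, rounded up to pages
    };

    std::vector<Grant> pending_grants;

    // Caller side: explicitly allow the mapping before transferring control.
    void allow_map(std::uint64_t target_pid, const void* base, std::size_t length) {
        pending_grants.push_back(Grant{target_pid, base, length});
    }

    // Callee side: touching the buffer is the implicit "accept"; the memory
    // manager maps the page on first access only if a matching grant exists.
    bool has_grant(std::uint64_t pid, const void* addr) {
        for (const Grant& g : pending_grants) {
            const char* b = static_cast<const char*>(g.base);
            const char* p = static_cast<const char*>(addr);
            if (g.target_pid == pid && p >= b && p < b + g.length) return true;
        }
        return false;
    }

    int main() {
        static char buffer[4096];
        allow_map(42, buffer, sizeof buffer);        // caller grants before the call
        return has_grant(42, buffer + 100) ? 0 : 1;  // callee's access is checked
    }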

Quote:
So basically your OS will arbitrarily enforce page-alignment on the user for its own convenience.


Yes. The idea in my mind is that code and static data are statically allocated in a tightly packed fashion, while dynamically allocated memory is provided when the user calls a "malloc" equivalent. The malloc equivalent can align however it pleases, including page-aligned. Since a 64-bit address space provides room for 2^52 (4,503,599,627,370,496) pages, that seemed more than reasonable: it lets you cheaply remap space around instead of copying, bounds checking to within a page is cheap, and such checking prevents accidental access to other memory that is in use.
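For instance, a page-granular "malloc" equivalent could be as simple as the sketch below. The kPageSize, round_to_pages and page_alloc names are made up; a kernel-backed allocator would hand out its own pages, with std::aligned_alloc only standing in here:

Code:
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    // Every request is rounded up to whole pages, so a buffer can later be
    // handed to another process by remapping its pages instead of copying,
    // and bounds checking to page granularity falls out for free.
    constexpr std::size_t kPageSize = 4096;

    std::size_t round_to_pages(std::size_t bytes) {
        return (bytes + kPageSize - 1) & ~(kPageSize - 1);
    }

    void* page_alloc(std::size_t bytes) {
        // std::aligned_alloc requires the size to be a multiple of the alignment.
        return std::aligned_alloc(kPageSize, round_to_pages(bytes));
    }

    int main() {
        void* buf = page_alloc(10000);   // rounds up to 3 pages (12288 bytes)
        std::printf("got %zu bytes at %p\n", round_to_pages(10000), buf);
        std::free(buf);
    }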

Quote:
Right, but you're passing it program-to-program instead of through a kernel message-passing system call. So how the hell does the kernel get a hold of it in the first place?


Didn't you just chastise me for passing that to the kernel before the call?

Quote:
Always lovely to see fools condescending to the wise.


:roll: The wise don't develop their own OS for the sake of mental masturbation. They retire somewhere awesome and wave at the school bus. I would argue we are all fools.

Quote:
... and yet your "memory driver" (as though such a thing existed; you mean memory manager) ...


I distinguish the term only because my implementation is not a daemon, but a library that is called to operate a piece of hardware. I find it odd how we've stuck to names derived from their initial implementation details.

For example, define the one thing that a kernel is. Why is it even a word? Is it the security layer, the HAL, the collection of drivers, the shell, what? Few people draw the same lines, so the word is so ambiguous as to be nearly meaningless. Yet you confront me over my choice of the word 'driver' as if it were some difficult term I had coined myself.

Quote:
The risk of not returning is inherent in the most meagre subroutine call in any Turing-complete language, so I don't see why the operating system should protect against that risk on inter-process calls.


Hmm... processor time hijacking comes to mind, but perhaps that's a risk inherent in calling anything foreign. My line of thought stemmed from the perception that the OS was meant to secure programs from each other. Some risks are inherent, as you say, so perhaps it's really not worth it.

Quote:
Second of all, one accounts for CPU time spent on inter-process portal calls the same way one accounts for it on normal subroutine calls. During a portal call, the current (running) thread (or continuation, in such a system) does not actually change. Instead, it merely moves between address spaces (or whatever other isolation domains you use).


I think you've failed to express how you're keeping track of CPU usage.

I think you've explained Yet Another Way(tm) to context switch.

_________________
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
- C. A. R. Hoare


PostPosted: Sat Oct 06, 2007 12:58 am
Joined: Thu Oct 21, 2004 11:00 pm
Posts: 248
One accounts for CPU usage however one accounts for it in the scheduler. CPU usage during an inter-process procedure call is not fundamentally different from CPU usage running the original process's code.

And I didn't describe Yet Another context-switching method. Yes, I use something similar to a context switch to move a thread between processes when it makes an inter-process call, but that does not constitute a context switch: it does more than a context switch, since it also moves the thread's stack between processes.
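To spell out the difference with another hypothetical sketch (the structures and names are again made up): a context switch picks a different thread to run, whereas the portal transfer keeps the same thread, re-homes it in the callee's address space, and maps its stack there:

Code:
    #include <cstdint>

    struct AddressSpace { std::uint64_t page_table_root; };

    struct Thread {
        AddressSpace* space;
        std::uint64_t stack_base;   // virtual address of this thread's stack
        std::uint64_t stack_pages;  // size of the stack, in pages
    };

    // Placeholder for the page-table work the memory manager would do.
    void map_pages(AddressSpace&, std::uint64_t /*vaddr*/, std::uint64_t /*pages*/) {}

    // The "more than a context switch" part: no scheduling decision is made
    // and no other thread is chosen; the running thread simply changes which
    // address space it executes in, and its stack follows it.
    void portal_transfer(Thread& t, AddressSpace& callee) {
        map_pages(callee, t.stack_base, t.stack_pages);
        t.space = &callee;
    }

    int main() {
        AddressSpace caller{0x1000}, callee{0x2000};
        Thread t{&caller, 0x7fff0000, 4};
        portal_transfer(t, callee);   // same thread, new space, stack mapped over
    }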


PostPosted: Sun Oct 21, 2007 9:53 pm
Joined: Thu Oct 21, 2004 11:00 pm
Posts: 248
I actually read about something called Boxes (look for "Boxes as a Replacement for Files"), but I'm not sure I like it.

A box is a node in a tree. It has an array of bytes as contents, a type signature of some kind, and links to a parent box and child boxes (making a tree of boxes).

The basic operations on boxes are:

copy(source, dest): copies the box named by source into dest.
select(box, child_name): returns a handle to the named child of box that can be used with other box calls. Serves as a kind of open() call, including for the purposes of box creation (though how do actual data and a type get associated with the boxes?).
share(source, dest): rebinds the namespace so that the box at source is bound to the name/handle at dest. Substitutes for link(), bind() (from Plan 9), and mount().

The authors of the paper propose a "null box" that, when copied into a box, makes the destination box delete itself. I think an unshare operation would probably work better.

The authors also propose that boxes be strongly typed. Type conflicts are resolved by rerouting the proposed box operation through a set of "type converters" associated with the namespace.
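Translated into a rough C++ interface, a box might look like the sketch below. This is just my reading of the paper; the types and function names are mine, not the authors':

Code:
    #include <cstdint>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct Box {
        std::string               type_sig;   // the type signature
        std::vector<std::uint8_t> contents;   // the byte-array contents
        Box*                      parent = nullptr;
        std::map<std::string, std::unique_ptr<Box>> children;
    };

    // select(box, child_name): the open()-like call; here it also creates
    // the child if it does not exist yet.
    Box* select_child(Box& box, const std::string& name) {
        std::unique_ptr<Box>& slot = box.children[name];
        if (!slot) { slot = std::make_unique<Box>(); slot->parent = &box; }
        return slot.get();
    }

    // copy(source, dest): copies the contents and type of one box into another.
    void copy_box(const Box& source, Box& dest) {
        dest.type_sig = source.type_sig;
        dest.contents = source.contents;
    }

    // share(source, dest): rebinds a name to an existing box, standing in for
    // link(), bind() and mount(). A real namespace layer would track these
    // bindings per process; here it is just a flat table of aliases.
    struct Namespace {
        std::map<std::string, Box*> bindings;
        void share(const std::string& dest_name, Box& source) {
            bindings[dest_name] = &source;
        }
    };

    int main() {
        Box root;
        Box* child = select_child(root, "metadata");
        child->type_sig = "text";
        Namespace ns;
        ns.share("/n/meta", *child);   // the same box, reachable by a new name
    }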

I think this kind of system looks very good in theory. It could be used to create a 9P-like protocol for uniformly describing operations over a network, and it would certainly resolve the old dichotomy between files and directories (so that metadata could be stored as sub-files of a file).

What does everyone else think?


PostPosted: Sun Oct 21, 2007 10:03 pm
Joined: Sun Jun 18, 2006 7:21 pm
Posts: 260
Crazed123 wrote:
I think this kind of system looks very good in theory. It could be used to create a 9P-like protocol for uniformly describing operations over a network, and it would certainly resolve the old dichotomy between files and directories (so that metadata could be stored as sub-files of a file).

What does everyone else think?


Is providing this type of abstraction worth the performance cost that it will incur???

I think any storage concept that intends to deviate from the standard directory/file concept should look heavily into providing more of a database-like interface. The above "Boxes as a Replacement for Files" sounds like it is trying to sit right in between the two concepts, providing very little advantage over the directory/file concept for virtually the performance cost of the database concept.


PostPosted: Sun Oct 21, 2007 10:27 pm
Joined: Thu Oct 21, 2004 11:00 pm
Posts: 248
Why, exactly, would it have an extra performance cost?


PostPosted: Mon Oct 22, 2007 5:03 am
Joined: Thu Aug 30, 2007 9:09 pm
Posts: 102
:D Wow...

I like that, Crazed. I'm 100% in agreement with this Box thing. The API should be simple, but never so simple that you have to figure out a workaround to delete a Box.

They say the magic number of items in a group for human memory is 6. That's probably sufficient for this too.



PostPosted: Mon Oct 22, 2007 10:59 am
Joined: Thu Oct 21, 2004 11:00 pm
Posts: 248
Actually, I would guess the magic number to be 7. Why else would humans have invented so many groups of 7?

On the matter of the boxes, how exactly would one implement the type converters? How would one turn box operations into a generalized protocol for servers to implement (I think that "general protocol" bit is the Huge Lesson to learn from Plan 9, whatever protocol you end up liking)?

And would it have a performance impact that the byte-stream contents of each box are immutable?
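For the protocol question, what I picture is something 9P-flavoured along these lines; the message names and layout below are entirely hypothetical, just to make the idea concrete:

Code:
    #include <cstdint>
    #include <string>
    #include <vector>

    // One request/response pair per box operation, like 9P's T/R messages.
    enum class BoxMsg : std::uint8_t {
        Tselect = 1, Rselect,   // select(box, child_name) -> child handle
        Tcopy,       Rcopy,     // copy(source, dest)
        Tshare,      Rshare,    // share(source, dest)
        Rerror                  // any request may fail
    };

    struct Message {
        BoxMsg                     type;
        std::uint32_t              tag;      // matches a reply to its request
        std::vector<std::uint32_t> handles;  // box handles referenced by the op
        std::string                name;     // child name, used by Tselect
    };

    // A server would loop on read-message / dispatch / write-reply, exactly as
    // a 9P file server does. Type converters could be interposed on Tcopy
    // whenever the source and destination type signatures disagree.
    Message make_select(std::uint32_t tag, std::uint32_t box, std::string child) {
        return Message{BoxMsg::Tselect, tag, {box}, std::move(child)};
    }

    int main() {
        Message m = make_select(1, 7, "metadata");
        return m.type == BoxMsg::Tselect ? 0 : 1;
    }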


PostPosted: Tue Oct 23, 2007 7:41 am
Joined: Thu Nov 16, 2006 12:01 pm
Posts: 7614
Location: Germany
Being in a hurry, as all too often recently, I read only the first page. I've got two things to say:

Colonel Kernel wrote:
The memory management issues that C and C++ force developers to address are a pain in the butt for most userland programming tasks. I tell you this from my 10 years of experience developing commercial software, mostly in C++.


Strange. I keep hearing about the "memory management issues of C and C++", but from my professional experience (which includes both Java and C++ assignments), they are no worse than keeping your event handlers properly unregistered in a Java/Swing application (forgetting to do so is a sure-fire way to get a memory leak in Java). We'd have to outlaw floating-point maths, too, following your logic, because people keep forgetting it's not precise. While we're at it, outlaw integer maths too, because divisions by zero may happen...

Doing a quick find / grep / wc on my office workstation, I get 257 appearances of "new" in 79968 lines of (multithreading) C++ code, most of them in the server-/client module and where we're wrapping an underlying C API. All the rest is done on the stack, i.e. no "management" issues at all.

I really do not mean to offend you, but most if not all "memory management issues" I came across in C++ are the result of bullsh*t (or missing) design of basic utility classes, most often perpetrated by former Java programmers who don't know how a computer works.

On topic: Everything is an $ABSTRACTION

My answer: Everything is not the same. Trying to squeeze disparate things into a common abstraction will give you headaches. Don't try to find "The One True Abstraction". Find a set of abstractions that is as small as possible, but no smaller.

_________________
Every good solution is obvious once you've found it.


PostPosted: Tue Oct 23, 2007 9:45 am
Joined: Tue Oct 17, 2006 6:06 pm
Posts: 1437
Location: Vancouver, BC, Canada
Solar wrote:
Being in a hurry as all too often recently, I read only the first page.


Such habits only get you into trouble. ;)

Quote:
Strange. I keep hearing about the "memory management issues of C and C++", but from my professional experience (which includes both Java and C++ assignments), they are no worse than keeping your event handlers properly unregistered in a Java/Swing application (a sure-fire way to have a memory leak in Java).


I'm not just talking about proper cleanup of resources, although that is an important consideration. Here are some of my thoughts on the Java/C++ comparison you bring up...

In Java, the case you mentioned is about the only case where something approximating a memory "leak" can occur (it is really "unintentional object retention", which is subtly different). Other than such cases of caching references, memory leaks are impossible. Also, there is a pre-canned solution to this problem: weak references. In C++, I can use smart pointers to solve a lot of the problems that GC solves in Java, but:
  • If I am porting my code to old and crappy platforms with old and crappy compilers, I won't be able to use Boost so I will probably have to write my own.
  • I've had to implement a lot of really nasty APIs that are defined in such a way that smart pointers don't help for the "large" objects, because their lifetimes are dictated by that API (they are not "shared until no one references them"). Using heap allocation for "small" objects is detrimental to performance (I expand on this later below).
  • In Java, the cost of using "new" is very, very low compared to using "new" in most C/C++ run-time environments. Smart pointers don't help to mitigate this.
  • GC is built into Java and programmers largely don't have to think about it. Even the lowliest C++ programmer who comes into contact with smart pointers needs to understand something of the mechanics of how they work.
Despite all this, I will grant you one thing: deterministic destruction in C++ is its greatest strength, because it enables RAII. Java sorely lacks this (try/finally does not cut it). The "using" block in C# is sort of a half-step in the right direction. However, I see no reason to be forced to use a mechanism like RAII for memory as well as non-memory resources. RAII is a great way to unlock mutexes, close files, close database connections, etc., but I hate having to use it all the time for memory.
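To illustrate for anyone following along, here is RAII doing the non-memory work I actually want it for; a minimal sketch using std::lock_guard and std::unique_ptr for brevity (FileCloser is my own helper, not anything standard):

Code:
    #include <cstdio>
    #include <memory>
    #include <mutex>
    #include <stdexcept>

    std::mutex g_mutex;

    // Custom deleter so a unique_ptr can own a C FILE* handle.
    struct FileCloser {
        void operator()(std::FILE* f) const { if (f) std::fclose(f); }
    };

    void update_shared_state() {
        std::lock_guard<std::mutex> lock(g_mutex);            // unlocked on every exit path
        std::unique_ptr<std::FILE, FileCloser> file(std::fopen("data.txt", "r"));
        if (!file) throw std::runtime_error("open failed");   // lock still released
        // ... read from file.get() and update the shared state ...
    }   // fclose() and unlock() both run here, even if an exception is thrown

    int main() {
        try { update_shared_state(); } catch (const std::exception&) { /* report */ }
    }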

Memory management issues go beyond this, however. I'll give you one example I run into every time I try to design an object in a graph that points to "child" objects: there is no way, using the STL, to provide an iterator over those child objects that doesn't reveal the type of container being used. I realize that because of typedefs this is not really a big deal, but it's an easy example to understand (I have more, but the context is deeply rooted in what I'm doing at work, which would be difficult and/or illegal for me to explain). The reason this doesn't work is the "slicing problem": even if there were an iterator base class, you couldn't return it by value without slicing it. You can't return it by reference either, because then the iteration state would be part of the parent object and you would lose the ability for multiple threads to iterate over the collection of child objects simultaneously.

So, why not allocate the iterator on the heap and return a pointer to it? Even if you use an auto_ptr or some other smart pointer, it's crazy to do this because heap allocation is slow (and, because of the lack of GC, deallocation is synchronous as well as slow). If we were talking about "big" objects that are created rarely and live a long time, then this wouldn't be a big deal, but something like an iterator is small and is expected to be used frequently. That is why iterators are returned by value in C++, but anything returned by value in C++ forces you to abandon polymorphism. All because we're using the same crappy heap allocation routines from 30 years ago that are optimized for allocating large arrays in C. Yes, I could write my own allocator, in theory. No, I don't have the time or budget to do so. This is why C++ is a PITA for me.
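Here is the typedef workaround I alluded to, as a sketch (Parent, Child, and the member names are invented): callers never spell out the container, but the iterator type still comes from it, and there is no cheap polymorphic iterator to return by value.

Code:
    #include <string>
    #include <vector>

    struct Child { std::string name; };

    class Parent {
    public:
        // The one concession: the iterator type is a typedef, so swapping the
        // container means editing this class only, not every caller.
        typedef std::vector<Child>::const_iterator child_iterator;

        void add_child(const Child& c)        { children_.push_back(c); }
        child_iterator children_begin() const { return children_.begin(); }
        child_iterator children_end() const   { return children_.end(); }

    private:
        std::vector<Child> children_;   // the container choice, hidden behind the typedef
    };

    int main() {
        Parent p;
        p.add_child(Child{"a"});
        p.add_child(Child{"b"});
        // Each caller (or thread) holds its own iterator; no iteration state
        // lives inside Parent, so concurrent read-only traversals are fine.
        for (Parent::child_iterator it = p.children_begin(); it != p.children_end(); ++it) {
            (void)it->name;
        }
    }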

Quote:
We'd have to outlaw floating point maths, too, following your logic, because people keep forgetting it's not precise. While we're at it, outlaw integer maths too because divisions by zero may happen...


And how often do problems such as those occur, compared to some issue with performance, exception safety, scalability, or security caused by a memory-management goof-up?

Quote:
Doing a quick find / grep / wc on my office workstation, I get 257 appearances of "new" in 79968 lines of (multithreading) C++ code, most of them in the server-/client module and where we're wrapping an underlying C API. All the rest is done on the stack, i.e. no "management" issues at all.


You may be fortunate enough to develop in an environment where you have complete control over the threading and memory model. I spend most of my time writing libraries (database drivers) that must implement a crappy 15-year-old C interface that has a well-defined memory management model, but one that conflicts terribly with its threading model, which was bolted on as an afterthought. In my current project for example, shared pointers have only limited usage. Most of the shared objects in the library have their lifetimes managed by the application using the library, but because of the crazy hacked-on multithreading, we cannot assume anything about which application threads make calls to free those objects and when. Many of these objects have bi-directional links to each other, so it is difficult to break the links atomically. I have basically had to implement a domain-specific kind of transactional memory to make this work (my implementation is based on the idea of record locking in databases).

If you're thinking that all these problems are caused by a bad interface, you're right. However, I don't get to pick and choose which industry-standard APIs I have to implement. The reason I pound on C/C++ so much is more to do with bad interface design that pervades architectures rather than nitty-gritty coding issues. Like I said, if you're lucky enough to write applications or servers in C++ and you can control what platforms you have to support, your life will be way easier than mine.

Quote:
I really do not mean to offend you, but most if not all "memory management issues" I came across in C++ are the result of bullsh*t (or missing) design of basic utility classes, most often perpetrated by former Java programmers who don't know how a computer works.


You see, the sad fact of life that I've come to realize is that there are a lot of those bit-ignorant former Java programmers out there. It sucks, but it's the truth. I've tried to explain a lot of the conventional C++ wisdom to such people -- the kind of thing that you and I take for granted every day. It's hard, because there is just so much background required to understand why things in the language are done the way they are. Unfortunately, we can't rely on the expertise of developers because there just aren't enough "expert" developers.

In summary, why should I have to re-invent the wheel all the time just to provide a certain level of abstraction so that people can think about the problem domain instead of memory management performance issues, when the language, libraries, and run-time environment can be doing that for me? (Yes, there are C++ libraries. No, they are not nearly as portable as you think they are.)

_________________
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!


PostPosted: Tue Oct 23, 2007 10:31 am
Joined: Thu Nov 16, 2006 12:01 pm
Posts: 7614
Location: Germany
Colonel Kernel wrote:
I spend most of my time writing libraries (database drivers) that must implement a crappy 15-year-old C interface...


...which you couldn't wrap any better in Java either. ;)

Quote:
You see, the sad fact of life that I've come to realize is that there are a lot of those bit-ignorant former Java programmers out there.


Call me a stubborn bastard, but I refuse to give in to Java lore just because there are a lot of them, just the same as I refuse to give in to Windows / Linux lore just because there are a lot of those machines around.

And besides, originally we talked about OS design, and honestly - if a developer cannot work correctly unless he's got a "safe" language runtime and GC to take him by the hand, I don't want him near OS code anyway. But that's just my opinion. ;)



PostPosted: Tue Oct 23, 2007 3:15 pm
Joined: Tue Oct 17, 2006 6:06 pm
Posts: 1437
Location: Vancouver, BC, Canada
Solar wrote:
Colonel Kernel wrote:
I spend most of my time writing libraries (database drivers) that must implement a crappy 15-year-old C interface...


...which you couldn't wrap any better in Java either. ;)


True... I live on the cutting edge of deprecated technology. :P If there were a better alternative, perhaps such cruft would eventually disappear...

Quote:
Call me a stubborn bastard, but I refuse to give in to Java lore just because there are a lot of them, just the same as I refuse to give in to Windows / Linux lore just because there are a lot of those machines around.


Ok, you're a stubborn bastard. ;)

As an aside, I'm not saying Java is the greatest (actually I think it sucks on many levels) -- I just don't think things can keep going on the way they have been for much longer.

Quote:
And besides, originally we talked about OS design, and honestly - if a developer cannot work correctly unless he's got a "safe" language runtime and GC to take him by the hand, I don't want him near OS code anyway. But that's just MO. ;)


Good point, which is why (in some other thread... I'm beginning to lose track) I said that a type-safe systems programming language is probably not as cost-effective for something small like a kernel as good peer review and hiring smart kernel developers. However, this presupposes a very small kernel that is tractable enough to review.



PostPosted: Tue Oct 23, 2007 7:46 pm
Joined: Thu Oct 21, 2004 11:00 pm
Posts: 248
And a small kernel is best facilitated by extensibility, which is best facilitated by a common language, protocol, or set of abstractions that both the kernel and user-space can use as first-class citizens.

Which brings us back to our original subject.


PostPosted: Wed Oct 24, 2007 5:16 am
Joined: Thu Aug 30, 2007 9:09 pm
Posts: 102
Quote:
And a small kernel is best facilitated by extensibility


I tend to disagree. I think that a small kernel is best facilitated by re-evaluating what a kernel needs to do.

Too many people are putting graphics, keyboard, CPU, memory driver, port security, disk driver, BIOS interaction, PCI, initialization code, and all the rest into one freakin' program. The problem is that we're extending our reach too far to do any one thing well.

A modern computing system needs many parts. Trying to write all of them before you can call it a *something* is a failure of our community. We need to scale our ambitions down from developing whole operating systems to developing schedulers, memory managers, and so on that can inter-operate.

You cannot reduce the primitives below the level of hardware interaction. The good thing is that all hardware is interacted with either through a port or through a memory region. The other good thing (if you care about security) is that the security program can then reside below that layer, untouched by the other parts.
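Those two doors look like this in practice; a sketch for x86 with GCC/Clang inline assembly, where the port number and the MMIO address would be placeholders rather than real devices:

Code:
    #include <cstdint>

    // Port I/O: the in/out instructions, reachable only from ring 0 (or with
    // an I/O permission bitmap granted by whatever sits below).
    static inline void outb(std::uint16_t port, std::uint8_t value) {
        asm volatile("outb %0, %1" : : "a"(value), "Nd"(port));
    }

    static inline std::uint8_t inb(std::uint16_t port) {
        std::uint8_t value;
        asm volatile("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    }

    // Memory-mapped I/O: a device register is just a volatile memory location
    // that the security layer chooses to map (or not) into a component.
    static inline void mmio_write32(std::uintptr_t addr, std::uint32_t value) {
        *reinterpret_cast<volatile std::uint32_t*>(addr) = value;
    }

    // A security program sitting below everything else only has to decide,
    // per component, which ports and which physical regions it may touch.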



PostPosted: Wed Oct 24, 2007 1:00 pm
Joined: Thu Oct 21, 2004 11:00 pm
Posts: 248
That's what I mean. You make the operating system as a whole extensible in such a way that operating-system extensions can be placed outside the kernel.

