 Post subject: Re: Describe your dream OS
PostPosted: Thu Jun 20, 2019 4:03 pm 
Member

Joined: Mon Feb 02, 2015 7:11 pm
Posts: 898
bzt wrote:
(...) if it sounded condescending, that's only because I felt attacked while defending the truth.


No one is attacking you... You might want to spend some time pondering why you feel that way.

bzt wrote:
Frankly I don't think OSDev forum is the place for explaining basic programming techniques, but I apologize if I sounded insulting.


I agree. But then why do you spend so much energy explaining basic programming techniques? :)

No hard feelings, but I don't think it serves anyone to have these very long threads where people basically repeat the same arguments over and over in slightly different words.

Overall I do appreciate what you contribute to this forum... But perhaps you don't need to convince everyone of everything. Less experienced programmers might not understand what you are talking about, and being aggressive about it is not helping anyone, including you.

_________________
https://github.com/kiznit/rainbow-os


 Post subject: Re: Describe your dream OS
PostPosted: Thu Jun 20, 2019 8:46 pm 
Member

Joined: Thu Oct 13, 2016 4:55 pm
Posts: 1584
kzinti wrote:
You might want to spend some time pondering why you feel that way.
No need; I know exactly why they are attacking me. It started in the font rendering library topic.
kzinti wrote:
I agree. But then why do you spend so much energy explaining basic programming techniques? :)
As I have already stated, because I'd like to think there's hope for nullplan to learn. The question is, why don't you help him too? (Please don't take offense at this in any way.)
kzinti wrote:
No hard feelings, but I don't think it serves anyone to have these very long threads where people basically repeat the same arguments over and over in slightly different words.
Agreed. That's why I stopped answering Octocontrabass.
kzinti wrote:
Overall I do appreciate what you contribute to this forum... But perhaps you don't need to convince everyone of everything. Less experienced programmers might not understand what you are talking about, and being aggressive about it is not helping anyone, including you.
You're right. It's just in my nature to help people understand things; it always has been. I've only become more passionate about it in the last 5 years, since I'm surrounded by incredibly misguided and stupid Scientologists. It hurts to see how hopeless, lost and brainwashed they are. You'd be the same if you were in my shoes. (I'm not suggesting that nullplan is a Scientologist, just that he's clearly misguided too.)

Now after this little intermezzo, I hope we can go back to the original topic. :-)

Cheers,
bzt


 Post subject: Re: Describe your dream OS
PostPosted: Thu Jun 20, 2019 11:41 pm 
Member

Joined: Wed Aug 30, 2017 8:24 am
Posts: 1593
iansjack wrote:
The idea that dynamic libraries lead to a larger memory footprint is plainly ridiculous.
If you have a library that is used by only one process in the system, then the shared library does eat up more memory, due to PIC overhead, PLT, GOT, and the fact that the linker can't throw the unused object files away (like it can with static linking).

If the library is used by more than one process with different programs, then dynamic libraries can be a memory benefit if the shared amount of memory is more than the overhead. Which, for small libraries, isn't a given.

iansjack wrote:
Just imagine if every program that used the standard C library were statically linked.
You mean "imagine http://sta.li? Anyway, while the standard C library is an oft-used one, next to no-one uses all of it. In fact, there are parts of it almost nobody uses. But with dynamic linking, you get to load functions like nftw() and hsearch() into memory regardless.

Speaking of the C library, musl often uses weak linking tricks to get the footprint of statically linked programs down (e.g. you don't get the function that runs all the atexit() handlers if you don't use atexit()). Well, with dynamic linking, those tricks don't work. You always get the atexit() stuff with it.
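Roughly, the trick works like this (a made-up sketch of the weak-symbol technique, not musl's actual source; __run_atexit_handlers and my_exit are invented names):
Code:
/* exit.c -- sketch of the weak-symbol trick. The strong definition of
 * __run_atexit_handlers would live in atexit.c. If a statically linked
 * program never calls atexit(), atexit.o is never pulled in, the weak
 * no-op below wins, and the handler-running code stays out of the binary.
 * A dynamic libc.so always carries both. */
#include <stdlib.h>

static void noop(void) {}

/* Overridden by the strong definition in atexit.o, if that file gets linked. */
void __run_atexit_handlers(void) __attribute__((weak, alias("noop")));

_Noreturn void my_exit(int status)
{
    __run_atexit_handlers();   /* no-op unless atexit() was ever used */
    _Exit(status);             /* stand-in for the real termination path */
}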

bzt wrote:
You still don't understand how dynamic libraries work and why they are used.
I'm pretty sure I know how dynamic libraries work in more detail and on more architectures than most on this forum. As to why they are used: because everyone else is doing it. Because it's the hip new thing, at least compared to static linking.

No, the idea sounds alluring: you get to save memory for code by loading reused library functions only once. It sounds so nice. Unfortunately, it isn't quite so simple, for all the reasons already laid out. Yes, you load those functions only once, but you also get to load all the other functions in the library, and you get to do it with larger code and with the structures needed to make the whole thing work.

bzt wrote:
Btw, advantages and disadvantages are not a matter of subjective opinion. It's a matter of measurements, and comparison of objective test results.
As soon as multiple dimensions get involved, you get to arbitrarily decide which of these is more important, and to precisely what degree. Don't tell me you have never heard of a tradeoff before. If I use the lookup-table version of the CRC32 algorithm, I am expending 1 kB of additional memory, but speeding the algorithm up eightfold. Dynamic libraries are another space-time tradeoff. Only in this case, the tradeoff is not so clear (there are both negative and positive aspects to dynamic linking in both the space and the time domain, and whether the positive or the negative side wins out is a matter of the precise test case).
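For reference, the table-driven version I mean is the classic one (a sketch; reflected CRC-32 with polynomial 0xEDB88320, call crc32_init() once before use): the 256-entry table costs 1 kB, and the inner loop consumes a byte per iteration instead of a bit.
Code:
#include <stddef.h>
#include <stdint.h>

/* Classic table-driven CRC-32. The 1 kB table buys one byte per loop
 * iteration instead of one bit -- the space-time tradeoff above. */
static uint32_t crc_table[256];

static void crc32_init(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int k = 0; k < 8; k++)
            c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
        crc_table[i] = c;
    }
}

static uint32_t crc32(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t c = 0xFFFFFFFFu;
    while (len--)
        c = crc_table[(c ^ *p++) & 0xFFu] ^ (c >> 8);
    return c ^ 0xFFFFFFFFu;
}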

Measurements are complicated because they depend on so many environmental factors. Are the directories that are searched in the page cache? Are the libraries and executables? How many paths are there in LD_LIBRARY_PATH? How fast is the hard drive in question? And how fast is the memory? What system call mechanism is in use? How fast is mmap()? How fast is the page fault handler? How much of the library does your test program use?

All of these determine large parts of which version is faster.

But since a verdict on which is better is so contingent on the environment, no blanket statement like "static libs are better" or "shared libs are better" can be correct. It can only be "shared libs are better in this case, for that machine, under these conditions". But most people don't want that, because then they'd have to re-evaluate the decision every single time they make it. Which is hard, so nobody does it. So blanket rules it is, and since dynamic linking is the hot new thing, that's what everyone is going with.

bzt wrote:
If what you're saying were true, nobody would have implemented DLLs or SOs. But here's the news: they have (check out Win, Linux, MacOSX, BSDs, literally ANY mainstream OS, and BeOS, Haiku, Plan9, Ultrix, VMS, Solaris, ReactOS, just to name a few non-mainstream ones, and all successful hobby OSes if you don't believe me).
Argumentum ad populum (also known as the "million flies" argument, which you can google at your discretion). By that logic, we should all use Windows (just look at how many people use that). Also, ELF shared libraries are light on the kernel, merely requiring support for PT_INTERP; the rest is in user space.
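To show how little metadata is involved, here is a user-space sketch that does the same PT_INTERP scan a kernel would do before handing control to the dynamic linker (assumes Linux <elf.h>, 64-bit ELF only, minimal error handling):
Code:
/* interp.c -- print the dynamic linker an executable requests.
 * A kernel does essentially this scan, then loads and jumps to the
 * interpreter instead of the program's own entry point. */
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1) { fclose(f); return 1; }

    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
        if (fread(&ph, sizeof ph, 1, f) != 1) break;
        if (ph.p_type == PT_INTERP) {
            char interp[256] = {0};
            fseek(f, (long)ph.p_offset, SEEK_SET);
            fread(interp, 1,
                  ph.p_filesz < sizeof interp ? ph.p_filesz : sizeof interp - 1, f);
            printf("PT_INTERP: %s\n", interp);
        }
    }
    fclose(f);
    return 0;
}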

Interesting that you should list Plan 9 there, because the people behind it have said in the past that they only included it because people expect it now. Here's more thorough documentation of this: http://harmful.cat-v.org/software/dynamic-linking/

Interesting too that you should list Windows there. Ever since Windows Vista, there has been a directory in the Windows directory called "winsxs". Look at it. Take it all in. Gigabytes upon gigabytes of tiny little DLLs, most of them used by only a single program on the system. Open up the list of installed programs and have a gander at all the versions of "Microsoft Visual C++ Redistributable" you have installed.

While many things depend on the environment, a shared lib used by only a single program is always a loss, in both the space and time departments. And Windows (of late) makes very sure that most DLLs will only be used by a single program.

bzt wrote:
Now after this little intermezzo, I hope we can go back to the original topic.
Yes, we should. This was my last reply on the matter.

_________________
Carpe diem!


 Post subject: Re: Describe your dream OS
PostPosted: Fri Jun 21, 2019 1:22 am 
Member

Joined: Sat Mar 31, 2012 3:07 am
Posts: 4591
Location: Chichester, UK
nullplan wrote:
You mean "imagine http://sta.li?

I should perhaps have made it clear that I was talking about full-featured OSs rather than minimal implementations. I can see that static linking has advantages in very restricted situations.


 Post subject: Re: Describe your dream OS
PostPosted: Fri Jun 21, 2019 4:54 pm 
Member

Joined: Thu May 17, 2007 1:27 pm
Posts: 999
The most convincing advantage of shared libraries is that they are much easier to distribute. Yes, DLL hell is a well-known concept, but it is still easier to deal with than relinking (and possibly even recompiling, since most build systems are not very flexible w.r.t. dependency changes) the entire system when the C library changes (e.g., due to a security fix). This is already clearly visible on smallish hobby OSes like managarm, which only builds around 50 packages. For distributions with 10000s of packages, it is of tremendous importance -- they just cannot push out updates fast enough without it.

The speed difference is negligible (well, maybe it isn't if you call through the PLT in a tight loop) and the space difference is negligible for most programs (but for some, it is huge: a statically linked LLVM installation with debugging info is >25GiB and linking against a static LLVM lib takes minutes for GNU ld unless your machine is very fast).

_________________
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].


 Post subject: Re: Describe your dream OS
PostPosted: Sun Jun 23, 2019 4:05 pm 
Member

Joined: Fri Feb 17, 2017 4:01 pm
Posts: 640
Location: Ukraine, Bachmut
Is it possible with ELF to export symbols from a non-PIC executable, rather than from a shared object?

_________________
ANT - NT-like OS for x64 and arm64.
efify - UEFI for a couple of boards (mips and arm). Suspended due to the loss of all the target boards (russians destroyed our town).


 Post subject: Re: Describe your dream OS
PostPosted: Tue Jun 25, 2019 8:36 pm 
Member

Joined: Mon May 22, 2017 5:56 am
Posts: 812
Location: Hyperspace
*sigh* Once again my PTSD causes a massive Internet flame war. I'm not sure whether to :cry: or :twisted: honestly! :lol:

I remember now, it was the organizational aspects which got shared libraries into my bad books. Back when I was actively administering Linux installations, shared library mismatches were seriously not fun. I burned out on it pretty hard. It was a long time ago now.

A note regarding Plan 9: I was quite heavily involved with it for over 10 years, and... maybe once heard something about a dynamic linker for it. Dynamic linking is not used for any part of the normal system, and I've never seen anything about it in the distributed files, including manuals and papers. I imagine the dynamic linker was gone long before I started using it, probably before it was open-sourced in 2000 or so.

As for adminning Plan 9... meh, I've had ups and downs. Compiling the entire system takes less than an hour and is usually successful. Problems are often trivial, but I guess they do bring everything to a halt while they're fixed. Wait, no... you can get a previous version of the offending binary out of "the dump", and carry on with that. Hmm... I can recall one instance where that wouldn't have worked, but AFAIR it wasn't needed. I guess it's much like finding the right library version for some old program. Heh, and just like that one instance, an old library version won't help much if a kernel interface has changed. Anyway, I think the real reason the problems are relatively light is because the entire system is one surprisingly small package. This is, of course, also very limiting, which has something to do with why I barely use it any more. Possibly related: those guys measure success differently, especially der 9fronten. ;)

You know what? My actual OS dream for a long time was for Plan 9 to be everything I wanted. I did a lot with it, but in certain crucial areas I couldn't stretch either it or my comfort zone far enough for the two to overlap. The worst part is the security, which is intentionally badly documented and affects almost everything else because Plan 9 is all about the network. I had no access to the dump in one installation, and I still don't know why.

_________________
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie


 Post subject: Re: Describe your dream OS
PostPosted: Mon Jul 08, 2019 3:38 pm 
Member

Joined: Mon Jun 05, 2006 11:00 pm
Posts: 2293
Location: USA (and Australia)
bzt wrote:
eekee wrote:
Static linking was the touchstone which made me consider all this. I concluded I'm not opposed to static linking. It makes program execution much simpler and faster.
And I disagree again :-) Consider this: there's a huge shared library to support the GUI (like Qt); let's say, for the sake of argument, it's 256M. There are two applications, A and B, each 1M. With static linking, your OS would need to load 514M into RAM. With shared linking, you only load the lib once, when you start A (same speed as with static linking). Now when you start B, you only need to load an additional 1M, because the 256M lib is already in memory. Thus, loading 258M in total is much, much faster, and dynamic linking beats static linking in program execution time. The time required by the dynamic linker to patch GOT entries is insignificant compared to accessing and reading sectors. It is more complex than static linking (which has absolutely no run-time requirement at all), but it's still relatively simple (applying some relocation offsets is not rocket science).

Also maintenance is impossible with single binaries. If you install a fix for a shared library (like fixing a buffer overflow in jpeg comments), all programs using that library will benefit from the fix at once. With static linking you would have to recompile and reinstall ALL applications.

The only downside with shared libs is that developers and maintainers are incompetent, and they often can't guarantee backward API compatibility within a major version. This can be solved by allowing different versions of the same library to be installed at the same time (like Gentoo does), or by rejecting libraries that break compatibility from the repo (like Debian tries to do with its release cycles).


If you're shipping a statically linked binary, even a simple form of dead-code elimination should eliminate most of that 256MB.
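For example (a toy sketch; -ffunction-sections/-fdata-sections and --gc-sections are the usual GCC/Clang way to get per-function dead-code elimination at link time, and giant_toolkit_blob is a made-up stand-in):
Code:
/* dce.c -- toy illustration of link-time dead-code elimination.
 * Build, say, with:
 *   cc -O2 -ffunction-sections -fdata-sections dce.c -Wl,--gc-sections -o dce
 * Nothing reachable from main() references giant_toolkit_blob(), so
 * --gc-sections drops its section (and the 1 MB of data it drags in) from
 * the final binary. A dynamically linked library would be mapped in full
 * regardless of how little of it the program touches. */
#include <stdio.h>

void giant_toolkit_blob(void)
{
    static const char widgets[1 << 20] = {'x'};   /* pretend this is the other 255MB */
    printf("%c\n", widgets[0]);
}

int main(void)
{
    puts("only code reachable from main() survives --gc-sections");
    return 0;
}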

I agree that a completely statically linked world isn't good either. Imagine if the OS updates the UI framework while different programs remain built against UI framework v1, UI framework v2, UI framework v3, etc. By dynamically linking the UI library, you can do things like add screen reader support to your UI framework, and then all programs using your UI framework automatically get screen reader support.

Modern operating systems have package managers or app stores. You could use these for hosting libraries too, and when you go to install or update a program, the package manager links the program together at that point.

(Or - your package manager server could do this - if you upload a new version of a library, it attempts to rebuild all dependencies?)

_________________
My OS is Perception.


 Post subject: Re: Describe your dream OS
PostPosted: Mon Jul 08, 2019 3:55 pm 
Member

Joined: Mon Jun 05, 2006 11:00 pm
Posts: 2293
Location: USA (and Australia)
My dream OS would be near-instant. A complaint I have against Windows is that it is always preparing: "Preparing to uninstall", "Preparing to shut down", "Preparing to copy".

For example, shutting down:
Prompt for unsaved data, make sure the disk/network is flushed. Then turn off the power. Other than the UI prompt, there's no reason this should take more than a second, unless your disk driver has a gigabyte of data cached.

Starting up:
Load a minimal set of drivers to show me a desktop where I can choose what program to launch. The rest can be lazily loaded (or even background loaded if I explicitly let it).

Uninstalling/copying:
Just start deleting/copying. Don't try to estimate beforehand. I'd rather see "deleted 105MB..." than "deleted 105MB/2GB" if it means saving 20 minutes of "calculating".

Other things:

Applications should be self-contained. Everything (the binary, settings, etc.) in one directory. I get shared libraries, but a good package manager/OS should reference-count shared libraries and uninstall them when nothing depends on them. There should be no dangling config files anywhere. If I uninstall a program, my device should be in the same state as if I had never installed the program.

There's a lot we can learn from mobile operating systems, such as permissions. I like prompts such as "Do you want to give Camping Handbook access to your location?" With "Do you want to give Super Duper Magic Camera access to your camera and photos?", the user should have the granularity to pick which permissions to grant (with programs gracefully handling being denied permission).

_________________
My OS is Perception.


 Post subject: Re: Describe your dream OS
PostPosted: Mon Jul 08, 2019 5:21 pm 
Member

Joined: Mon May 22, 2017 5:56 am
Posts: 812
Location: Hyperspace
I started another thread for complaining about dynamic linking, but in brief:

MessiahAndrw wrote:
I agree that a completely statically linked world isn't good, because imagine if the OS updates the UI framework and different programs used UI framework v1, UI framework v2, UI framework v3, etc.

Uh... I don't think you thought that through. Different major versions will have different APIs and thus different libraries. If the differences are minor, they shouldn't affect existing binaries, but sometimes they will, causing maintenance emergencies on updates.

MessiahAndrw wrote:
By dynamically linking the UI library, you can do things like add screen reader support to your UI framework, then all programs using your UI framework automatically get screen reader support.

Hmm... screen reader support mostly seems sufficiently isolated from program function except for when it wants the program to scroll -- which it would have to if the program only passes the visible portion of the text to the library to display. Depending on library API design, there may be no way for the screen reader to know which scrollbar goes with the displayed text. You can't fix API design by changing the library alone!

My dream OS will not ever have maintenance emergencies caused by upgrades to one part affecting another. No, actually, my dream OS will not ever have maintenance emergencies! :D My dream OS will not ever exist, but eliminating shared libraries is a step towards it.

_________________
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie


 Post subject: Re: Describe your dream OS
PostPosted: Tue Jul 09, 2019 2:14 am 
Member

Joined: Mon Jul 25, 2016 6:54 pm
Posts: 223
Location: Adelaide, Australia
eekee wrote:
I started another thread for complaining about dynamic linking, but in brief:

MessiahAndrw wrote:
I agree that a completely statically linked world isn't good, because imagine if the OS updates the UI framework and different programs used UI framework v1, UI framework v2, UI framework v3, etc.

Uh... I don't think you thought that through. Different major versions will have different APIs and thus different libraries. If the differences are minor, they shouldn't affect existing binaries, but sometimes they will, causing maintenance emergencies on updates.

I think you're underestimating the advantages of being able to automatically pick up fixes in later minor versions of libraries. A minor version update could easily have an identical API but important security fixes in the implementation.


 Post subject: Re: Describe your dream OS
PostPosted: Sat Aug 24, 2019 1:15 am 
Member

Joined: Tue Jan 02, 2018 12:53 am
Posts: 51
Location: Australia
An often overlooked fact is that an ideal OS design can also require an ideal hardware design. For example, if the hardware lacks certain features or enforces undesirable ones, it may be impossible to implement your ideal OS on top of it. My dream OS requires the system to have non-volatile RAM, which greatly simplifies almost all aspects of the kernel. Simplicity and performance are the two primary goals of the OS. Some of its key features are:

1. Monolithic/exokernel hybrid.
2. A flat file system. No directories, just a flat, linear list of all files on the system. A file, by the way, is just a named region of memory.
3. Fixed-priority pre-emptive scheduling (a rough sketch of this follows the list).
4. Single address space. All addresses are physical, so pointers and data structures can be directly and trivially shared between different tasks.
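A purely illustrative sketch of item 3, assuming a small fixed task table (struct task, switch_to() and schedule() are invented placeholders, not from any real kernel):
Code:
/* sched.c -- sketch of fixed-priority pre-emptive scheduling: every task
 * has a priority fixed at creation, the scheduler always runs the
 * highest-priority runnable task, and a higher-priority task becoming
 * runnable pre-empts the current one. */
#include <stddef.h>

#define MAX_TASKS 64

struct task {
    int   priority;   /* fixed at creation; larger number = higher priority */
    int   runnable;
    void *context;    /* saved registers, stack pointer, ... */
};

static struct task tasks[MAX_TASKS];
static struct task *current_task;

static void switch_to(struct task *next)
{
    /* In a real kernel this is the arch-specific context switch;
     * stubbed out so the sketch stands on its own. */
    (void)next;
}

static struct task *pick_next(void)
{
    struct task *best = NULL;
    for (size_t i = 0; i < MAX_TASKS; i++)
        if (tasks[i].runnable && (!best || tasks[i].priority > best->priority))
            best = &tasks[i];
    return best;
}

/* Called from the timer interrupt, or whenever a task becomes runnable. */
void schedule(void)
{
    struct task *next = pick_next();
    if (next && next != current_task) {
        current_task = next;
        switch_to(next);        /* pre-empt whatever was running */
    }
}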


 Post subject: Re: Describe your dream OS
PostPosted: Sun Aug 25, 2019 7:09 am 
Member

Joined: Mon May 22, 2017 5:56 am
Posts: 812
Location: Hyperspace
Qbyte wrote:
An often overlooked fact is that an ideal OS design can also require an ideal hardware design. ... My dream OS requires the system to have non-volatile RAM

That reminds me of something. :) Suspending laptops, keeping the RAM powered, has been possible since the mid or maybe early 90s. I once had a 286 laptop which was very nice that way, instant suspend, instant wake. Later, I had a 2001 iBook which also had no trouble suspending, although it was a little slower to wake up. I don't recall the quality of Windows support for suspending in those days, but Linux had a lot of trouble in the early 00s because of hardware issues. Some drivers could successfully restore device state after suspend, others couldn't. Some hardware could be shut all the way down, then brought back up as if power-cycled, but on other hardware, even this wasn't possible. (I wonder how that hardware behaved on reboot!) It's easier if you can treat resuming as like power-on, but modern hardware takes time to initialize at boot.

I've always liked nvram architectures. Around 1990, I planned to make one; just an 8-bit with two 32KB SRAM chips, battery backed. No ROM, one of the SRAMs would have been detachable with its battery to be programmed externally, and the write line gated to write-protect a portion of it. I never built it, which is just as well; I didn't know enough to do it right.

These days, I'm thinking I want to make RAM just a cache for the disk, partly to achieve the nvram effect. With the CPU L1 caches caching the L2 cache, the L2 cache caching the RAM, RAM caching the SSD, SSD caching the HDD, and the HDD caching cloud storage, "it's [caches] all the way down!" ;) That order wouldn't be fixed, the SSD could cache cloud or LAN storage, LAN/NAS could cache cloud, or whatever. What scares me, though, is Rob Pike's statement, "there are no simple cache bugs." Pike has some experience with caching, having created a text editor with 5 levels of caching within itself. He worked in a lab which developed and used a cached WORM filesystem; HDD caching an optical jukebox, so I'm sure he knew about its bugs too.

You know, putting together the various conclusions I've come to while thinking about this, I'm not sure non-volatility itself simplifies much of anything, but related things do. APLX (perhaps any APL) stores data simply despite having two irritating layers of volatility. (I add the epithet because I once spent a whole day writing up plans in a variable, then woke up the next day to find the computer had rebooted overnight, and I'd forgotten to do one or both of saving the variable to the workspace or the workspace to the disk! :roll: :lol:) I bring it up because, despite this double-volatile insanity, it's simple but powerful. You can edit variables directly, entering as much or as little data as needed or wanted. All the variables functions and classes are stored to disk together, as part of the workspace image; there's no filesystem separate from them. Well, there is, to hold the different workspaces, but each workspace is like an installation of a simple but powerful operating system. Where are the programs? APLX's tutorial introduces functions by calling them programs. :D It was mind-blowing to me, but it's quite correct: type a function name, stuff happens. It's indistinguishable from a program in any command-line system, therefore it is a program. :D Functions interact with each other much like Unix programs or Plan 9 filesystems, but very much unlike those two revered operating systems, APL makes it fairly easy to structure the data passed between them. (It could be easier still. I have plans to make it easier in my OS.)

My point is simplification doesn't require hardware non-volatility. If APLX auto-saved its workspace, and also saved editor state so edits aren't lost, it would have the benefits of a non-volatile system. It means the software must be a little more complex, but with careful design it needn't be too much more. For instance, saving editor state could just use the existing mechanism for saving variable state. IO devices could still be a problem, but if all your state is saved to disk you can just make resume the same as boot-up every time, which is another simplification. It might be slower than desired, but would be quicker than a mainstream OS boot which involves starting dozens of programs afresh. (Stupid design, that.) It would mean IO facilities may suddenly disappear or even appear, but in reality, they do that anyway. The network going down is the classic example, and hotplugging is a thing which should not be ignored.

_________________
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie


 Post subject: Re: Describe your dream OS
PostPosted: Mon Aug 26, 2019 7:27 am 
Member

Joined: Tue Jan 02, 2018 12:53 am
Posts: 51
Location: Australia
eekee wrote:
You know, putting together the various conclusions I've come to while thinking about this, I'm not sure non-volatility itself simplifies much of anything...

NVRAM would greatly simplify things for multiple reasons. The first is that there would never be any need to manage two separate storage mediums and copy data between them. All data, both temporary and persistent, would live in a single, unified store. The second reason is that all code and data can be executed or accessed in place; there is never any need to load programs or map data, and no need to periodically back up data in case a power outage or other fault occurs. The third reason is that current non-volatile storage mediums have a number of severe drawbacks which have to be taken into account when designing a kernel. HDDs are notoriously slow and require seek times to be factored in when storing data, which calls for more complex storage algorithms, whereas with NVRAM, access times are uniform regardless of where the data is located. SSDs have asymmetric read/write performance, with reads being much faster than writes, and a limited number of write cycles. NVRAM would do away with all of that complexity and could be located on the CPU itself instead of being external.

To me, it's crazy that microcontrollers have been around for over 50 years, but we still don't have a fully integrated desktop SoC. There's no need to have motherboards and standalone components anymore. The IBM PC era is over, and the majority of consumers no longer upgrade their PCs after a number of years, but opt to replace them entirely. Indeed, most people don't even have a desktop anymore, but have laptops or tablets which simply aren't designed to have their parts replaced or upgraded, which makes the current way of doing things even more asinine.


 Post subject: Re: Describe your dream OS
PostPosted: Wed Aug 28, 2019 7:41 am 
Member

Joined: Tue Jan 02, 2018 12:53 am
Posts: 51
Location: Australia
Another thing I don't like about existing operating systems is that they mandate a program format, such as an ELF/COFF header in the case of *nix or a PE header in the case of Windows. I'm usually a fan of "mechanism, not policy", so I'd prefer programs to have no header at all; instead, the entry point of a program would simply be the base address of the file. The program itself can then make various system calls during initialization in order to achieve what the header would normally do, if desired. This is better because it eliminates the distinction between programs and functions, and it also makes programs simpler, because most of the time a program won't need to make any system calls and will be content with the default scheme of having the whole program image instantiated as a single WRX segment.
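To illustrate how small the loading side becomes, here is a sketch of such a loader (using POSIX mmap() as a stand-in for whatever the kernel would actually do; it assumes the image is position-independent or was linked for wherever it lands, and that the platform permits a WRX mapping at all):
Code:
/* run_flat.c -- sketch of loading a headerless image: map the whole file
 * as one readable/writable/executable region and jump to its first byte. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <image>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* One flat WRX segment; the entry point is simply offset 0. */
    void *base = mmap(NULL, (size_t)st.st_size,
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    void (*entry)(void) = (void (*)(void))base;
    entry();    /* execution begins at the base address of the file */

    munmap(base, (size_t)st.st_size);
    close(fd);
    return 0;
}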

