OSDev.org

The Place to Start for Operating System Developers

All times are UTC - 6 hours




 Post subject: Programs that truly need paging?
PostPosted: Mon Jul 08, 2019 11:35 am 

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1130
What sort of programs really need paging? I am trying to find a program to test my paging/physical memory management library thoroughly, but no program that comes to mind really needs paging, or even malloc/free, to exist.

So I have no way to truly test my paging code with a program that reflects a genuine need for paging, one that makes evident where and why paging is needed at each part of the code.

--------------------------------------------------------------
It seems that I can do anything without paging. I can reserve disk/string buffers statically and efficiently without malloc/free. I could encode/decode Base64 or even GIF images only with hard-coded buffers.

I can read a chunk of a non-fragmented XML-like file partially and render it partially too, with a few static buffers, in a way that could even be faster because the data would never become more fragmented than when it was loaded (if it was an externally loaded program).

When I get a minimal file system, I could create files as cache instead of huge memory blocks, and it would make sense for example for caching the massive live data of a web browser that is offscreen, SQLite3, etc., instead of allocating indiscriminately.

Even a compiler would make better use of cache files than allocation from limited memory.

--------------------------------------------------------------

The only thing that I can think of that would need paging is something like a full 3D game rendering data in memory as fast as possible, but even there malloc/free/realloc over a physical or flat page directory would probably be enough.

Loading programs for simultaneously running tasks is the only other thing that I think could need paging, but that point can only be reached once everything else in the system is done.

_________________
http://api.archefire.org (My OS compatible with DOS)

IP for hosts file (all domains):
190.53.3.113 archefire.org (udocproject@yahoo.com)


 Post subject: Re: Programs that truly need paging?
PostPosted: Mon Jul 08, 2019 1:07 pm 

Joined: Sat Mar 31, 2012 3:07 am
Posts: 3396
Location: Chichester, UK
Paging, as you seem to describe it (i.e. using disk space as RAM), is very rarely needed nowadays, with the large amount of RAM available. Certainly it's unlikely to be used by individual programs, and I doubt that it is a high priority for most people developing a hobby OS. If you can't think of any way to test it, why bother to implement it?

This, of course, is completely different from the memory management provided by modern CPUs, which is really a separate topic. You definitely want to implement that, but it is something that is always in use, so the idea of testing it with individual programs doesn't make sense.


 Post subject: Re: Programs that truly need paging?
PostPosted: Mon Jul 08, 2019 2:01 pm 

Joined: Mon May 22, 2017 5:56 am
Posts: 108
I agree with iansjack. In Plan 9, which was developed for 90s hardware, paging was broken for many years, but experienced users didn't care. Only newbies expected to have it because they came from Linux and it was normal to set up swap in Linux. It got fixed just for the satisfaction of fixing things back when cinap_lenrek could work on it all day every day.

With heavier use, multitasking makes it impossible to predict how much memory you need in advance, because no program (not even the kernel) can see what memory other programs will require in the future, not even 1/10th of a second ahead. ;) Paging is a safety net for those moments when demand for RAM exceeds supply.


 Post subject: Re: Programs that truly need paging?
PostPosted: Mon Jul 08, 2019 3:10 pm 

Joined: Thu May 17, 2007 1:27 pm
Posts: 592
Building LLVM on managarm's CI server routinely exhausts all physical RAM -- at least for short amounts of time. Swapping actually provides a huge speedup here, as the alternative would be reducing the number of concurrent compilations: the latter would be slow 100% of the time, while swapping is only slow 1% of the time. Buying a server with more RAM is very expensive.

_________________
managarm: A microkernel-based OS that is capable of running a Wayland desktop


 Post subject: Re: Programs that truly need paging?
PostPosted: Tue Jul 09, 2019 4:18 am 

Joined: Sat Jun 08, 2019 4:36 am
Posts: 2
Perhaps you can implement mmap(2). It's very common in data processing, for a few reasons/cases:

  1. Avoiding all the file I/O complexity
  2. Avoiding big data copies across address spaces
  3. Using the kernel's filesystem/page caches outside the process
  4. Mapping datasets larger than available RAM
  5. Using the kernel's paging mechanism to manage page faults/loading/eviction
  6. Sharing memory across processes
  7. Instant loading of large files
  8. Loading data on demand

Of course, it has a few drawbacks, too. I guess implementing mmap support with user threads needs care (e.g. not blocking all threads when one thread dereferences a missing page). But I only know this from a user's perspective.


 Post subject: Re: Programs that truly need paging?
PostPosted: Tue Jul 09, 2019 4:22 am 

Joined: Thu Oct 13, 2016 4:55 pm
Posts: 326
Hi,

What you have described is more like swapping than paging.

Paging is useful for separating processes from each other, and for protection. To test it, you should
- write a malicious program that places and executes code on the stack (with paging you can prevent this kind of attack)
- write a malicious program that modifies the memory of another process which is running with full privileges
- write a malicious program that replaces function addresses in the GOT of another process, installing a key logger hook for getc() for example
- think about running two DOS emulators in parallel, where both emulated programs write to memory at 0xB8000 directly. With paging you can create separate windows for them, instead of letting the two tasks mess up each other's screens.

To test swap, you should write programs like:
- a compiler. Cache files are not good, because you'll need to access all symbols at once, for tokenization for example
- a picture manipulation tool. Think about opening a large JPEG file (like a background image), then modifying it with a reversible, constantly growing history
- an adaptive, dictionary-based compressor, like zlib. You have to dynamically allocate memory for the Huffman tree as you read the stream; there's no other way
- any script interpreter, where the script's memory must be allocated dynamically because you don't know the memory requirement in advance (if you do, then it must be a compiled language, not an interpreted one). For example, see Lua with lots and lots of dynamically allocated tables.

Cheers,
bzt


 Post subject: Re: Programs that truly need paging?
PostPosted: Tue Jul 09, 2019 7:45 am 

Joined: Mon May 22, 2017 5:56 am
Posts: 108
bzt wrote:
What you have described is more like swapping than paging.

Paging is useful to separate processes from each other.

I was thinking "swapping", but I used the word "paging" because my more experienced friends use it for the action of swapping. I don't entirely know why. The phrase "paging out" is used, or occasionally "paging to disk". I'm actually happier using "swapping", because doesn't swap require page tables anyway? But I'm confused and inexperienced in this area. Hmm... "virtual memory" describes the feature without worrying about the implementation, but it's harder to apply tenses to and can't really be verbed.

bzt wrote:
- a picture manipulation tool. Think about opening a large JPEG file (like a background image), then modifying it with reversible, constantly growing history

I've got a bit of experience with image manipulation. I think in this one case the history is normally written straight to disk because it's so huge, but there are other issues. Original artwork at print quality will be a minimum of 600 dots per inch and can be much larger than 8x11 inches. (Sometimes it's 2000 DPI!) How do you allocate memory for that, statically or dynamically? It's a job which requires expertise. :) Or, maybe mmap is the tool for the job. It'll take me a while to think this through properly, but I strongly suspect mmap would massively simplify image processing. mmap of course requires paging.

Overall, I think it's always possible to find a way to do every task with static buffers because ultimately, nothing in software is actually impossible, but to go without OS support will be harder, sometimes much harder. Doing modern tasks with old techniques requires a great deal of expertise and/or time and patience. In this case, using static buffers is basically implementing swap manually in user programs rather than letting the OS do it. Thinking of it that way may help to implement it, but if you're implementing both the OS and user programs, why not put it in the OS? Or forget it, focus on the easy jobs, and call it a retro OS.

Programming has got easier, requiring less training and thought, as features like swap and malloc have been incorporated into operating systems and standard libraries. On the other hand, there are programming jobs which don't need them, and there is fun to be had with those jobs. For example, there is fun to be had both writing and using an image editor which only works on images which fit into RAM. (Just don't tell my artist friends! ;) Not really, they're quite accepting folk.) As modern hardware has so much RAM, there are more jobs than ever which don't require swap. For another example, Plan 9's window system is quite wasteful of memory, but very easy to program. In the early 90s it must have been limited to just black and white, but give it a couple hundred megabytes and it's fine with color. Windows PCs of the era were already showing 256 colors, but their window system was harder to program.


 Post subject: Re: Programs that truly need paging?
PostPosted: Tue Jul 09, 2019 10:30 am 

Joined: Thu Oct 13, 2016 4:55 pm
Posts: 326
eekee wrote:
I was thinking "swapping", but I used the word "paging"
Oh, I was answering ~; interestingly, I understood what you wrote that way :-) Probably because you had also used the phrase "swap in Linux".

eekee wrote:
because doesn't swap require page tables anyway?
Actually no. The first time-sharing systems "swapped" out entire processes. In DOS, there was a technique where you mapped in portions of code from disk as they were needed. Of course it wasn't called swapping back then, it was called overlays, and it was the app, not the OS kernel, that did the swapping, but the technique was essentially the same.

eekee wrote:
Overall, I think it's always possible to find a way to do every task with static buffers
I'm not sure about this. Think about a compiler which has to keep a list of defined functions as it parses the source, for example. If you allocate that list statically, you end up with an upper limit on the number of functions it can compile. I have never seen that in practice (although it's possible). I mean, we shouldn't forget that not everything can be swapped out. It would be highly inefficient if the list of functions (or symbols) had to be swapped in and out for every single token. So while in theory yes, you could use static buffers only, in practice it's not viable.

eekee wrote:
Programming has got easier, requiring less training and thought
Imho this is a common misconception. Just because a language allows expressions at a higher level, it doesn't mean it requires less training and thought. I know my opinion on this goes against the mainstream (but the mainstream is starting to realize I'm right). I've seen way too many shitty apps in Java, and especially with jQuery. Imho a good programmer can write good code in a lower-level language as well as in a higher-level language, while a bad (untrained/unthoughtful) programmer can only write bad code even in a higher-level language. Therefore it is not true that programming got easier; it's just something big companies want you to believe (because they make money off selling their programming tools). A couple of years ago everybody was hiring Indian programmers because they were uneducated and cheap. Now that trend has stopped, because everybody has realized that it doesn't matter whether they use a modern language like Java, Go or Rust: if you want quality results, you need well-educated programmers no matter what. The same happened to HTML. Do you remember Netscape Composer and Dreamweaver? Everybody thought at first that high-quality WYSIWYG editors would render manual HTML coding obsolete. But they did not.

eekee wrote:
For another example, Plan 9's window system is quite wasteful of memory, but very easy to program. In the early 90s it must have been limited to just black and white, but give it a couple hundred megabytes and it's fine with color. Windows PCs of the era were already showing 256 colors, but their window system was harder to program.
I see what you mean. There's truth in it; nowadays we don't have to code so carefully because there are more resources. But I don't think it's a good idea in the long run. The same thinking was used in the '80s (waste resources just because we can), and look where it got us in a short 40 years: it accumulated into a very likely mass extinction event with global warming. To give you a closer example: Win10 and Linux boot considerably slower on a cutting-edge modern computer than DOS did on an average 386 PC. Which is imho insane, because today's computers are at least a hundred thousand times faster, but modern OSes provide far fewer than a hundred times more features compared to DOS. They should boot at least a thousand times faster, not slower...

Cheers,
bzt


 Post subject: Re: Programs that truly need paging?
PostPosted: Tue Jul 09, 2019 11:33 am 

Joined: Fri Feb 17, 2017 4:01 pm
Posts: 381
Location: Ukraine, Bachmut
bzt, what Unix calls swapping is called paging, and the latter is the more correct term, because paging is now done on a per-page basis, instead of swapping the entire process's working set out at once. Swapping is an ancient term from the '70s and no longer relevant.

What Intel calls paging is actually switching virtual memory on, and I suspect that's what ~ is talking about.

Paging is needed; it's cool. With storage like NVMe, SATA SSDs and UFS, its benefits get even more attractive. And given that efficient use of RAM means using all of it all the time, there is no such situation as "plenty of RAM": the cache should consume it, and caching is mapping, so again we have paging here. Paging is not only when, say, part of a process's pool gets thrown out into the page file; paging is any mapping between a disk file and a memory buffer intended to provide physical storage for virtual memory, whether primary storage (RAM) and/or secondary (backing) storage on disk. When the system decides there is no room for a page in primary storage, paging happens: the page takes a hike to its backing file, which is either the pagefile or any file bound to that page.

_________________
future big goal: ANT - NT-like OS for mips, arm and x86.
current smaller goal: efify - UEFI for a couple of boards (mips and arm).


 Post subject: Re: Programs that truly need paging?
PostPosted: Tue Jul 09, 2019 12:55 pm 

Joined: Sun Sep 19, 2010 10:05 pm
Posts: 1065
A hex editor.

_________________
Project: OZone
Source: GitHub
Current Task: DOSBox Compatibility
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott


 Post subject: Re: Programs that truly need paging?
PostPosted: Thu Jul 11, 2019 10:06 am 

Joined: Mon May 22, 2017 5:56 am
Posts: 108
bzt wrote:
eekee wrote:
I was thinking "swapping", but I used the word "paging"
Oh, I was answering ~; interestingly, I understood what you wrote that way :-) Probably because you had also used the phrase "swap in Linux".


Ah! :)

bzt wrote:
eekee wrote:
because doesn't swap require page tables anyway?
Actually no. The first time-sharing systems "swapped" out entire processes. In DOS, there was a technique where you mapped in portions of code from disk as they were needed. Of course it wasn't called swapping back then, it was called overlays, and it was the app, not the OS kernel, that did the swapping, but the technique was essentially the same.

So when it was swapping, it wasn't called swapping? *ducks* :D Terminology is a crazy thing, isn't it?

bzt wrote:
eekee wrote:
Overall, I think it's always possible to find a way to do every task with static buffers
I'm not sure about this. Think about a compiler which has to keep a list of defined functions as it parses the source, for example. If you allocate that list statically, you end up with an upper limit on the number of functions it can compile. I have never seen that in practice (although it's possible). I mean, we shouldn't forget that not everything can be swapped out. It would be highly inefficient if the list of functions (or symbols) had to be swapped in and out for every single token. So while in theory yes, you could use static buffers only, in practice it's not viable.

I did mean it like your latter example, the list of functions being swapped in and out. It would be very slow, but I think it was a suitable way to do some tasks on the limited machines of the 70s and 80s. Perhaps not compiling; if the symbol table is bigger than available memory, the program likely would be too. Perhaps databases.

That reminds me, databases were typically indexed, generating smaller files which could be searched faster. That's another way to help with memory management issues. I think Unix .a files have an index within the file, perhaps at the front. They're libraries for static linking, and can be seen as databases of code. In fact, this concept is huge...

@Everybody: Structure data so you can make the code efficient. When you can't, consider generating indexes. I'm sure I've read articles and statements by well-known professionals on the power of structuring data for efficient processing. I'm sorry I can't remember any of the details now, but I'm sure it's a very powerful tool. Indexes are one example. Another is the fixed-length fields and records of (popular) databases.

(This post is clearly too long, bold alone doesn't make text stand out at all. Sorry.)

bzt wrote:
eekee wrote:
Programming has got easier, requiring less training and thought
Imho this is a common misconception. Just because a language allows expressions at a higher level, it doesn't mean it requires less training and thought. I know my opinion on this goes against the mainstream (but the mainstream is starting to realize I'm right). I've seen way too many shitty apps in Java, and especially with jQuery. Imho a good programmer can write good code in a lower-level language as well as in a higher-level language, while a bad (untrained/unthoughtful) programmer can only write bad code even in a higher-level language. Therefore it is not true that programming got easier; it's just something big companies want you to believe (because they make money off selling their programming tools). A couple of years ago everybody was hiring Indian programmers because they were uneducated and cheap. Now that trend has stopped, because everybody has realized that it doesn't matter whether they use a modern language like Java, Go or Rust: if you want quality results, you need well-educated programmers no matter what. The same happened to HTML. Do you remember Netscape Composer and Dreamweaver? Everybody thought at first that high-quality WYSIWYG editors would render manual HTML coding obsolete. But they did not.

Maybe I'm a bit out of date. In fact, companies in general realising the value of good programmers is news to me, and it's good. All the same, I question how practical it is, and how long it will last. "Making programming easier" has been a huge drive since the 50s and 60s. It's not because of the companies selling tooling, it's because of the need; good programmers are too rare. Take my generation for example. Lots of kids wanted to be programmers. Competition and innovation saturated our schooling, science fiction saturated our entertainment. Computers were the pinnacle of innovation at that time. In business, entertainment, schooling, and the space program, computers were the cutting edge. If you could program a computer, just about everyone was impressed! Education was well-funded. We had been focussed, as much as the world can focus children, to fit into the dot-com bubble, which started about 3 years after we left high school. It wasn't enough. The dot-com bubble was characterised by the mass hiring of incompetent programmers because there weren't enough good programmers to go around.

The Soviet Union had the same problem. They were better than the West at using the human resources they had; no gender bias, no age bias, but it wasn't enough. They had teams of little old ladies programming computers back when computers still needed to be run in chilled rooms -- the little old ladies wore fingerless gloves to type. I have no doubt they were well-trained, the Soviet Union was very big on education. It wasn't enough. Starting in the late 50s, they stole IBM designs and secretly bought IBM software because they couldn't develop enough software on their own. (IBM was quite supportive, believe it or not! Most of the trades took place in Hong Kong.)

What about us, developing operating systems solo? Well, "easier" doesn't really apply in the same way, but it does still affect us. If we're working to a standard, we need to implement features and support languages which people in the past thought would make programming easier. If we're developing our own designs, it's wise to develop systems which will make future development and design quicker -- which is another definition of easier, one which is not at all in conflict with what you're saying, bzt. ;)

bzt wrote:
eekee wrote:
For another example, Plan 9's window system is quite wasteful of memory, but very easy to program. In the early 90s it must have been limited to just black and white, but give it a couple hundred megabytes and it's fine with color. Windows PCs of the era were already showing 256 colors, but their window system was harder to program.
I see what you mean. There's truth in it; nowadays we don't have to code so carefully because there are more resources. But I don't think it's a good idea in the long run. The same thinking was used in the '80s (waste resources just because we can), and look where it got us in a short 40 years: it accumulated into a very likely mass extinction event with global warming. To give you a closer example: Win10 and Linux boot considerably slower on a cutting-edge modern computer than DOS did on an average 386 PC. Which is imho insane, because today's computers are at least a hundred thousand times faster, but modern OSes provide far fewer than a hundred times more features compared to DOS. They should boot at least a thousand times faster, not slower...

I wonder, perhaps this is a result of failing to structure data for efficient processing. In many cases, it's fairly clear slow program start-up is because the data for a program is indexed or re-structured while the program is starting up. The big argument for this is that it eliminates the risk of supplying the wrong index for the data.

But now that you've used booting as an example, I'm specifically wondering why Plan 9 boots more slowly than FreeDOS on my OS-dev box. FreeDOS boots faster even with a 5 second delay for a menu. It's especially surprising because booting was a key part of how Plan 9 is designed to be used. By design, Plan 9 users work from diskless workstations. Logging in is part of the boot process, and logging out is accomplished by switching the terminal off. (This is mind-blowing to many! :D) (It's possible to use it in a more monolithic way, but that's the design.)

Now I need to investigate this. I vaguely remember Plan 9 booting faster in the distant past. ... Ah-hah! Part of the delay appears to be starting the filesystem. That's a rarely-done task in a normal Plan 9 network; the file server is the hub of the network so it stays up. (Mine's a monolithic installation.) It's not the whole story... Okay, it helps to plug the ethernet in, of course. The most noticeable remaining delay is polling EHCI before bringing up the filesystem. DOS doesn't do USB, and I don't think that old fast-booting Plan 9 could boot from it. (I'd like to avoid it.) I guess there's more. Maybe it's crept in as 9front has implemented fixes and support for modern hardware. Maybe I could slim down my boot scripts. :)

Edit: I just noticed I don't load a cd/dvd-rom driver in freedos because its search is slow. ;) There's no such drive in the machine, anyway.

bzt wrote:
Cheers,
bzt

Cheers! Replying to this was stimulating. :)


 Post subject: Re: Programs that truly need paging?
PostPosted: Thu Jul 11, 2019 5:37 pm 

Joined: Mon Jul 25, 2016 6:54 pm
Posts: 171
Location: Adelaide, Australia
I was actually thinking about this the other day: how would I test a pager implementation in a repeatable way? Most programs these days don't need to do much paging because main memory is much larger than the working set of anything except maybe a production DB, and that requires hundreds of GB of disk space, would be slow to instantiate, and would not be that useful in an automated test environment where repeatability is key. Even modern AAA+ games are designed to fit entirely in main memory.

My idea was to design a test application with parameters to define main memory, swap space, cache sizes, etc. On start-up it would consume more memory than is available in main memory, filling it with nothing but noise carrying regular checksums. By reading, validating, and then altering and rewriting pages in alternately structured and pseudorandom patterns, you could guarantee that pages were being forced out of RAM and into 'swap space', while simultaneously testing that the pager never fails to provide requested pages, and also that it never fetches incorrect or corrupted data. Alternating usage patterns would also stress-test various heuristic-based optimisations, ensuring they achieve acceptable results even when underlying access pattern assumptions are broken.

The downside of course is that it would not reflect a realistic usage pattern, though to be honest a realistic usage pattern, in a well designed application, could almost always be assumed to simply be "maintain a working set smaller than main memory" which would rather defeat the purpose of the test. Another downside is that this doesn't test paging of executable memory, focusing only on data. This could be a problem if your pager treats executable memory differently.

Anyway, that was my idea, though I unfortunately haven't had a lot of time lately to build my pager so it's not progressed beyond that. Would be good to get your feedback, anything I've overlooked or what-have-you.


 Post subject: Re: Programs that truly need paging?
PostPosted: Fri Jul 12, 2019 12:32 am 

Joined: Thu May 17, 2007 1:27 pm
Posts: 592
I tested writeback and the page cache in managarm simply by limiting its size to 1 MiB and booting the system (i.e., the VMM tries to evict pages to disk as soon as more than 1 MiB of file/swap-backed pages are in use).

_________________
managarm: A microkernel-based OS that is capable of running a Wayland desktop


 Post subject: Re: Programs that truly need paging?
PostPosted: Fri Jul 12, 2019 6:00 am 

Joined: Thu Oct 13, 2016 4:55 pm
Posts: 326
Hi,

@eekee: I also do enjoy intelligent conversations! :-) You made lots of interesting points too, and I also think what we say aligns.

About the terminology: absolutely true. I think this has something to do with the fact that overlays were used long before virtual memory was even invented. Besides, IT tends to like marketing buzzwords, renaming the same thing over and over again... Just think about the old mainframe-and-terminals architecture, where all the data was stored on the mainframe, not on the terminals. Basically these days we have the same architecture again, but now it's called "cloud".

About static allocation: yes, as I've said, in theory possible, but in practice it's not viable.

About the SU I don't agree. I was unfortunate enough to grow up there. It was different: no documentation was available at all, therefore people who wanted to learn programming had to learn reverse engineering first. There was simply no other way. This led to a completely different way of thinking than in the West, which resulted in a much deeper understanding of computers (and far fewer, but more educated, programmers). Do you remember Volkov Commander? It knew everything that Norton Commander did, but it was much smaller, only 64k in a single .COM file. I remember I once read an interview with the C64's chief engineer. He said that when they showed him a scene demo from the Eastern bloc, he was shocked; he had absolutely no idea that the machine he designed was capable of doing such things :-) It is not surprising at all that the most sophisticated viruses and spyware came from Russia; it can be traced back to the reverse engineering nature of their knowledge base.

@StudyCaps: I don't think so. I think you were right. Think about it: with 64-bit addresses and RAM getting cheaper and cheaper, the virtual address space has become several times larger than the actual storage space. Put swapping next to this (which is transparent to the application), and "maintain a working set smaller than main memory" does not apply any more. With these modern computers you can safely assume that any working set fits in the virtual address space entirely, therefore your idea was valid, imho.

@Korona: and what were the results of your test? What have you concluded?

Cheers,
bzt


Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group