I know there are lots of questions like this, but the answers were of no help. I followed the instructions from Bare Bones - OSDev Wiki (http://wiki.osdev.org/Bare_Bones) to create a bootloader and load a kernel that outputs a string. Everything went fine. But then I thought I could create a simple log file for a very basic game, and all the answers I found on Google used external headers (<linux/kernel.h>, <fcntl.h>, <sys/types.h>, etc.) which I don't have and don't know how to link into my kernel.
When you read a file there are multiple steps at multiple layers. At the highest level (the VFS):
- VFS does a recursive meta-data walk (e.g. for "/foo/bar/baz.txt" it checks for a directory called "/foo", then checks for the sub-directory "bar", then checks for "baz.txt"). This requires some kind of cache (otherwise it's a massive performance disaster), where VFS asks a file system to fetch directory info whenever there's a "VFS meta-data cache miss". At each step during this recursive meta-data walk VFS checks the entry's type (e.g. is "bar" a symbolic link or a mount point?) and does file permission checks. A minimal sketch of this walk follows the list.
- If the recursive meta-data walk succeeded, VFS uses the entry (in its meta-data cache) to find out whether the file's data is in the VFS file data cache, and asks a file system to fetch the file data when there's a "VFS file data cache miss".
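Here is a minimal, hedged sketch of that recursive walk in C. Every type and function in it (vfs_node, cache_lookup, fs_fetch_dir_entry, is_symlink_or_mountpoint, check_permissions) is a hypothetical placeholder for whatever your kernel provides, not a real API:

```c
#include <stddef.h>
#include <string.h>

struct vfs_node;  /* cached meta-data for one path component (hypothetical) */

/* Placeholders for your kernel's own routines. */
struct vfs_node *cache_lookup(struct vfs_node *dir, const char *name);
struct vfs_node *fs_fetch_dir_entry(struct vfs_node *dir, const char *name);
int is_symlink_or_mountpoint(struct vfs_node *node);
int check_permissions(struct vfs_node *node);

/* Walk "/foo/bar/baz.txt" one component at a time, consulting the
 * meta-data cache first and asking the file system only on a miss. */
struct vfs_node *vfs_walk(struct vfs_node *root, const char *path)
{
    struct vfs_node *cur = root;
    char component[256];

    while (*path == '/')
        path++;

    while (*path != '\0') {
        /* Copy the next path component into a temporary buffer. */
        size_t len = 0;
        while (path[len] != '\0' && path[len] != '/' && len < sizeof(component) - 1)
            len++;
        memcpy(component, path, len);
        component[len] = '\0';
        path += len;
        while (*path == '/')
            path++;

        /* Cache first; fall back to the file system on a cache miss. */
        struct vfs_node *next = cache_lookup(cur, component);
        if (next == NULL)
            next = fs_fetch_dir_entry(cur, component);
        if (next == NULL)
            return NULL;  /* no such entry */

        /* Type and permission checks happen at every step of the walk. */
        if (is_symlink_or_mountpoint(next)) {
            /* resolve the link or cross the mount point here (omitted) */
        }
        if (check_permissions(next) != 0)
            return NULL;  /* permission denied */

        cur = next;
    }
    return cur;
}
```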
Note that managing these caches efficiently means doing things like pre-fetching data from disk into the VFS cache (when there's lots of free RAM and the hardware (disk controllers, etc.) has nothing better to do), and allowing that RAM to be taken back by the kernel when it needs more free RAM. Also, all of this should be asynchronous - if one process asks the VFS for something that results in any kind of "VFS cache miss", then you don't want all other processes to have to wait for that, especially when those processes are trying to do things that can be satisfied from the VFS cache. Ideally, even when everything is a "VFS cache miss" you still want "many file IO requests in flight" so that (e.g.) storage device drivers can optimise disk access patterns (minimise seeks, etc), plus "IO priorities" so that the lower layers (file systems and storage device drivers) have some clue about what is and isn't important (and can make sure more important things happen sooner).
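To make the "asynchronous with IO priorities" part concrete, here's one possible shape for an IO request. The names (io_request, io_submit, IO_PRIO_*) are invented for illustration; this is a design sketch, not an existing interface:

```c
#include <stdint.h>

enum io_priority { IO_PRIO_IDLE, IO_PRIO_NORMAL, IO_PRIO_URGENT };

struct io_request {
    uint64_t block;                    /* which block to fetch */
    void *buffer;                      /* where the data should land */
    enum io_priority priority;         /* hint for the lower layers */
    void (*on_complete)(struct io_request *req, int status);
    struct io_request *next;           /* link for a pending queue */
};

/* Provided by the lower layers (hypothetical): queues the request and
 * returns immediately, so the caller never blocks and many requests
 * can be in flight at once. */
void io_submit(struct io_request *req);
```

The key point is the completion callback: the VFS submits a request, goes back to serving other processes from its caches, and only touches the result when on_complete fires.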
For the next level (file systems): in general, you get a request from VFS to fetch something (directory info, or file data) and convert it into requests to fetch something from a storage device driver (blocks/sectors from a partition). The details depend on the kind of file system, and it involves caching things that VFS doesn't already cache (e.g. the "cluster allocation table" for FAT). Of course this also needs to be asynchronous, and should keep track of "pending requests" and perform them in an optimised order (not necessarily the order the requests arrive in), taking the IO priorities into account.
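For example, for FAT the "convert VFS requests into sector requests" step means following the file's cluster chain through the (cached) cluster allocation table. A hedged sketch, where fat[], cluster_to_lba() and read_sectors_async() are hypothetical stand-ins for your own code:

```c
#include <stdint.h>

#define FAT32_EOC 0x0FFFFFF8u               /* end-of-chain marker (FAT32) */

extern uint32_t fat[];                      /* cached cluster allocation table */
uint64_t cluster_to_lba(uint32_t cluster);  /* cluster number -> disk sector */
void read_sectors_async(uint64_t lba, uint32_t count, int priority);

/* Queue an asynchronous read for every cluster in a file's chain. */
void fat_fetch_file(uint32_t first_cluster, uint32_t sectors_per_cluster,
                    int priority)
{
    uint32_t cluster = first_cluster & 0x0FFFFFFFu;
    while (cluster < FAT32_EOC) {
        read_sectors_async(cluster_to_lba(cluster), sectors_per_cluster,
                           priority);
        cluster = fat[cluster] & 0x0FFFFFFFu;  /* follow the chain */
    }
}
```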
For the next level (storage device drivers): in general, you get a request to fetch something from a file system (or from the kernel, for swap space) and convert it into requests for the hardware itself. This also needs to be asynchronous, and should keep track of "pending requests" and perform them in an optimised order (not necessarily the order the requests arrive in), taking the IO priorities into account.
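As an illustration of "optimised order": a driver might keep its pending queue sorted by LBA so the disk head sweeps across the platter instead of bouncing around (a simple "elevator"). Again, the structures and names here are invented for illustration, not taken from any real driver:

```c
#include <stdint.h>
#include <stddef.h>

struct disk_request {
    uint64_t lba;                  /* starting sector */
    int priority;                  /* passed down from the upper layers */
    struct disk_request *next;
};

static struct disk_request *pending = NULL;  /* kept sorted by LBA */

/* Insert in LBA order to minimise seeks; a real driver would also
 * bias the ordering by the request's priority. */
void disk_enqueue(struct disk_request *req)
{
    struct disk_request **p = &pending;
    while (*p != NULL && (*p)->lba < req->lba)
        p = &(*p)->next;
    req->next = *p;
    *p = req;
}

/* The worker (or IRQ completion handler) pops the next request. */
struct disk_request *disk_dequeue(void)
{
    struct disk_request *req = pending;
    if (req != NULL)
        pending = req->next;
    return req;
}
```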
Before you can write a storage device driver, you will need:
- Support for physical and virtual memory management
- Support for PCI bus enumeration
- Support for starting drivers
- Support for IRQ handling
- Support for "memory mapped PCI" areas
- Support for time (e.g. "nanodelay()", time-outs, etc)
- Ideally, a documented "storage device driver interface" design (one possible shape is sketched below)
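For that last item, one possible shape (purely a sketch under my own assumptions, not a standard) is a table of function pointers that each storage driver fills in, so the file system layer can stay device-agnostic:

```c
#include <stdint.h>

/* Hypothetical driver interface; the names and fields are invented. */
struct storage_driver {
    const char *name;
    uint32_t sector_size;          /* bytes per sector */
    uint64_t sector_count;         /* total sectors on the device */

    /* Asynchronous: the driver calls on_complete when the transfer ends. */
    int (*read_async)(uint64_t lba, uint32_t count, void *buf,
                      void (*on_complete)(int status, void *ctx), void *ctx);
    int (*write_async)(uint64_t lba, uint32_t count, const void *buf,
                       void (*on_complete)(int status, void *ctx), void *ctx);
};
```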