"If there's an extremely weak/ineffective attempt to control access (which could just be a sign saying "no entry" and nothing else) then it's easy to enter but unethical to do so without permission."
- What about failed attempts? It's still an attempt. Or suppose the sign falls off and is eaten by rodents. Am I expected to honor it even though I have no practical way of knowing about it? Even if I had prior knowledge of it, I have no way of knowing if it's been removed on purpose.
The intent is what matters, not the outcome. If you design an OS that tries to bypass another OS's security, then the OS design is unethical regardless of whether the bypass succeeds or fails (in the same way that attempted murder is unethical even if you don't succeed).
- Maybe just bad wording, but aren't emergency personnel (police, firefighters, etc) entering without permission? Is it unethical?
If I hire you to do a job (e.g. protect my house from fire) then permission is implied; and if you use a flimsy excuse ("I don't know if I have permission") to avoid doing the job that I'm paying you to do then that would be extremely unethical.
If an entire community of people collaborate (via forming a government, etc) to hire people (and pay them via a system of taxes) to do a job (e.g. protect all the houses from fire), then permission (e.g. "in case of fire and only in case of fire, you have permission to put that fire out") is implied; and if emergency service personnel use a flimsy excuse ("I don't know if I have permission") to avoid doing the job that they're paid to do then that would be extremely unethical.
In the context of operating systems and files; an OS is not like an emergency service (e.g. fire department) - its reason to exist is not "to rescue files belonging to other OSs" and there is no "permission to access files for the purpose of rescuing them" implied. Something like a data recovery specialist that tries to recover the data off a dead hard drive (usually for a large fee) would be like an emergency service (and would have implied permission to recover the files) but that is not an OS.
"The strength of the security has nothing to do with ethical behaviour"
- The strength may convey intention, so it certainly has something to do with ethics. Suppose there are cabins in the woods with closed doors (but not locked, or very simple "locks" that don't require keys). The intent of the door and possible simple "lock" is to keep animals out, but not people. If these cabins are clearly intended for people to use at will then it can't be unethical, can it?
If another OS's file system is designed to prevent animals from accessing files but allow all people access to those same files; then, if and only if your OS prevents animals from accessing the files, it'd be fine if your OS allows people to access the files (because your OS is honouring the file system permissions).
So all in all, I think it's more than the simple existence of security that defines the line between ethical and unethical. Maybe the intent of the security? But also expectations. If the FS has no security then I don't think it's unethical for the software to allow access to it; it would be upon the person to do the ethical thing and not access something they're not supposed to. Also, I think it's unfair to demand "ethics" from software when doing so would to some extent cripple that software, while 1001 other pieces of software already exist that can bypass it; in effect, any "security" that a simple ACL-based FS has is already lost anyway.
The problem is that you have no practical way of knowing who does/doesn't have permission or what the intent of some other OS may be. If there's evidence of an attempt to restrict some people's access (e.g. a permission system) then you can't assume that there's no attempt to restrict some people's access.
Note that for some file systems there are clear/unambiguous "anyone can access" permissions (e.g. the S_IROTH, S_IWOTH and S_IXOTH flags in ext2 inodes); and it might or might not be unethical for your OS to (e.g.) allow read access to file/s that are clearly marked as "anyone can read" by another OS. However (at least in theory), the other OS might (e.g.) use a log of "who accessed what" (for security audits, etc) stored elsewhere and not stored in that file system; so you might still be bypassing (part of) the other OS's security by reading from an "anyone can read" file without updating a log stored elsewhere. Note: Linux does support this kind of logging.
Note, I think that even Linux can't honor the "security" of ext2. Suppose the FS is corrupted and you re-install the OS and the passwd file is also lost. Does that mean that all the files should now be thrown out? And if even Linux can't honor the security then there can't really be an expectation for you to do it with your OS can there?
If a user of my OS has problems with a different OS (either because the other OS is crap or they didn't have backups or whatever) I might feel sorry for them (and I might laugh in their face), but it is not my problem and not something my OS should care about. My OS should care about ensuring that it can never happen to users of my OS (possibly by ensuring redundancy/backups? Not sure).
So I think it's limiting for software at that level to impose "ethics"; it should be taken care of at a higher level, by people. As a final thought: if you have an encrypted container and just happen to guess the correct key, is it ethical or unethical for your OS to allow access?
It's unethical for an OS to allow an encrypted container to be accessed if that container does not belong to the OS. It makes no difference if the key is correct or not (and therefore makes no difference if the correct key was guessed or not).
If the encrypted container does belong to the OS, then the OS shouldn't have any reason to ask any user what the key is (e.g. the key should be generated by the OS and then stored in some form of sealed storage, which is how most modern disk encryption schemes work).
The OS is the wrong entity to try to enforce ethics in this instance. It can certainly do so for those FS's where it knows the answer; but for foreign file systems that have no mapping to your security "domain" (for example, user-to-user mappings) it can't simply assume the user is unethical.
Nonsense. An OS designer must (intentionally or unintentionally) choose between enforcing or bypassing the security policies of other OSs; and the OS is the only entity able to enforce the ethics (or lack of ethics) involved in that choice.
Note that I doubt you (or others) truly believe that bypassing the security policies of other OSs is ethical; I think you are only trying to defend tradition and/or convenience.
As an added bonus, it only means that people will use some other OS to unethically access the files and "convert" them to some other format (like your FS image) so that your system will accept them. So it's mostly a pointless battle.
"Your OS should kill people, because if it doesn't users will just find a different way to kill people"