There are two issues when it comes to computer security in this situation:
- Granting privilege to components that don't or shouldn't need it
- Vulnerabilities that allow software to be used in ways other than it's supposed to be used
These are actually two separate issues, and should be treated as such.
The first issue deals with situations where applications can access parts of the system without the user's permission or awareness, and without having to use exploits to get that access. Here, the right access control system prevents most risks. Operating systems like Windows, which give every process access to everything, are the most susceptible to these kinds of attacks. That's why it's so easy to have, for example, a trojan horse on Windows that sends your personal files to a remote server in the background; the only way to detect it is by actively monitoring disk and network activity with heuristic algorithms (a task that a lot of antivirus applications struggle with - does your antivirus software ever say "suspicious network activity detected", or does it only report known threats by name?). Linux by default doesn't really help in this regard either, to be honest, although at least it gives you the tools to harden your system if you can figure out how to use them (SELinux, kernel namespaces, chroot, and so on).

In my opinion, Android is the best in this regard. It shows a full list of permissions before installing an app, and enforces those permissions at the OS level (they even apply to native code, thanks to the "one user per app" model). In more recent versions, it also displays a confirmation dialog before granting certain permissions at runtime. The only things that would make this better are more fine-grained permissions (they're actually quite fine-grained from the developer's point of view, but they're "simplified" for the user), and having Android confirm and keep track of permissions granted at runtime seamlessly, instead of requiring the developer to explicitly check for and request each permission every time it's needed.
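To make the "enforced at the OS level" idea concrete, here's a toy sketch in C of that kind of enforcement. The app names and permission strings are made up for illustration (this is not Android's actual implementation); the point is that the check lives in the system, so an app can't bypass it no matter what its own code does:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical per-app grant table, fixed at install time. */
typedef struct {
    const char *app;
    const char *perms[4];   /* NULL-terminated list of granted permissions */
} app_grant;

static const app_grant grants[] = {
    { "notes_app", { "STORAGE", NULL } },
    { "chat_app",  { "INTERNET", "CAMERA", NULL } },
};

bool has_permission(const char *app, const char *perm) {
    for (size_t i = 0; i < sizeof grants / sizeof grants[0]; i++) {
        if (strcmp(grants[i].app, app) != 0)
            continue;
        for (size_t j = 0; grants[i].perms[j] != NULL; j++)
            if (strcmp(grants[i].perms[j], perm) == 0)
                return true;
        return false;       /* app found, permission not granted */
    }
    return false;           /* unknown apps get nothing */
}

/* Every privileged operation is gated by the system-side check. */
int send_over_network(const char *app) {
    if (!has_permission(app, "INTERNET"))
        return -1;          /* denied by the OS, whatever the app wants */
    return 0;               /* ...would actually transmit here... */
}
```

The design point is that the caller's identity, not the caller's code, decides the outcome: `notes_app` is denied network access even if it tries, which is the opposite of the "every process can do everything" model.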
The second issue deals with situations where applications aren't supposed to be able to access particular parts of the system, but can get access anyway because of a bug or vulnerability in a piece of software that does have the required access. For example, even though JavaScript code isn't supposed to be able to read the user's files, it may gain access by exploiting a vulnerability in the web browser, which does have access to them. Depending on where the vulnerability resides, access control may help to reduce its impact: if the web browser had to explicitly get permission (with user confirmation enforced by the kernel) every time it accessed a file, JavaScript code wouldn't be able to read the user's files even if there was a vulnerability in the browser. But this isn't always the case, particularly when the vulnerability lies in system components or other components with access to important parts of the system.

The main defence against these attacks is secure programming: checking for buffers that could overflow, and avoiding executing data (including as scripts), particularly data obtained externally, such as from a file or over a network (this gets tricky, because at some point a piece of code stored in a file or downloaded over a network has to be executed). Judging from the past, Windows is again pretty terrible in this regard. This is the main area where Linux is a lot more secure, due to better-written code - while it may be possible for any process to access the user's files, the user has to actually execute the code themselves. It's a lot harder to make code execute without the user requesting it on Linux than on Windows, because there are far fewer buffer-overflow vulnerabilities.
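To illustrate the buffer-overflow point, here's a minimal C sketch (the function names are my own, not from any particular codebase) contrasting an unchecked copy with a bounded one:

```c
#include <string.h>

/* Unsafe: strcpy writes past the end of dst whenever src is longer
 * than the buffer, letting attacker-controlled input overwrite
 * adjacent memory - the classic buffer overflow. */
void copy_unsafe(char *dst, const char *src) {
    strcpy(dst, src);
}

/* Safe: refuse input that doesn't fit, and always NUL-terminate. */
int copy_safe(char *dst, size_t dst_size, const char *src) {
    size_t len = strlen(src);
    if (len >= dst_size)
        return -1;              /* too long: reject instead of overflowing */
    memcpy(dst, src, len + 1);  /* +1 copies the terminating NUL */
    return 0;
}
```

The unsafe version compiles without complaint and works fine on short inputs; the overflow only appears once a long enough input arrives, which is exactly why this class of bug survives casual testing.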
But the truth is, this is always going to be a risk even with a solid permissions model, and the only defence is diligence: reading through code many times over, running it through analysis utilities, and taking every other measure to minimise the impact of a vulnerability.
Slightly unrelated, but I also think there's too much focus on making the system secure rather than protecting the user's data. It's always a Bad Thing when code is able to get system-level access, but in reality it can probably do just as much damage, from the user's point of view, without it. Sure, it can't plant itself in the system and make itself difficult to remove, take control of the user's input devices, or spread via a hidden partition on a USB flash drive, but it can still access and destroy the user's files, and send and receive data over the network. This is why access control is so important: ultimately it's not the integrity of the system that matters, but the privacy and integrity of the user's data.