OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 1:23 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
It's The Geri-pocalypse!

Also, Terry Davis was right about running only in Ring 0. Who could have guessed that?
</absurdity>

/me giggles maniacally while staring blankly into space

Getting back to reality, does anyone have anything sensible to add? No, ~, that explicitly discounts you.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 1:25 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Korona wrote:
The fixes for Meltdown unmap the kernel from user space applications. They do not perform cache flushes. You can find the Linux patch here: click me.

Quote:
"Never store any security-sensitive data in any caches" should be a common secure programming practice

Just no.


I think what ~ is failing to grasp is that the CPU's hardware caches are not the same thing as the software cache buffers used for I/O. Tilde is completely misunderstanding the nature of the problem.

In light of this, I would like to ask the mods to Jeff all of the discussion relating to ~'s comments to a separate thread. Preferably one listed in Auto-Delete.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 1:51 pm 

Joined: Wed Aug 17, 2016 4:55 am
Posts: 251
Oh boy this thread exploded while I was sleeping.

~ wrote:
Wouldn't it be enough to just disable the CPU cache entirely at least for security-critical machines as it's easier to configure as the default? Or invalidate the entire cache every time we switch/enter/exit/terminate/create a process or thread?

Disable the cache and you will end up with a computer that runs about as fast as the average PC in the '90s. RAM is slow.

Brendan wrote:
In theory, managed languages could work (by making it impossible for programmers to generate code that tries to access kernel space); but quite frankly every "managed language" attempt that's ever hit production machines has had so many security problems that it's much safer to assume that a managed language would only make security worse (far more code needs to be trusted than the kernel alone), and the performance is likely to be worse than a PTI approach (especially for anything where performance matters).

Considering we have Rowhammer working from JavaScript (a language that is literally compiled at load time, and where the result may vary depending on the particular browser version)? Managed languages are not going to save you when even higher-level ones can be used to exploit low-level bugs. (EDIT: and now that I read more, Spectre apparently works from JavaScript too?)

So yeah, that idea is already ineffective.

Brendan wrote:
For "make kernel pages inaccessible" it doesn't necessarily need to be all kernel pages. Pages that contain sensitive information (e.g. encryption keys) would need to be made inaccessible, but pages that don't contain sensitive information don't need to be made inaccessible. This gives 2 cases.

If PCID can't be used; then you could separate everything into "sensitive kernel data" and "not sensitive kernel data" and leave all of the "not sensitive kernel data" mapped in all address spaces all the time to minimise the overhead. For a monolithic kernel (especially a pre-existing monolithic kernel) it'd be almost impossible to separate "sensitive" and "not sensitive" (because there's all kinds of drivers, etc to worry about) and it'd be easy to overlook something; so you'd mostly want a tiny stub where almost everything is treated as "sensitive" to avoid the headaches. For a micro-kernel it wouldn't be too hard to distinguish between "sensitive" and "not sensitive", and it'd be possible to create a micro-kernel where everything is "not sensitive", simply because there's very little in the kernel to begin with. The performance of a micro-kernel would be much less affected or not affected at all; closing the performance gap between micro-kernel and monolithic, and potentially making micro-kernels faster than monolithic kernels.

Note: For this case, especially for monolithic kernels, if you're paying for the TLB thrashing anyway then it wouldn't take much more to have fully separated virtual address spaces, so that both user-space and kernel-space can be larger (e.g. on a 32-bit CPU, let user-space have almost 4 GiB of space and let kernel have a separate 4 GiB of space).

If PCID can be used (which excludes 32-bit OSs); then the overhead of making kernel pages inaccessible is significantly less. In this case, if nothing in the kernel is "sensitive" you can do nothing, and if anything in the kernel is "sensitive" you'd probably just use PCID to protect everything (including the "not sensitive" data). In practice this probably means that monolithic kernels and some micro-kernels are affected; but a "100% not sensitive micro-kernel" wouldn't be affected.

In other words; it reduces the performance gap between some micro-kernel and monolithic kernels, but not all micro-kernels, and probably not enough to make some micro-kernels faster than monolithic kernels.

How do you get a non-sensitive kernel? Even in the smallest kernels, the kernel is the one in charge of taking care of assigning memory to each process, and that's probably the most sensitive part of the whole system since the kernel is the one granting permissions to everything else. The kernel itself may not hold the sensitive information but messing with it can open the gates to accessing said sensitive information elsewhere.

Mind you, it's possible I'm misunderstanding or overlooking something in what you're saying.
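As an aside on the PCID mechanism in the quote above, for anyone who hasn't met it: with CR4.PCIDE set, the low 12 bits of CR3 tag TLB entries with an address-space ID, and setting bit 63 of the value written to CR3 asks the CPU not to flush the entries tagged with the new ID. A rough sketch of the kernel entry/exit switch this enables (x86-64 only; the constants and helper names here are made up, not from any real kernel):

Code:
#include <stdint.h>

#define CR3_NOFLUSH  (1ull << 63)  /* don't invalidate TLB entries tagged with the new PCID */
#define KERNEL_PCID  1ull
#define USER_PCID    2ull

static inline void write_cr3(uint64_t value)
{
    __asm__ volatile("mov %0, %%cr3" : : "r"(value) : "memory");
}

/* On entry to the kernel, switch to the page tables that map kernel space... */
static inline void enter_kernel(uint64_t kernel_pml4_phys)
{
    write_cr3(kernel_pml4_phys | KERNEL_PCID | CR3_NOFLUSH);
}

/* ...and on the way back to user mode, switch to the user-only tables, again
 * without throwing the TLB away, because entries stay tagged per PCID. */
static inline void return_to_user(uint64_t user_pml4_phys)
{
    write_cr3(user_pml4_phys | USER_PCID | CR3_NOFLUSH);
}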

Brendan wrote:
The other thing I'd want to mention is that for all approaches and all kernel types (but excluding "kernel given completely separate virtual address space so that both user-space and kernel-space can be larger"), the kernel could distinguish between "more trusted" processes and "less trusted" processes and leave the kernel mapped (and avoid PTI overhead) when "more trusted" processes are running. In practice this means that if the OS supports (e.g.) digitally signed executables (and is therefore able to associate different amounts of trust depending on the existence of a signature and depending on who the signer was) then it may perform far better than an OS that doesn't. This makes me think that various open source groups that shun things like signatures (e.g. GNU) may end up penalised on a lot of OSs (possibly including future versions of Linux).

The problem is that software is buggy (programmers are not perfect). Your executable may be from a trusted source and still have some exploit that can be used to attack the machine from an outside vector (e.g. data coming in).

Digitally signing the executable is only useful to (supposedly reliably) know that the executable is the one you were intended to get, but says nothing about the actual safety of it.

Korona wrote:
There is no absolutely effective software defense against Spectre. We would need ISA updates (e.g. an instruction that invalidates speculative state like the branch prediction buffer). The PoC does not even depend on RDTSC and can read Chrome's address space from JavaScript.

The ISA itself is fine; the problem is the implementation. From what I gather, it affects every CPU with out-of-order execution by definition (it's a timing attack), so pretty much every CPU from the '90s onwards. Yeowch. It's true that a different ISA would help make up for the lack of OOO if we went down that route, but even then it'd be pretty bad.

Honestly with timing attacks it's often better to make them useless than to try to prevent them.
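One way to read "make them useless" is to degrade the timers the attacker can see, which is roughly what browsers are doing to performance.now() as a stopgap. A minimal user-space sketch (the 100 us grain is an arbitrary choice for illustration):

Code:
#include <stdint.h>
#include <time.h>

#define COARSE_GRAIN_NS 100000ull   /* 100 us buckets: far too coarse to tell a cache hit from a miss */

/* Return a monotonic timestamp rounded down to the bucket boundary, so two
 * events inside the same bucket are indistinguishable to the caller. */
static uint64_t coarse_time_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    return ns - (ns % COARSE_GRAIN_NS);
}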

Korona wrote:
It seems that the Spectre exploit can be mitigated on Intel by replacing all indirect jumps with the sequence

*snip*

... which is ugly at best and also somewhat inefficient: it introduces two jumps AND prevents branch prediction. It seems that GCC will be patched to use this sequence. People who wrote their OSes in assembly: have fun fixing your jumps :D.

For calls it gets even uglier as you need a call to a label to push the current RIP before you jump to the trampoline.

And then an attacker will just use their own code that works for the exploit, making that moot =P
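For anyone curious what the snipped sequence looks like in practice: the publicly posted "retpoline" thunks have roughly this shape. This is a sketch of the published idea (in GNU assembler syntax via a file-scope asm block), not the actual GCC patch, and the thunk name is made up:

Code:
/* Call sites like "call *%r11" get rewritten to "call retpoline_call_r11".
 * Architecturally this still reaches the target held in %r11; speculatively,
 * the return-stack predictor sends execution into the pause/lfence loop
 * instead of letting the indirect-branch predictor pick an attacker-trained
 * target. */
__asm__(
    ".globl retpoline_call_r11\n"
    "retpoline_call_r11:\n"
    "    call 1f\n"              /* push the address of label 2 as the return address */
    "2:  pause\n"
    "    lfence\n"
    "    jmp 2b\n"               /* speculation is trapped in this harmless loop */
    "1:  mov %r11, (%rsp)\n"     /* overwrite the pushed return address with the real target */
    "    ret\n"                  /* 'return' into the target */
);

(The deployed mitigations also pair this with extra work like refilling the return stack buffer on context switches; this is just the core trick.)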

~ wrote:
Now that I think about it, I think that this problem could also be greatly mitigated if most of the data of a program was put on disk and then only a portion of it loaded at a time. The memory usage would be much better, much lower, and storing all data structures mainly on disk for most applications would make hitting enough of very private data (enough to be usable or recognizable) too difficult and infrequent, so storing all user data on disk could also be an option.

Then you have the same performance problem as disabling caches, but a tad worse. Also as a bonus you put so much more wear on the drive that it will stop working much sooner (which is a potentially even bigger problem - you pretty much just DOS'd the hardware!).


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 1:55 pm 

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
Sik wrote:
~ wrote:
Wouldn't it be enough to just disable the CPU cache entirely at least for security-critical machines as it's easier to configure as the default? Or invalidate the entire cache every time we switch/enter/exit/terminate/create a process or thread?

Disable the cache and you will end up with a computer that runs about as fast as the average PC in the '90s. RAM is slow.
You could set aside uncached pages for a program only for specific tasks: holding the user name and password during login, hashing or calculating SSL data, or the most critical steps in a cryptographic program's chain of operations, so that the few bytes that really matter never end up leaking through the cache. With uncached memory limited to critical security data (and whatever else programs and users choose), the computer could never be slowed down enough to be noticeable, and this problem would likely become as irrelevant as some random badly implemented protection scheme.

The running program's code would still be cached as normal, so uncaching a few data buffers wouldn't take away the advantages of cached code; the fear of slowdown could well be irrational, and this could be a really good solution.
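To show the mechanism I mean (just the mechanism; pte_of() below is a stand-in for whatever page-table walker a kernel has): on x86 a page is made uncacheable by setting the PCD bit, usually together with PWT, in its page-table entry.

Code:
#include <stdint.h>

#define PTE_PWT (1ull << 3)   /* page-level write-through */
#define PTE_PCD (1ull << 4)   /* page-level cache disable */

/* Hypothetical helper: returns a pointer to the PTE that maps vaddr. */
extern uint64_t *pte_of(uintptr_t vaddr);

static inline void invlpg(uintptr_t vaddr)
{
    __asm__ volatile("invlpg (%0)" : : "r"(vaddr) : "memory");
}

/* Mark the 4 KiB page containing vaddr as uncacheable and flush its TLB entry. */
static void mark_page_uncacheable(uintptr_t vaddr)
{
    uint64_t *pte = pte_of(vaddr);
    *pte |= PTE_PCD | PTE_PWT;
    invlpg(vaddr);
}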

_________________
Live PC 1: Image Live PC 2: Image

YouTube:
http://youtube.com/@AltComp126/streams
http://youtube.com/@proyectos/streams

http://master.dl.sourceforge.net/projec ... 7z?viasf=1


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 2:36 pm 

Joined: Thu Jul 05, 2007 8:58 am
Posts: 223
@Tilde: Please read: data does NOT need to be in the cache to be vulnerable to Meltdown/Spectre. Merely being in memory is enough.
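To make that concrete, here is a rough sketch of the flush+reload cache side channel the public write-ups describe for the exfiltration step (the transient kernel/out-of-bounds read itself is left out, and the threshold and "secret" value below are made up). Notice that the secret is never read from the cache: it only selects which line of the attacker's own probe array becomes cached, and that is what the timing detects.

Code:
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, _mm_mfence, __rdtscp */

#define LINES  256
#define STRIDE 4096             /* one page per possible byte value, to sidestep the prefetcher */

static uint8_t probe[LINES * STRIDE];

/* Time a single load with rdtscp. */
static uint64_t time_access(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    return __rdtscp(&aux) - t0;
}

int main(void)
{
    /* 1. Flush: make sure no probe line is cached. */
    for (int i = 0; i < LINES; i++)
        _mm_clflush(&probe[i * STRIDE]);
    _mm_mfence();

    /* 2. Stand-in for the transient access.  A real exploit performs
     *    probe[secret * STRIDE] inside mis-speculated or faulting code,
     *    where "secret" is the byte it is not allowed to read. */
    uint8_t secret = 42;   /* made-up value, purely for illustration */
    *(volatile uint8_t *)&probe[secret * STRIDE];

    /* 3. Reload: the one line that is now fast gives away the byte. */
    for (int i = 0; i < LINES; i++)
        if (time_access(&probe[i * STRIDE]) < 80)   /* threshold is machine-dependent */
            printf("recovered byte: %d\n", i);

    return 0;
}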


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 2:47 pm 

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
What I don't really get is that the paper always seems to mention a set of mispredictions/misses used to force memory that initially isn't in the cache to be loaded into it, so that the cache can later be dumped from a different process. So from what the paper says, the cache does always seem to be involved at some stage as a requirement.

So I don't really see how a page that is specifically marked with the Cache Disable bit could possibly be leaked. Those pages are supposed to never be copied into the cache for any operation, or they could cause errors for tightly-programmed program logic.

I don't think that a page marked cache-disabled is ever placed in the cache, not even temporarily. I think the cache is strict about this: if a page is marked as not cacheable, it will simply never be cached, not even for temporary operations, isn't that so? It would not make reasonable, sane implementation sense otherwise.

So if that's the case, Spectre and Meltdown would be entirely irrelevant as long as truly private data is always put in pages marked as not cacheable: it would just never show up in leakable caches or anywhere else, no matter how much that CPU behavior is abused to attempt a leak, isn't that so?

_________________
Live PC 1: Image Live PC 2: Image

YouTube:
http://youtube.com/@AltComp126/streams
http://youtube.com/@proyectos/streams

http://master.dl.sourceforge.net/projec ... 7z?viasf=1


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 2:55 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Sik wrote:
Brendan wrote:
For "make kernel pages inaccessible" it doesn't necessarily need to be all kernel pages. Pages that contain sensitive information (e.g. encryption keys) would need to be made inaccessible, but pages that don't contain sensitive information don't need to be made inaccessible. This gives 2 cases.

If PCID can't be used; then you could separate everything into "sensitive kernel data" and "not sensitive kernel data" and leave all of the "not sensitive kernel data" mapped in all address spaces all the time to minimise the overhead. For a monolithic kernel (especially a pre-existing monolithic kernel) it'd be almost impossible to separate "sensitive" and "not sensitive" (because there's all kinds of drivers, etc to worry about) and it'd be easy to overlook something; so you'd mostly want a tiny stub where almost everything is treated as "sensitive" to avoid the headaches. For a micro-kernel it wouldn't be too hard to distinguish between "sensitive" and "not sensitive", and it'd be possible to create a micro-kernel where everything is "not sensitive", simply because there's very little in the kernel to begin with. The performance of a micro-kernel would be much less affected or not affected at all; closing the performance gap between micro-kernel and monolithic, and potentially making micro-kernels faster than monolithic kernels.

Note: For this case, especially for monolithic kernels, if you're paying for the TLB thrashing anyway then it wouldn't take much more to have fully separated virtual address spaces, so that both user-space and kernel-space can be larger (e.g. on a 32-bit CPU, let user-space have almost 4 GiB of space and let kernel have a separate 4 GiB of space).

If PCID can be used (which excludes 32-bit OSs); then the overhead of making kernel pages inaccessible is significantly less. In this case, if nothing in the kernel is "sensitive" you can do nothing, and if anything in the kernel is "sensitive" you'd probably just use PCID to protect everything (including the "not sensitive" data). In practice this probably means that monolithic kernels and some micro-kernels are affected; but a "100% not sensitive micro-kernel" wouldn't be affected.

In other words; it reduces the performance gap between some micro-kernel and monolithic kernels, but not all micro-kernels, and probably not enough to make some micro-kernels faster than monolithic kernels.

How do you get a non-sensitive kernel? Even in the smallest kernels, the kernel is the one in charge of taking care of assigning memory to each process, and that's probably the most sensitive part of the whole system since the kernel is the one granting permissions to everything else. The kernel itself may not hold the sensitive information but messing with it can open the gates to accessing said sensitive information elsewhere.


For an example (based on my micro-kernels) it might go something like this:
  • Kernel's code? Attacker can get that in lots of different ways (it's not even encrypted in files used for installing the OS, and there's no self modifying code) so there's no point treating that as sensitive.
  • The entire physical memory manager's state? Unless the attacker can access raw physical addresses (e.g. maybe a driver for a bus mastering device on a system without an IOMMU) this information is useless for an attacker, and if the attacker can access raw physical addresses there's very little hope of security anyway; so there's no point treating it as sensitive.
  • The entire virtual memory manager's state? Attacker can't really use any of this information either. No point treating it as sensitive.
  • The entire scheduler's state? Attacker can't really use any of this information (and half is deliberately exposed to user-space for things like "top"). No point treating it as sensitive.
  • The message queues? Medium and large messages are stored in "physical address of page or page table" format that is useless for an attacker, so there's no point treating that as sensitive. Smaller messages are stored as "literal message contents" (and messages can contain anything, including passwords, encryption keys, etc) so this would have to be treated as "sensitive"; but I can just store small messages the same way I store medium messages, so everything can be "not sensitive" (see the sketch after this list).
  • Resource permissions (for IO ports, memory mapped IO ranges, etc)? Attacker can't really use any of this information (and half is deliberately exposed to user-space so admin/maintenance people can see which devices use what). No point treating it as sensitive.
  • Information about the type and capabilities of each CPU? Deliberately exposed to user-space so software can use it (instead of CPUID). No point treating it as sensitive.
  • Information about power management (CPU temperatures, speeds, etc)? Deliberately exposed to user-space so software can use it (monitoring tools). No point treating it as sensitive.
  • Encryption key/s used for checking digital signatures? These are public keys that are supposed to be easily obtainable. No point treating them as sensitive.
  • "Crypto random number generator" entropy sources? Would have to be treated as "sensitive". Shift this to a user-space service instead.

Sik wrote:
Brendan wrote:
The other thing I'd want to mention is that for all approaches and all kernel types (but excluding "kernel given completely separate virtual address space so that both user-space and kernel-space can be larger"), the kernel could distinguish between "more trusted" processes and "less trusted" processes and leave the kernel mapped (and avoid PTI overhead) when "more trusted" processes are running. In practice this means that if the OS supports (e.g.) digitally signed executables (and is therefore able to associate different amounts of trust depending on the existence of a signature and depending on who the signer was) then it may perform far better than an OS that doesn't. This makes me think that various open source groups that shun things like signatures (e.g. GNU) may end up penalised on a lot of OSs (possibly including future versions of Linux).

The problem is that software is buggy (programmers are not perfect). Your executable may be from a trusted source and still have some exploit that can be used to attack the machine from an outside vector (e.g. data coming in).

Digitally signing the executable is only useful to (supposedly reliably) know that the executable is the one you were intended to get, but says nothing about the actual safety of it.


Digital signatures tell you who created the executable (e.g. "this executable must have come from Brendan because it was signed by his signature") in addition to telling you if the executable was tampered with after it was signed. On a scale of 1 to 10; how much would you trust an executable that was signed by a large company that can be sued; and how much would you trust an executable that was not signed by anyone?
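For concreteness, the check itself is cheap. A sketch of load-time verification using Ed25519 detached signatures via libsodium (the library choice and layout are an assumption for illustration, not what any OS in this thread actually uses):

Code:
#include <sodium.h>   /* call sodium_init() once at startup before using this */

/* Returns 1 if `image` (len bytes) carries a valid detached signature `sig`
 * made with the private half of `publisher_pk`, 0 otherwise. */
static int executable_is_trusted(const unsigned char *image, unsigned long long len,
                                 const unsigned char sig[crypto_sign_BYTES],
                                 const unsigned char publisher_pk[crypto_sign_PUBLICKEYBYTES])
{
    return crypto_sign_verify_detached(sig, image, len, publisher_pk) == 0;
}

The kernel (or loader) could then hang its trust decision off this: executables that verify against a known publisher key get the "more trusted" treatment, everything else gets the fully isolated address space.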


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 3:38 pm 

Joined: Wed Jul 13, 2011 7:38 pm
Posts: 558
Schol-R-LEA wrote:
Also, Terry Davis was right about running only in Ring 0. Who could have guessed that?
</absurdity>


It's too bad Terry's in prison right now, otherwise he'd be having a field day.


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 4:04 pm 

Joined: Wed Aug 17, 2016 4:55 am
Posts: 251
Brendan wrote:
For an example (based on my micro-kernels) it might go something like this:
  • Kernel's code? Attacker can get that in lots of different ways (it's not even encrypted in files used for installing the OS, and there's no self modifying code) so there's no point treating that as sensitive.

    • ...
    • The entire physical memory manager's state? Unless the attacker can access raw physical addresses (e.g. maybe a driver for a bus mastering device on a system without an IOMMU) this information is useless for an attacker, and if the attacker can access raw physical addresses there's very little hope of security anyway; so there's no point treating it as sensitive.
    • The entire virtual memory manager's state? Attacker can't really use any of this information either. No point treating it as sensitive.
    • ...
    • The message queues? Medium and large messages are stored in "physical address of page or page table" format that is useless for an attacker, so there's no point treating that as sensitive. Smaller messages are stored as "literal message contents" (and messages can contain anything, including passwords, encryption keys, etc) so this would have to be treated as "sensitive"; but I can just store small messages the same way I store medium messages, so everything can be "not sensitive".
    • ...

Isn't the whole issue that attackers could potentially figure out how to mess with physical memory? Now, even in theory with a non-buggy CPU the best you should be able to do is figure out the physical address (through timing attacks), but then on top of that a hardware bug or an OS bug could make it feasible to also mess with said memory page. Seeing as we're discussing workarounds to render the latter infeasible in practice, I'd say that treating the physical location of data in memory as sensitive is a reasonable thing.

Brendan wrote:
Sik wrote:
The problem is that software is buggy (programmers are not perfect). Your executable may be from a trusted source and still have some exploit that can be used to attack the machine from an outside vector (e.g. data coming in).

Digitally signing the executable is only useful to (supposedly reliably) know that the executable is the one you were intended to get, but says nothing about the actual safety of it.


Digital signatures tell you who created the executable (e.g. "this executable must have come from Brendan because it was signed by his signature") in addition to telling you if the executable was tampered with after it was signed. On a scale of 1 to 10; how much would you trust an executable that was signed by a large company that can be sued; and how much would you trust an executable that was not signed by anyone?

1

The executable being confirmed to not be tampered guarantees that no additional bugs/exploits were inserted. It doesn't guarantee that the executable didn't have one because the source program was buggy for starters, and somebody could still find a way to exploit said bugs in a malicious way, and use that to escalate into much worse attacks. Ergo, the kernel should still assume the worst because it can't guarantee the executables it runs are perfectly safe.

If the executable is confirmed to be tampered (when you want to enforce non-tampered executables) then immediately lock down the computer and force a sysadmin to deal with the problem because it's already critical. Your trust level for the whole system should become "screwed over" because something broke a safety assumption. This is a whole different level of security problem though.


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 7:46 pm 

Joined: Wed Dec 01, 2010 3:41 am
Posts: 1761
Location: Hong Kong
Brendan wrote:
Digital signatures tell you who created the executable (e.g. "this executable must have come from Brendan because it was signed by his signature") in addition to telling you if the executable was tampered with after it was signed. On a scale of 1 to 10; how much would you trust an executable that was signed by a large company that can be sued; and how much would you trust an executable that was not signed by anyone?


If the signature is extended to include security audit information (if any), it will be much more useful.


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 8:22 pm 

Joined: Wed Aug 17, 2016 4:55 am
Posts: 251
OK, the more stuff gets disclosed the more things seem to change, so huuuuh, screw it. How do Meltdown and Spectre actually work? Because I just keep finding conflicting information on this.

The reason I've brought up Rowhammer in some previous posts is that somebody had speculated you could combine these exploits with it in order to not just read data not belonging to you but also to modify it. Not sure how feasible that is, though.

bluemoon wrote:
Brendan wrote:
Digital signatures tell you who created the executable (e.g. "this executable must have come from Brendan because it was signed by his signature") in addition to telling you if the executable was tampered with after it was signed. On a scale of 1 to 10; how much would you trust an executable that was signed by a large company that can be sued; and how much would you trust an executable that was not signed by anyone?


If the signature is extended to include security audit information (if any), it will be much more useful.

I suppose the idea would be for third parties to audit the code and include their own signature? Then, in order to validate, it wouldn't be enough for the executable merely to be untampered; it would also have to have passed through somebody else who verified the code is indeed safe (and who may have found mistakes that the original developers didn't).


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Thu Jan 04, 2018 9:15 pm 

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
This problem only looks to me like bad synchronization between protected usage of the cache and the running programs when multitasking.

I always wondered what could happen if I used the cache in a program and then switched to another program without doing anything about the contents of the cache when switching the task.

It seems that my suspicions were right: if nothing appropriate is done, the cache contents can leak unpredictably, as shown here, so this would still count more as a badly programmed memory management subsystem than as an actual CPU bug (for example, why not explicitly flush the current program's most recently used data areas and wait for the flush to complete before switching to another task?).

This was probably discovered by an enthusiast programmer or group who wanted to learn about paging, found this behavior while trying to determine how to use the cache in a truly secure way, and realised that if misused, the cache could leak contents between processes.

_________________
Live PC 1: Image Live PC 2: Image

YouTube:
http://youtube.com/@AltComp126/streams
http://youtube.com/@proyectos/streams

http://master.dl.sourceforge.net/projec ... 7z?viasf=1


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Fri Jan 05, 2018 1:38 am 

Joined: Thu Aug 11, 2005 11:00 pm
Posts: 1110
Location: Tartu, Estonia
Brendan wrote:
  • The entire physical memory manager's state? Unless the attacker can access raw physical addresses (e.g. maybe a driver for a bus mastering device on a system without an IOMMU) this information is useless for an attacker, and if the attacker can access raw physical addresses there's very little hope of security anyway; so there's no point treating it as sensitive.

What about rowhammer? Doesn't that aim at changing the contents of a process' page table, such that it can write to it (and thus map arbitrary physical pages into its address space)?

_________________
Programmers' Hardware Database // GitHub user: xenos1984; OS project: NOS


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Fri Jan 05, 2018 3:06 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Sik wrote:
Brendan wrote:
For an example (based on my micro-kernels) it might go something like this:
  • Kernel's code? Attacker can get that in lots of different ways (it's not even encrypted in files used for installing the OS, and there's no self modifying code) so there's no point treating that as sensitive.

    • ...
    • The entire physical memory manager's state? Unless the attacker can access raw physical addresses (e.g. maybe a driver for a bus mastering device on a system without an IOMMU) this information is useless for an attacker, and if the attacker can access raw physical addresses there's very little hope of security anyway; so there's no point treating it as sensitive.
    • The entire virtual memory manager's state? Attacker can't really use any of this information either. No point treating it as sensitive.
    • ...
    • The message queues? Medium and large messages are stored in "physical address of page or page table" format that is useless for an attacker, so there's no point treating that as sensitive. Smaller messages are stored as "literal message contents" (and messages can contain anything, including passwords, encryption keys, etc) so this would have to be treated as "sensitive"; but I can just store small messages the same way I store medium messages, so everything can be "not sensitive".
    • ...

Isn't the whole issue that attackers could potentially figure out how to mess with physical memory?


If the attacker can access physical memory then they can scan physical memory looking for paging structures and then read all kernel data (and all data from all other processes) from physical memory regardless of whether it's currently mapped into their address space or not.

Think of it like this:

  • Can attacker access physical memory?
      Yes; there's no security and there's no point caring about anything else until this is fixed somehow.
      No; we have some hope, so we need to care about other things (like the recent speculative execution vulnerabilities).

Sik wrote:
Brendan wrote:
Sik wrote:
The problem is that software is buggy (programmers are not perfect). Your executable may be from a trusted source and still have some exploit that can be used to attack the machine from an outside vector (e.g. data coming in).

Digitally signing the executable is only useful to (supposedly reliably) know that the executable is the one you were intended to get, but says nothing about the actual safety of it.


Digital signatures tell you who created the executable (e.g. "this executable must have come from Brendan because it was signed by his signature") in addition to telling you if the executable was tampered with after it was signed. On a scale of 1 to 10; how much would you trust an executable that was signed by a large company that can be sued; and how much would you trust an executable that was not signed by anyone?


The executable being confirmed to not be tampered guarantees that no additional bugs/exploits were inserted. It doesn't guarantee that the executable didn't have one because the source program was buggy for starters, and somebody could still find a way to exploit said bugs in a malicious way, and use that to escalate into much worse attacks. Ergo, the kernel should still assume the worst because it can't guarantee the executables it runs are perfectly safe.


You're trying to turn this into an "either 0% trusted or 100% trusted, with guarantees" scenario with nothing in between. The reality is that trust is an "anywhere from 0% to 99.9999999%, with no guarantees" thing. There is no guarantee that how much a kernel trusts an executable (which is "knowable") has anything to do with how secure the executable actually is (which is "unknowable"). Code that has no digital signature (e.g. where the kernel can't assume that the digital signature would have been blacklisted if a security issue was found) is on one end of the scale (not very trusted, even if it's actually perfectly secure) and code with a valid (not blacklisted) digital signature is at the other end of the scale (more trusted, even though it might not be secure).

Sik wrote:
If the executable is confirmed to be tampered (when you want to enforce non-tampered executables) then immediately lock down the computer and force a sysadmin to deal with the problem because it's already critical. Your trust level for the whole system should become "screwed over" because something broke a safety assumption. This is a whole different level of security problem though.


If an executable is installed on the computer (and the digital signature checked when the executable was installed), but somehow the signature stops being correct after the executable was installed; then you have to assume there's a major problem with the OS (something was able to modify executable files). If there's no digital signature you can't check, so you probably shouldn't trust unsigned executables in the first place.

If an executable is confirmed to be tampered with before it's installed or used (e.g. when it's been downloaded from the internet the first time), then you assume that the problem was at the sender's end (dodgy website) and that your OS is fine. If there's no digital signature you can't tell if the executable was tampered with or not, so you probably shouldn't trust unsigned executables in the first place.

If an executable was created/compiled from source by a local user; then the tools used to compile it would digitally sign the executable with a "local user signature" (so that how much the OS trusts the executable depends on how much the OS trusts the user, which depends on how much the admin trusted the user). Other computers wouldn't accept this signature (e.g. I wouldn't be able to take an executable that I created and that my computer trusts, and copy it "as is" to your computer and expect your computer to trust it). There would have to be "publishing" as a formal step, where "local user signature" (that won't be accepted by other people's computers) is checked and then replaced by "publisher's signature" (that may be accepted by other people's computers).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: CPU bug makes virtually all chips vulnerable
PostPosted: Fri Jan 05, 2018 3:26 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

XenOS wrote:
Brendan wrote:
  • The entire physical memory manager's state? Unless the attacker can access raw physical addresses (e.g. maybe a driver for a bus mastering device on a system without an IOMMU) this information is useless for an attacker, and if the attacker can access raw physical addresses there's very little hope of security anyway; so there's no point treating it as sensitive.

What about rowhammer? Doesn't that aim at changing the contents of a process' page table, such that it can write to it (and thus map arbitrary physical pages into its address space)?


If you can obtain all of the physical memory manager's state, then you could use that information to make sure rowhammer isn't wasting time modifying pages that are free. For rowhammer to work properly you need to know the physical address of something that matters, and if you know that then you have no reason to care which pages are free.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

