OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: SAS HDD Drive
PostPosted: Thu Aug 17, 2017 2:07 pm 
Joined: Sun Jun 16, 2013 4:09 am
Posts: 333
Wow, thanks for all the great replies.

All I need is Identify, Configure, Read, Write, and fault handling.
I am sure more could be added to the driver, but for what I am doing it is "Perfect" (yeah, time will tell, LOL)
Funny times. Good to laugh...

Once the SATA drive turns up I will test it in a server.

Again, thanks for all the great responses.

Ali


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 12:59 am 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

mallard wrote:
Brendan wrote:
unexpected hot-unplug under load


The only "proper" way to handle that is to alert the user that their data is most likely lost and the filesystem corrupted. The "dirty" bit on most reasonable filesystems provides an indication that recovery is necessary. If it occurs on your "system drive" (the device that contains vital OS files or swap space) you have no choice but to "panic" (or maybe terminate every process that has data swapped-out to the now-unavailable device, assuming that's nothing critical). You might be able to get away with a "please re-connect that device immediately" type message in some very limited cases.


The only proper way for the storage device driver to handle this is informing whatever was using it (e.g. whatever has "opened" the whole drive or any partition) and adding it to some kind of event log for administrators. How the error (from the storage device driver) is handled by whatever was using (some or all of) the drive depends on what was using it.

For example: the kernel's swap space manager might respond by remembering that one specific swap provider disappeared, and then (later, only if/when necessary) terminating processes if they can't continue without data from the specific swap provider that was removed; a software RAID layer might switch to some kind of "degraded mode" (continuing normal operation using the other drives in the RAID array, but being extra cautious because it might not be able to sustain the removal or failure of any other drive in the array); and a file system might wait for 3 seconds (in case the same drive is plugged back in) before telling the VFS to purge all cached data from that file system and terminating itself.
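
As a rough sketch of what that notification fan-out might look like (a minimal C example; every name in it is invented for illustration, not taken from any real driver):

Code:
#include <stdio.h>

#define MAX_CONSUMERS 8

/* Called when the drive vanishes; registered by each consumer. */
typedef void (*removal_cb)(void *ctx);

struct drive {
    removal_cb cb[MAX_CONSUMERS];   /* swap manager, RAID layer, FS, ... */
    void *ctx[MAX_CONSUMERS];
    int nconsumers;
};

static void fs_on_removal(void *ctx)
{
    (void)ctx;
    puts("fs: purging cached data, detaching from VFS");
}

static void drive_on_unplug(struct drive *d)
{
    /* Log first, then let each consumer decide what "handle" means. */
    fprintf(stderr, "event log: drive removed unexpectedly\n");
    for (int i = 0; i < d->nconsumers; i++)
        d->cb[i](d->ctx[i]);
}

int main(void)
{
    struct drive d = { .nconsumers = 1 };
    d.cb[0] = fs_on_removal;
    drive_on_unplug(&d);            /* simulate the hot-unplug */
    return 0;
}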

mallard wrote:
Note that there's no way to tell the difference between "user unplugged the device at a bad time" (any user-unpluggable device should have very conservative caching policies and as resilient a filesystem as possible to reduce the impact of this action) and "hardware failure" (e.g. the power supply to an external device frying itself, the user's cat biting through a cable, the device just deciding that it doesn't want to work anymore, etc.).


There's no way for an OS to determine if a SATA device is "user unpluggable" on its own. This means that all SATA devices (including drives bolted inside a sealed computer's case) have to be considered "user unpluggable"; unless the OS provides some way for the user/administrator to tell the OS to treat specific drive(s) as "not removable" (e.g. maybe some kind of checkbox in a "storage device properties" dialog box somewhere).

mallard wrote:
Brendan wrote:
TRIM


Except on very early model SSDs, TRIM varies from useless to dangerous. Competent SSDs' internal garbage-collection routines are far more capable, and TRIM often has a severe and unpredictable performance penalty (especially on devices that only support the non-queued version). In the best devices it's simply a no-op and in the worst it's buggy and corrupts your filesystem. Unless you've got the resources to test every SSD (family) on the market to work out the small minority of devices where it's correctly implemented and actually improves performance, it's best avoided.


No; you're misinformed.

It's impossible for an SSD to know that a block is no longer being used unless the OS tells it that the block is no longer in use (via TRIM); and impossible for an SSD to do any garbage collection unless the OS uses TRIM.

The theory that you've misunderstood comes from a performance hack. The idea is that instead of using TRIM as soon as a block is no longer in use, it's faster for an OS to wait in the hope that the block can be reused/reallocated; and if the block is reused/reallocated then the TRIM can be skipped (and therefore performance can be improved). However, excessive use of this tactic ruins SSD performance for no reason (the number of blocks that aren't used and aren't TRIMed increases over time until the SSD thinks there are no free blocks at all, effectively crippling the SSD's ability to manage its blocks properly).
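
To illustrate the tactic being described, here's a toy sketch (invented names; a real file system would track this per extent, not in a flat array):

Code:
#include <stdbool.h>
#include <stdio.h>

#define NBLOCKS 16

/* Blocks the FS has freed but not yet told the SSD about. */
static bool pending_trim[NBLOCKS];

static void fs_dealloc(unsigned lba) { pending_trim[lba] = true; }

static void fs_alloc(unsigned lba)
{
    pending_trim[lba] = false;      /* reused before TRIM: TRIM skipped */
}

static void trim_sweep(void)        /* run from idle time, in batches */
{
    for (unsigned lba = 0; lba < NBLOCKS; lba++)
        if (pending_trim[lba]) {
            printf("TRIM lba %u\n", lba);
            pending_trim[lba] = false;
        }
}

int main(void)
{
    fs_dealloc(3);
    fs_dealloc(7);
    fs_alloc(3);                    /* block 3 reused, its TRIM is skipped */
    trim_sweep();                   /* only block 7 gets TRIMed */
    return 0;
}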

mallard wrote:
If you also have an encrypted filesystem (something any serious OS should support in 2017) it's even worse, because even in the unlikely case that it's implemented correctly by the hardware and necessary for that device, TRIM reveals metadata about which blocks are/are not used by your filesystem over an insecure channel. This metadata is then stored unencrypted in the "private space" on the SSD, where it's easily available to any sufficiently determined attacker.


Who cares? That metadata is worth nothing and could be published on a web site for the entire world to see without causing a security problem.

mallard wrote:
Brendan wrote:
SMART


As pointed out in the linked Wikipedia article, it's impossible to predict whether SMART is going to be available and work correctly in any given configuration. SMART data may be "dropped" by a RAID controller, USB interface, etc. Cheaper devices (particularly SSDs) do not support it. Even where it is available, the attribute numbers are non-standard and vendor-specific, so all you can do is guess that there's a problem based on the "common" list of attributes and display a warning (you have no way of knowing whether these "common" attributes mean what you think they do).


That's like saying that one computer somewhere might not have a network card, so no OS should ever support any networking. An OS should support SMART (in case the hardware does provide the information) and the OS should at least make use of any standardised data that is provided (and should probably also have optional modules or something, to be able to support any "vendor specific" data too).

Note that tsdnz is targeting "cloud" - a large number of mostly unsupervised computers containing "enterprise class" hardware, likely (I'd assume) with availability contracts (in the "pay compensation to customers when there's a hardware failure that interrupts service" way).

mallard wrote:
Handling "all possible kinds of errors and drive failures" is completely, 100%, impossible.


Handling all possible errors that occur is 100% possible; even if "handling" just means adding something to an event log and telling other parts of the OS (file systems, etc) that something bad happened. This has nothing to do with SMART (which is mostly about predicting failures before they occur so that you can prevent problems from occurring).

mallard wrote:
Brendan wrote:
  • supports IO priorities (so less important transfers don't ruin the performance of more important transfers)
  • cooperates with OS's caches, including having a robust "write ordering with sync" model for synchronisation


These are definitely features that should be implemented in a higher level of your storage subsystem and are not concerns of the low-level hardware-interfacing storage driver in any reasonably structured system.


The low level driver should keep track of pending requests; so that NCQ can work (and be abstracted), and to reduce latency and improve performance (avoid the additional time/overhead of negotiating with a higher level layer every single time one transfer completes and you want to start the next transfer). The low level driver needs to know about IO priorities to manage pending requests properly. The low level driver also needs to provide/uphold some sort of synchronisation model to make safe write ordering possible. You cannot shift it into a higher level (e.g. file system) unless you want to completely ruin performance.
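
As a crude sketch of that driver-side queue (invented names; real NCQ slot management is considerably more involved):

Code:
#include <stdio.h>

struct request { unsigned lba; int priority; };  /* higher = more urgent */

#define QLEN 4
static struct request q[QLEN] = {
    { 100, 1 }, { 200, 7 }, { 300, 3 }, { 400, 7 },
};
static int qcount = QLEN;

/* Pick the most important pending request when a hardware slot frees up. */
static struct request pick_next(void)
{
    int best = 0;
    for (int i = 1; i < qcount; i++)
        if (q[i].priority > q[best].priority)
            best = i;
    struct request r = q[best];
    q[best] = q[--qcount];          /* remove; order need not be preserved */
    return r;
}

int main(void)
{
    while (qcount) {                /* one dispatch per free hardware slot */
        struct request r = pick_next();
        printf("dispatch lba %u (priority %d)\n", r.lba, r.priority);
    }
    return 0;
}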


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 2:56 am 
Joined: Tue May 13, 2014 3:02 am
Posts: 280
Location: Private, UK
Brendan wrote:
adding it to some kind of event log for administrators


How are you going to do that when you've just lost your primary/only storage device?

Sure, maybe you can send it over a network or to the printer or something, but not in the general case. I'd also question whether having all the "infrastructure" required to make this work stored in "non-swappable" memory so it can work if the primary storage device goes away is really an efficient use of system resources.

Brendan wrote:
There's no way for an OS to determine if a SATA device is "user unpluggable" on its own.


That's true. However, eSATA usage is fairly uncommon; the overwhelming majority of "user unpluggable" devices will be USB (iSCSI should probably also be considered "user unpluggable", since network outages happen). That means a simple heuristic like "has the OS ever seen this device being hot-plugged?" will cover 99.99% of SATA devices. The performance penalty of treating every device as though it might disappear at any moment is too severe to be a viable option anywhere it isn't absolutely necessary.

Brendan wrote:
No; you're misinformed.


All benchmarks of modern SSDs I've seen show little (no more than about 5%, within the realm of experimental error) to no benefit to using TRIM. An example. There are a whole bunch of claims about TRIM being "magic": that it improves longevity, that it only has an effect if the device is close to full, that people see noticeable performance boosts (almost certainly placebo), but nothing concrete. At least one SSD manufacturer has confirmed that they treat TRIM as no more than a "hint" for the garbage collector and that it's basically a no-op.

If you have evidence that TRIM actually has a significant impact on modern (manufactured within the last 5 years) SSDs, I'd love to see it...

Brendan wrote:
Who cares? That metadata is worth nothing and could be published on a web site for the entire world to see without causing a security problem.


If your response to a confidential metadata leak is "Who cares?", I'm not going to take anything you say about security seriously. Knowing this sort of metadata can be used to answer questions like "how much actual data does this encrypted device store?" or "when was the last file updated?", etc. Those answers can be very useful in many circumstances (e.g. the authorities believe you've pirated some movie/downloaded documents critical of the government/etc.; knowing that the drive does in fact contain enough data to account for that, and that it was written around the time of the alleged download, could easily be used as evidence against you).

Since the low-level storage device driver shouldn't care whether the filesystem that's stored on it is encrypted, that's enough of a reason to avoid TRIM. Maybe you could allow it as an off-by-default user-settable option, if you feel that doing extra work for no benefit is worthwhile.

Brendan wrote:
That's like saying that one computer somewhere might not have a network card, so no OS should ever support any networking. An OS should support SMART (in case the hardware does provide the information) and the OS should at least make use any standardised data that is provided (and should probably also have optional modules or something, to be able to support any "vendor specific" data too).


You've missed the point. SMART data is useless in the general case. All it gives you is (up to) 255 numbers with no standard defined meaning. While there is a "common" list of definitions for some of those numbers, you have no way of confirming whether a particular device is using those definitions. Even if the value that "commonly" means "Reported Uncorrectable Errors" is skyrocketing, you cannot be certain that that is what it actually means. Sure, err on the side of caution and display a warning, but unless you have the resources to build a database of every known storage device and the actual meanings of the SMART data, that's about all you can do.

Brendan wrote:
Note that tsdnz is targeting "cloud" - a large number of mostly unsupervised computers containing "enterprise class" hardware

Which means RAID controllers that almost certainly don't report SMART data (since it's impossible to do correctly; how do you report SMART data from an individual disk in an array that's presented to the OS as one volume?).

Brendan wrote:
Handling all possible errors that occur is 100% possible


Nope. As I've said, you can't even with 100% certainty say whether an error even exists or whether the SMART value you're assuming indicates an error actually means something different on this particular device. Therefore, handling "all possible errors" falls flat at the first hurdle.

Brendan wrote:
The low level driver should keep track of pending requests; so that NCQ can work (and be abstracted), and to reduce latency and improve performance (avoid the additional time/overhead of negotiating with a higher level layer every single time one transfer completes and you want to start the next transfer). The low level driver needs to know about IO priorities to manage pending requests properly. The low level driver also needs to provide/uphold some sort of synchronisation model to make it possible. You can not shift it into a higher level (e.g. file sytem) unless you want to completely ruin performance.


At the hardware driver level, all you need is functionality like (but not limited to) "queue a read/write with this priority" and "flush all hardware caches in preparation for shutdown/disconnect" and some way of notifying the caller when their read/write has completed. A cancellation system is probably useful too. The hardware driver need know nothing about software-based caching, software command queuing (used when the hardware queue is at maximum length), process/thread I/O priorities (or even the existence of processes/threads, apart from the need to ensure data is read/written from the correct address space), etc.
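
In C, that narrow interface might look something like this (a sketch with invented names, not any real OS's API):

Code:
#include <stdint.h>
#include <stddef.h>

/* Completion notification: status is 0 on success, negative on error. */
typedef void (*io_done)(void *ctx, int status);

struct storage_driver {
    int (*queue_read)(uint64_t lba, size_t count, void *buf,
                      int priority, io_done done, void *ctx);
    int (*queue_write)(uint64_t lba, size_t count, const void *buf,
                       int priority, io_done done, void *ctx);
    int (*cancel)(int request_id);  /* best-effort cancellation */
    int (*flush)(void);             /* drain hw caches before shutdown/unplug */
};

static int stub_flush(void) { return 0; }

int main(void)
{
    struct storage_driver drv = { .flush = stub_flush };
    return drv.flush();             /* e.g. called just before shutdown */
}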



 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 5:22 am 
Joined: Fri Aug 19, 2016 10:28 pm
Posts: 360
SATA is probably not the right interface; but with enterprise filesystem solutions running over FC and other SCSI transports, you want robustness against connection loss. This means both the storage and the network channel if you are implementing a clustered fs. And if you are going to support SATA anyway, and you want to create an enterprise OS, you may as well support the rare transient disconnect. Of course this depends on the type of infrastructure as well. With hyperconverged/converged infrastructure you may not care that much, because each node has separate availability. But with a SAN, you would prefer not to lose the applications in the entire cluster because of some correctable issue with the Fibre Channel network. And yes, you should have redundant paths to the storage, but more software resilience is always desirable, unless it has expensive overhead.

How do you recover? You could always replay the last journal segment that has been successfully written out, as far as the metadata is concerned. Regarding the data, it depends on whether you purge the fs cache immediately after sending the request to the storage, or retain it until the next periodic flush of the device cache, which you could perform every half a minute or so. In the latter case, you could replay the disk writes since the last flush. Doing so would be idempotent with respect to any completed writes.
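
A toy sketch of that replay (invented names), showing why it's idempotent: re-writing a block that already reached the media just writes the same bytes to the same LBA again:

Code:
#include <stdio.h>

struct journal_entry { unsigned lba; const char *data; };

/* Writes journaled since the last successful device flush. */
static const struct journal_entry journal[] = {
    { 10, "aaaa" }, { 11, "bbbb" }, { 12, "cccc" },
};

static void replay_since_last_flush(void)
{
    for (unsigned i = 0; i < sizeof journal / sizeof journal[0]; i++)
        printf("rewrite lba %u with '%s'\n", journal[i].lba, journal[i].data);
}

int main(void)
{
    replay_since_last_flush();      /* safe even if some writes completed */
    return 0;
}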

I don't know how many devices support TRIM usefully. I am not aware of many details, but I have read about performance issues and implementation differences. But if the device is strictly non-conformant, the administrator can always disable the feature (or not enable it). Furthermore, obviously if the implementation is slow, this means that it is not a no-op. And if it is a no-op, it is not slow. Besides, TRIM can be performed in batches, or even triggered manually (e.g. by fstrim on Linux) long after the actual block deallocation.

Regarding the effect, it is the same as that of the vendor-reserved space on the SSD, which as we know must be useful; if it were not useful, the manufacturer wouldn't set it aside. And depending on the SSD class and the usage, the unallocated space may be comparable to or greater than the reserved space, thus enhancing the SSD's performance and longevity correspondingly. I expect that this will be directly related to the write amplification of the FTL, which is rumored to be (total space)/(reserved space + unallocated space).

On the practical benefit: since write amplification is a garbage-collection side-effect, and garbage collection can be performed in parallel or deferred, the reclaimed space will be relevant to system performance only in sustained random-write modes of operation (e.g. sustained until space overflow). That is why most tests and usage do not see the effect. But as with all things, the mileage will vary. For example, performing extract-transform-load of trace records into a database (such as those used by revenue assurance/data reconciliation solutions) produces a lot of random writes to the database indexes. The traces can be huge and arrive continuously from things like switches in telecoms, ATM transactions, etc. So I am pretty confident that in the extreme cases the write amplification could be felt. The question is whether the architect would choose to spend money on software or hardware to fix the problem, and what the relative costs, benefits and reliability of the solution would be.

Edit: Note that there could be other benefits from thin provisioning of storage. Virtual storage (not only in VMs, but storage controllers) could theoretically reclaim the space and make it available to other LUNs.


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 6:41 am 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

mallard wrote:
Brendan wrote:
adding it to some kind of event log for administrators


How are you going to do that when you've just lost your primary/only storage device?

Sure, maybe you can send it over a network or to the printer or something, but not in the general case. I'd also question whether having all the "infrastructure" required to make this work stored in "non-swappable" memory so it can work if the primary storage device goes away is really an efficient use of system resources.


Event logs are always stored in memory (and then synced to disk) to avoid lots of tiny writes.

For a "mostly intended for a single local user" OS, you'd add the event to the event log in memory, then (later) try to update the copy on disk; and in the rare and relatively irrelevant case that this happens to also be the disk that was removed (and there's no RAID or any other redundancy) it won't work, but it will work for all of the normal cases that matter (e.g. it wasn't the OS's primary/only storage device) and therefore it's highly important (despite not working properly for your irrelevant case).

For a "peer to peer distributed" OS (like mine), there is no "primary/only storage device" - there's just local copies/caches of a (logical, not physical) global file system. For a "cloud OS" (like the original poster's), there's probably some kind of "management computer" on the network (that is the only computer that administrators use for administration/management tasks) and events from hardware errors wouldn't be stored on a local disk at all.

mallard wrote:
Brendan wrote:
There's no way for an OS to determine if a SATA device is "user unpluggable" on its own.


That's true. However, eSATA usage is fairly uncommon; the overwhelming majority of "user unpluggable" devices will be USB (iSCSI should probably also be considered "user unpluggable", since network outages happen). That means a simple heuristic like "has the OS ever seen this device being hot-plugged?" will cover 99.99% of SATA devices. The performance penalty of treating every device as though it might disappear at any moment is too severe to be a viable option anywhere it isn't absolutely necessary.


Almost all of the servers here have "hot plug drive bays" in the front of their cases. Most of them are SATA (the oldest ones are some sort of "parallel SCSI"). I suspect you're fixated on cheap mobile devices (laptops, etc).

mallard wrote:
Brendan wrote:
No; you're misinformed.


All benchmarks of modern SSDs I've seen show little (no more than about 5%, within the realm of experimental error) to no benefit to using TRIM. An example.


Those benchmarks show exactly what I'd expect to see - when the drive is new ("fresh state") TRIM makes no difference (because the drive has a huge amount of "must be free because they've never been used" blocks), and as the disk becomes more and more used TRIM always improves performance.

From those benchmarks (the difference between "no TRIM,used state" and "TRIM enabled, used state", for every single graph on that page) it's obvious that TRIM improves performance by about 10%.

mallard wrote:
Brendan wrote:
Who cares? That metadata is worth nothing and could be published on a web site for the entire world to see without causing a security problem.


If your response to a confidential metadata leak is "Who cares?", I'm not going to take anything you say about security seriously. Knowing this sort of metadata can be used to answer questions like "how much actual data does this encrypted device store?" or "when was the last file updated?", etc. Those answers can be very useful in many circumstances (e.g. the authorities believe you've pirated some movie/downloaded documents critical of the government/etc.; knowing that the drive does in fact contain enough data to account for that, and that it was written around the time of the alleged download, could easily be used as evidence against you).


Surely you'd want to use TRIM to make sure that all the terrorists (who desperately want to know how much free space you have, and all have physical access to your hard drive for some reason) can tell that there's lots of free space and that you couldn't be guilty of pirating a movie.

mallard wrote:
Since the low-level storage device driver shouldn't care whether the filesystem that's stored on it is encrypted, that's enough of a reason to avoid TRIM. Maybe you could allow it as an off-by-default user-settable option, if you feel that doing extra work for no benefit is worthwhile.


The low level storage driver should provide support for TRIM, and let the file system decide if/when to use it. That way all sane file systems will use it, and file systems designed by deluded "tin-foil-hat-wearing crackpots" can suffer the consequences of not using it.

mallard wrote:
Brendan wrote:
That's like saying that one computer somewhere might not have a network card, so no OS should ever support any networking. An OS should support SMART (in case the hardware does provide the information) and the OS should at least make use of any standardised data that is provided (and should probably also have optional modules or something, to be able to support any "vendor specific" data too).


You've missed the point. SMART data is useless in the general case. All it gives you is (up to) 255 numbers with no standard defined meaning. While there is a "common" list of definitions for some of those numbers, you have no way of confirming whether a particular device is using those definitions. Even if the value that "commonly" means "Reported Uncorrectable Errors" is skyrocketing, you cannot be certain that that is what it actually means. Sure, err on the side of caution and display a warning, but unless you have the resources to build a database of every known storage device and the actual meanings of the SMART data, that's about all you can do.


All of the important attributes are used the same way by all manufacturers, and you're claiming that SMART is always 100% useless because of a few obscure minor/rare/irrelevant corner cases.

mallard wrote:
Brendan wrote:
Note that tsdnz is targeting "cloud" - a large number of mostly unsupervised computers containing "enterprise class" hardware

Which means RAID controllers that almost certainly don't report SMART data (since it's impossible to do correctly; how do you report SMART data from an individual disk in an array that's presented to the OS as one volume?).


It might mean RAID controllers (and in that case there's no point writing an AHCI driver at all). It might also mean there's a full-blown SAN and the SATA drive is only used for booting the computer (up until it's able to get files from the network/SAN), and that if a computer fails to boot for any reason (e.g. hard drive failed) it switches to an alternative computer instead (fail-over). It might mean that the local hard drive is only used as cache (to reduce network traffic), where failure just means worse performance (and doesn't mean data loss).

mallard wrote:
Brendan wrote:
Handling all possible errors that occur is 100% possible


Nope. As I've said, you can't even with 100% certainty say whether an error even exists or whether the SMART value you're assuming indicates an error actually means something different on this particular device. Therefore, handling "all possible errors" falls flat at the first hurdle.


Like I already said, SMART has nothing to do with errors that have occurred (it's about predicting errors that have not occurred yet).

You send a command to the drive (e.g. asking it to read or write some data), and then you either get a time-out (drive didn't respond) or status saying if the command worked or failed. This is how you find out there's an error. You can handle 100% of these errors, even if you don't support SMART, and even if SMART doesn't exist.

mallard wrote:
Brendan wrote:
The low level driver should keep track of pending requests; so that NCQ can work (and be abstracted), and to reduce latency and improve performance (avoid the additional time/overhead of negotiating with a higher level layer every single time one transfer completes and you want to start the next transfer). The low level driver needs to know about IO priorities to manage pending requests properly. The low level driver also needs to provide/uphold some sort of synchronisation model to make safe write ordering possible. You cannot shift it into a higher level (e.g. file system) unless you want to completely ruin performance.


At the hardware driver level, all you need is functionality like (but not limited to) "queue a read/write with this priority" and "flush all hardware caches in preparation for shutdown/disconnect" and some way of notifying the caller when their read/write has completed. A cancellation system is probably useful too. The hardware driver need know nothing about software-based caching, software command queuing (used when the hardware queue is at maximum length), process/thread I/O priorities (or even the existence of processes/threads, apart from the need to ensure data is read/written from the correct address space), etc.


The problem is that a file system says "write W, X, Y and Z; but make sure that W is written successfully before you write Y (because Y is the metadata for W), and make sure that X is written successfully before you write before Z (because Z is metadata for Y); but to improve performance feel free to write W and Y in any order you like, and feel free to write X and Z in any order you like".

At a minimum this requires a "global sync"; where the file system splits it into "write W and Y in any order; then sync" followed by "write X and Z in any order; then sync;". Global sync is nasty (especially when multiple things are using different partitions) because it reduces the amount of "in flight requests" that the hardware driver is aware of and reduces the amount of optimisation the driver can do, and because it's much harder to get the file system code right.

Better is to attach dependencies to each request - e.g. "write W (depends on nothing), X (depends on W), Y (depends on nothing) and Z (depends on Y)" and let the hardware driver optimise the order; partly because this is much more natural for file system code to deal with (e.g. "update the data for file foo.txt then the directory entry for foo.txt" plus "update the data for bar.txt and the directory entry for bar.txt", and let the hardware driver do both groups in any order and/or in parallel because it's not hampered by "global sync").
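
A toy sketch of those dependency-tagged requests (invented names), using the same W/X/Y/Z example:

Code:
#include <stdbool.h>
#include <stdio.h>

struct req {
    const char *name;
    int depends_on;                 /* index of prerequisite, or -1 */
    bool done;
};

static struct req reqs[] = {
    { "W", -1, false },             /* data for foo.txt */
    { "X",  0, false },             /* metadata for W   */
    { "Y", -1, false },             /* data for bar.txt */
    { "Z",  2, false },             /* metadata for Y   */
};
#define NREQS (sizeof reqs / sizeof reqs[0])

static bool dispatchable(unsigned i)
{
    int d = reqs[i].depends_on;
    return !reqs[i].done && (d < 0 || reqs[d].done);
}

int main(void)
{
    /* The driver may pick any legal order; here: first pass writes W and Y,
       second pass writes X and Z. No global sync anywhere. */
    for (unsigned left = NREQS; left != 0; )
        for (unsigned i = 0; i < NREQS; i++)
            if (dispatchable(i)) {
                printf("write %s\n", reqs[i].name);
                reqs[i].done = true;
                left--;
            }
    return 0;
}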


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 8:06 am 
Joined: Tue May 13, 2014 3:02 am
Posts: 280
Location: Private, UK
Brendan wrote:
Event logs are always stored in memory (and then synced to disk) to avoid lots of tiny writes.

For a "mostly intended for a single local user" OS, you'd add the event to the event log in memory, then (later) try to update the copy on disk; and in the rare and relatively irrelevant case that this happens to also be the disk that was removed (and there's no RAID or any other redundancy) it won't work, but it will work for all of the normal cases that matter (e.g. it wasn't the OS's primary/only storage device) and therefore it's highly important (despite not working properly for your irrelevant case).


Having the primary storage device fail is not "irrelevant". If any storage device is going to fail, that's by far the most likely one. SSDs have a common, nasty failure mode where they simply "cease to exist" without warning, indistinguishable from an unplugging. Apart from that, booting from USB is very common these days, especially when people are "trying out" a new OS. In those cases, your event log has nowhere to go.

Brendan wrote:
For a "peer to peer distributed" OS (like mine), there is no "primary/only storage device" - there's just local copies/caches of a (logical, not physical) global file system.


How does one first set up this OS? Surely there'll be a state during the early set-up process where the "distributed" OS only has one node, before others join the "cluster"? What happens if one node loses connectivity? Does it have a "degraded" state where it might still be generating data that's not (at that time) being replicated anywhere?

Brendan wrote:
For a "cloud OS" (like the original poster's), there's probably some kind of "management computer" on the network


I believe I covered that case when I said that you could send your event log out over the network. However, the "infrastructure" problem still exists, since you have to keep the entire network stack, configuration, "logging daemon", etc. in non-swappable memory at all times. If you've got plenty of memory and have written efficient code, that might not be too much of a problem, but it's an awful lot of stuff to "special case".

Brendan wrote:
Almost all of the servers here have "hot plug drive bays" in the front of their cases. Most of them are SATA (the oldest ones are some sort of "parallel SCSI").


For servers, you have nothing to worry about; no competent server administrator is going to unplug a storage device without first "unmounting" it (or they won't do it twice at least!). Many such bays even have electromechanical locks to prevent it.

Hot-unplugging at a bad time is only a serious concern for user systems.

Brendan wrote:
From those benchmarks (the difference between "no TRIM,used state" and "TRIM enabled, used state", for every single graph on that page) it's obvious that TRIM improves performance by about 10%.


The actual percentage improvement for each graph is: 9.7%, 8.8%, 2.3% and 1.4%. The first two are maybe "about 10%", but the other two are far smaller. While averaging percentages is not really mathematically sensible, the overall improvement is probably around or below 5%. Considering the downsides of TRIM, it's barely worthwhile. Spending an extra $10 on a slightly more expensive SSD will likely give better results than messing around with TRIM.

Brendan wrote:
the terrorists


What "terrorists"? I specifically mentioned "authorities". Your statements about security/privacy make it pretty clear that no sensible person should ever trust your code.

Brendan wrote:
deluded "tin-foil-hat-wearing crackpots"


There's very little reason not to have your filesystem encrypted in 2017. Unless you really need absolute maximum performance (the performance hit of a well-implemented encryption layer is of the order of 1-2%) or have critical data that's not backed up (in which case, you're an idiot), there's no benefit to storing data in plaintext.

Brendan wrote:
you're claiming that SMART is always 100% useless


I specifically said it's best to "err on the side of caution and display a warning". That's explicitly not claiming that it's "always 100% useless". Don't put words in my mouth (on my fingers?). It's a nice idea and a useful indicator that something might be wrong. Beyond that it's "half a standard" and impossible to get "concrete answers" from.

Brendan wrote:
Like I already said, SMART has nothing to do with errors that have occurred (it's about predicting errors that have not occurred yet).


You already said? When? Here's what you actually said, for reference:

Brendan wrote:
handles SMART and all possible kinds of errors and drive failures


If a failure has already occurred, it's by definition too late to "handle" it. Telling a user "your hard drive has failed" isn't "handling" anything; it's just admitting (inevitable) defeat.

Brendan wrote:
The problem is that a file system says "write W, X, Y and Z; but make sure that W is written successfully before you write Y (because Y is the metadata for W), and make sure that X is written successfully before you write before Z (because Z is metadata for Y); but to improve performance feel free to write W and Y in any order you like, and feel free to write X and Z in any order you like".


There's a mistake here; "free to write W and Y in any order you like" contradicts "make sure that W is written successfully before you write Y"... I think you've swapped X and Y in the second half of the paragraph; "X is written successfully before you write before Z (because Z is metadata for Y)".

I believe a priority system handles this; you simply set the "data" writes to be a higher priority than the "metadata" writes. Both "data" writes have the same priority (so can be written in the most performant order), as do both "metadata" writes.

You do end up with an overly-strict ordering in which all queued "data" writes get written before any "metadata" writes. To solve the potential problem that in a busy system "metadata" writes may never happen, you can have a system that increases the priority of writes based on the number of "queue cycles" they've been waiting for. Assuming that you never issue a write to metadata before the write to data (i.e. it's never been waiting for more queue cycles than its corresponding data write), the ordering requirements still hold.

Simple, straightforward and no need for "global sync" or an overly complicated separation-of-concerns violating hardware driver.
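
A tiny sketch of that aging rule (invented numbers): a metadata write issued at priority 5 eventually overtakes the stream of fresh priority-10 data writes:

Code:
#include <stdio.h>

int main(void)
{
    int metadata_priority = 5;      /* issued once, then left waiting */
    const int data_priority = 10;   /* a fresh data write every cycle */

    for (int cycle = 1; ; cycle++) {
        if (metadata_priority > data_priority) {
            printf("cycle %d: metadata write dispatched at last\n", cycle);
            break;
        }
        printf("cycle %d: fresh data write dispatched\n", cycle);
        metadata_priority++;        /* aging: +1 per queue cycle waited */
    }
    return 0;
}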



 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 10:44 am 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

mallard wrote:
Brendan wrote:
Event logs are always stored in memory (and then synced to disk) to avoid lots of tiny writes.

For a "mostly intended for a single local user" OS, you'd add the event to the event log in memory, then (later) try to update the copy on disk; and in the rare and relatively irrelevant case that this happens to also be the disk that was removed (and there's no RAID or any other redundancy) it won't work, but it will work for all of the normal cases that matter (e.g. it wasn't the OS's primary/only storage device) and therefore it's highly important (despite not working properly for your irrelevant case).


Having the primary storage device fail is not "irrelevant". If any storage device is going to fail, that's by far the most likely one. SSDs have a common, nasty failure mode where they simply "cease to exist" without warning, indistinguishable from an unplugging. Apart from that, booting from USB is very common these days, especially when people are "trying out" a new OS. In those cases, your event log has nowhere to go.


This part of the conversation is about unexpected hot-unplug under load. I don't think any sane person can reasonably expect an OS to handle "end user ripped out the drive containing the OS's primary/only partition while the OS is running and under load" gracefully, and I don't think any sane end user would need an event log to help them figure out what went wrong in this case.

mallard wrote:
Brendan wrote:
For a "peer to peer distributed" OS (like mine), there is no "primary/only storage device" - there's just local copies/caches of a (logical, not physical) global file system.


How does one first set up this OS? Surely there'll be a state during the early set-up process where the "distributed" OS only has one node, before others join the "cluster"? What happens if one node loses connectivity? Does it have a "degraded" state where it might still be generating data that's not (at that time) being replicated anywhere?


If I answered these questions it'd drag the conversation even further off topic; and I know that the only reason you're asking is because you want to focus on rare corner case(s) in an attempt to ignore everything that actually matters (in the same way that you've focused on rare corner cases in an attempt to ignore the fact that TRIM is useful, and in the same way that you've focused on rare corner cases in an attempt to ignore the fact that SMART is useful).

mallard wrote:
Brendan wrote:
For a "cloud OS" (like the original poster's), there's probably some kind of "management computer" on the network


I believe I covered that case when I said that you could send your event log out over the network. However, the "infrastructure" problem still exists, since you have to keep the entire network stack, configuration, "logging daemon", etc. in non-swappable memory at all times. If you've got plenty of memory and have written efficient code, that might not be too much of a problem, but it's an awful lot of stuff to "special case".


For a commercial quality modern OS (typically millions of lines of code), setting a "don't swap" flag on 3 things is like a snowflake sitting on top of an iceberg.

mallard wrote:
Brendan wrote:
Almost all of the servers here have "hot plug drive bays" in the front of their cases. Most of them are SATA (the oldest ones are some sort of "parallel SCSI").


For servers, you have nothing to worry about; no competent server administrator is going to unplug a storage device without first "unmounting" it (or they won't do it twice at least!). Many such bays even have electromechanical locks to prevent it.


These drive bays are mostly designed for (software or hardware) RAID; where there's no reason to unmount a file system before removing a drive. None of my servers have electromechanical locks.

mallard wrote:
Hot-unplugging at a bad time is only a serious concern for user systems.


Code:
Concern = probability it will happen * severity of consequences if it happens


For user systems the probability is high and the severity is low. For servers the probability is low and the severity is high. It's a concern either way.

mallard wrote:
Brendan wrote:
the terrorists


What "terrorists"? I specifically mentioned "authorities". Your statements about security/privacy make it pretty clear that no sensible person should ever trust your code.


If you're really worried that men in black suits are going to confiscate your computer and take it back to their expensive lab for analysis, you can still use TRIM, but create a large dummy file (e.g. copy a few hundred MiB from "/dev/random") to trick them into thinking you are guilty of having a pirated copy of "Finding Nemo" on the drive.

mallard wrote:
There's very little reason not to have your filesystem encrypted in 2017. Unless you really need absolute maximum performance (the performance hit of a well-implemented encryption layer is of the order of 1-2%) or have critical data that's not backed up (in which case, you're an idiot), there's no benefit to storing data in plaintext.


What is the data? Do you think (e.g.) Wikipedia encrypts all of the (publicly available and publicly editable) content for their web site (in case "The Authorities" want to spend millions of $$ to analyse Wikipedia's hard drives, to find out who Edmund Leslie Newcombe was)?

Some data matters (e.g. bank account passwords), some data does not (e.g. how much free space you have), and the amount of security needs to be appropriate (not "too little security for something that needs more", and not "too much security for something that doesn't need it").

Note: For my computers (which do contain things like bank account passwords), I have never bothered with encryption. I don't even bother with log-in passwords for the Windows machines (they boot straight to "admin logged in" without any prompt). Instead I use physical security (if you can get into my computer room without significant blood loss, it's because I trust you ;) ).

mallard wrote:
Brendan wrote:
Like I already said, SMART has nothing to do with errors that have occurred (it's about predicting errors that have not occurred yet).


You already said? When?


From here (emphasis added):
"Handling all possible errors that occur is 100% possible; even if "handling" just means adding something to an event log and telling other parts of the OS (file systems, etc) that something bad happened. This has nothing to do with SMART (which is mostly about predicting failures before they occur so that you can prevent problems from occurring)."

mallard wrote:
Here's what you actually said, for reference:

Brendan wrote:
handles SMART and all possible kinds of errors and drive failures


You've deliberately taken a tiny piece out of context. The original was:
  • handles SMART and all possible kinds of errors and drive failures (including communicating with some kind of hardware monitoring tool for administrators, including "early warning that drive will fail soon")

If you struggle with English, this can be rephrased more clearly as:
  • handles SMART; and also handles all possible kinds of errors; and also handles drive failures

It can also be re-ordered without changing its meaning, like this:
  • handles drive failures; and also handles all possible kinds of errors; and also handles SMART

mallard wrote:
If a failure has already occurred, it's by definition too late to "handle" it. Telling a user "your hard drive has failed" isn't "handling" anything; it's just admitting (inevitable) defeat.


Wrong. Handling an error means things like:
  • Attempting to work around it (retries, etc); and/or
  • Informing user/admin (either directly or via logging) what happened and providing them with any relevant information they might want; and/or
  • Notifying other code (file system, swap manager, RAID layer, ...) so that they can do whatever makes sense for them; and/or
  • Confining the problem (e.g. making sure other storage devices that the driver is responsible for aren't affected)

For comparison; not handling errors means that (e.g.) an application calls the "read()" C function and locks up and nobody can figure out why (because the AHCI driver got an error from the controller and didn't tell the file system about the error, so the file system didn't tell the scheduler to unblock the task or tell the C library, and the C library never returns with "errno" set to indicate there was a problem).
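
A condensed sketch of that propagation chain (invented names; the real path involves IPC and scheduling, of course):

Code:
#include <errno.h>
#include <stdio.h>

static int ahci_read(void)          /* pretend the controller errored */
{
    return -EIO;                    /* report it instead of going silent */
}

static int fs_read(void)            /* filesystem passes the error along */
{
    return ahci_read();
}

static long my_read(void)           /* libc-style wrapper sets errno */
{
    int rc = fs_read();
    if (rc < 0) { errno = -rc; return -1; }
    return 0;
}

int main(void)
{
    if (my_read() < 0)
        perror("read");             /* prints "read: Input/output error" */
    return 0;
}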

mallard wrote:
Brendan wrote:
The problem is that a file system says "write W, X, Y and Z; but make sure that W is written successfully before you write Y (because Y is the metadata for W), and make sure that X is written successfully before you write before Z (because Z is metadata for Y); but to improve performance feel free to write W and Y in any order you like, and feel free to write X and Z in any order you like".


There's a mistake here; "free to write W and Y in any order you like" contradicts "make sure that W is written successfully before you write Y"... I think you've swapped X and Y in the second half of the paragraph; "X is written successfully before you write before Z (because Z is metadata for Y)".


You're right (typo - sorry).

mallard wrote:
I believe a priority system handles this; you simply set the "data" writes to be a higher priority than the "metadata" writes. Both "data" writes have the same priority (so can be written in the most performant order), as do both "metadata" writes.


Imagine that the file system noticed that a process is reading a large file in sequential order and decides to prefetch the next part of the file in the background ("IO priority = very low"); but the prefetching is too slow, so after a little while the process needs that data immediately, and the file system either changes the original request to "priority = high" or cancels the original request and issues a new request for the same data (with "priority = high").

While this is happening 1234 other processes (and the kernel's swap space code and whatever else) are hammering the daylights out of the same disk drive.

Now see if you can explain how a priority system would handle synchronising hundreds of requests that are "in flight" while also still being useful for determining how important each request is (and making sure "high priority swap partition read" happens before "low priority metadata write").


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 12:11 pm 
Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
Brendan wrote:
mallard wrote:
Except on very early model SSDs, TRIM varies from useless to dangerous. Competent SSDs' internal garbage-collection routines are far more capable, and TRIM often has a severe and unpredictable performance penalty (especially on devices that only support the non-queued version). In the best devices it's simply a no-op and in the worst it's buggy and corrupts your filesystem. Unless you've got the resources to test every SSD (family) on the market to work out the small minority of devices where it's correctly implemented and actually improves performance, it's best avoided.


No; you're misinformed.

It's impossible for an SSD to know that a block is no longer being used unless the OS tells it that the block is no longer in use (via TRIM); and impossible for an SSD to do any garbage collection unless the OS uses TRIM.

The theory that you've misunderstood comes from a performance hack. The idea is that instead of using TRIM as soon as a block is no longer in use, it's faster for an OS to wait in the hope that the block can be reused/reallocated; and if the block is reused/reallocated then the TRIM can be skipped (and therefore performance can be improved). However, excessive use of this tactic ruins SSD performance for no reason (the number of blocks that aren't used and aren't TRIMed increases over time until the SSD thinks there are no free blocks at all, effectively crippling the SSD's ability to manage its blocks properly).

Can you elaborate on that "skipping TRIM improves performance" part? The whole point of TRIM is to improve performance. AFAIK some FS's don't do immediate TRIM either due to the feature lacking in the FS design or because they've decided it's better for performance to do it in batches, possibly during idle time. AFAIK some SSD models took way too long to complete the TRIM commands.

As for the "it's _impossible_ for the SSD to know which blocks aren't in use" claim, that's not true. The SSD doesn't know which _LBAs_ are in use, but which physical blocks are in use it can easily know. For instance, the SSD can have 10% of the storage reserved and not exposed (e.g. a 550GiB SSD is labeled and sold as 500GiB); when the OS writes to some block, a new block is taken from the reserved area, and thus the old block is now _known_ to be free and can be moved into the reserved pool.

Of course if this feature of the SSD is good enough then it might make TRIM irrelevant, because the SSD can do it internally whenever it pleases, though it does add a bit of extra cost.
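
A toy model of that remap-on-write (invented names and numbers):

Code:
#include <stdio.h>

#define NLBA 4                      /* advertised capacity, in blocks */

static int map[NLBA]  = { 0, 1, 2, 3 };   /* LBA -> physical block */
static int freelist[] = { 4, 5 };         /* over-provisioned pool */
static int nfree      = 2;

static void ftl_write(int lba)
{
    int old = map[lba];
    map[lba] = freelist[--nfree];   /* fresh block from the reserved pool */
    printf("lba %d now at phys %d; phys %d is known garbage, erasable\n",
           lba, map[lba], old);
    /* after erasure, 'old' would go back onto the freelist */
}

int main(void)
{
    ftl_write(2);                   /* no TRIM involved anywhere */
    return 0;
}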


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 12:22 pm 
Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
mallard wrote:
Brendan wrote:
adding it to some kind of event log for administrators


How are you going to do that when you've just lost your primary/only storage device?

Umm, I'd consider the RAM to be more "primary" than the disk; however, it's volatile. So you can easily log it and keep it in memory, which means the users can actually know what's wrong, instead of getting a BSOD.

mallard wrote:
Brendan wrote:
There's no way for an OS to determine if a SATA device is "user unpluggable" on its own.


That's true. However, eSATA usage is fairly uncommon; the overwhelming majority of "user unpluggable" devices will be USB (iSCSI should probably also be considered "user unpluggable", since network outages happen). That means a simple heuristic like "has the OS ever seen this device being hot-plugged?" will cover 99.99% of SATA devices. The performance penalty of treating every device as though it might disappear at any moment is too severe to be a viable option anywhere it isn't absolutely necessary.

I thought "normal" SATA also allows hotswap, so this isn't about eSATA. And I'd expect a _good_ OS to support hotswap, including allowing me to swap a broken (or soon to be broken according to SMART) to be swapped without issues.

So in practice for me, all the disks might have never been hot-plugged, and once one is reported to have issues I will hotswap it, so your heuristic is completely wrong. I'm never going to hotswap unless there's a reason to do so, which will easily mislead your heuristic. Also I dislike non-deterministic operation from an OS, the OS must support hotswap.

Besides, there's no real performance penalty.

mallard wrote:
Brendan wrote:
Note that tsdnz is targeting "cloud" - a large number of mostly unsupervised computers containing "enterprise class" hardware

Which means RAID controllers that almost certainly don't report SMART data (since it's impossible to do correctly; how do you report SMART data from an individual disk in an array that's presented to the OS as one volume?).

I'm not a huge fan of HW RAID, it adds extra issues to RAID, so I prefer SW RAID in most cases. With SW RAID the SMART info can be used.

Note also that I'm not necessarily planning on using RAID, but rather some type of distributed block storage over multiple disks, with redundancy. Which makes HW RAID useless for me.


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 12:36 pm 
Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
mallard wrote:
Brendan wrote:
The problem is that a file system says "write W, X, Y and Z; but make sure that W is written successfully before you write Y (because Y is the metadata for W), and make sure that X is written successfully before you write before Z (because Z is metadata for Y); but to improve performance feel free to write W and Y in any order you like, and feel free to write X and Z in any order you like".

I believe a priority system handles this; you simply set the "data" writes to be a higher priority than the "metadata" writes. Both "data" writes have the same priority (so can be written in the most performant order), as do both "metadata" writes.


You can exploit the priority system to deal with dependencies, but I'd avoid that. You're conflating two separate things.

For HDDs, the driver might opportunistically write lower-priority stuff first if it won't impact higher-priority latency. Similar to process priorities: a higher-priority task may be blocked, and thus a lower-priority one is allowed to run. Priorities aren't dependencies and shouldn't be treated as such.


 Post subject: Re: SAS HDD Drive
PostPosted: Fri Aug 18, 2017 12:49 pm 
Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
Brendan wrote:
mallard wrote:
There's very little reason not to have your filesystem encrypted in 2017. Unless you really need absolute maximum performance (the performance hit of a well-implemented encryption layer is of the order of 1-2%) or have critical data that's not backed up (in which case, you're an idiot), there's no benefit to storing data in plaintext.


What is the data? Do you think (e.g.) Wikipedia encrypts all of the (publicly available and publicly editable) content for their web site (in case "The Authorities" want to spend millions of $$ to analyse Wikipedia's hard drives, to find out who Edmund Leslie Newcombe was)?

Sure, don't encrypt irrelevant data, or publicly available data. But in the real world, how do you make that assessment? Making that assessment is likely more expensive than the 1-2% performance hit of encrypting everything.

I don't know a single large company that doesn't by policy encrypt every single laptop, no matter how mundane the info. And AFAIK they all also encrypt all smart phones if there's any connection to the company (eg. email).

Also, as an aside, Wikipedia does encrypt all communication these days. I don't really see any reason not to encrypt...


 Post subject: Re: SAS HDD Drive
PostPosted: Sat Aug 19, 2017 2:05 am 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

LtG wrote:
Brendan wrote:
It's impossible for an SSD to know that a block is no longer being used unless the OS tells it that the block is no longer in use (via TRIM); and impossible for an SSD to do any garbage collection unless the OS uses TRIM.

The theory that you've misunderstood comes from a performance hack. The idea is that instead of using TRIM as soon as a block is no longer in use, it's faster for an OS to wait in the hope that the block can be reused/reallocated; and if the block is reused/reallocated then the TRIM can be skipped (and therefore performance can be improved). However, excessive use of this tactic ruins SSD performance for no reason (the number of blocks that aren't used and aren't TRIMed increases over time until the SSD thinks there are no free blocks at all, effectively crippling the SSD's ability to manage its blocks properly).

Can you elaborate on that "skipping TRIM improves performance" part? The whole point of TRIM is to improve performance. AFAIK some FS's don't do immediate TRIM either due to the feature lacking in the FS design or because they've decided it's better for performance to do it in batches, possibly during idle time. AFAIK some SSD models took way too long to complete the TRIM commands.


Assume that each block is in one of 3 states:
  • Used by file system
  • Not used by file system (but still used as far as SSD knows)
  • Free

Now build a state machine diagram:

Code:
                                 __________
     ______                     |          |            ______
    |      |---(dealloc)------->| NOT USED |--(TRIM)-->|      |
    |      |<--(alloc)----------|__________|           |      |
    | USED |                                           | FREE |
    |      |---------------(dealloc+TRIM)------------->|      |
    |______|<--------------(alloc+write)---------------|      |
                                                       |______|
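
The same diagram as code, if that's easier to follow (a trivial sketch; the names are made up):

Code:
/* The state machine above; events mirror the diagram's edge labels. */
enum block_state { USED, NOT_USED, FREE };
enum block_event { ALLOC, DEALLOC, TRIM, DEALLOC_TRIM, ALLOC_WRITE };

enum block_state next_state(enum block_state s, enum block_event e)
{
    switch (s) {
    case USED:     if (e == DEALLOC)      return NOT_USED;
                   if (e == DEALLOC_TRIM) return FREE;
                   break;
    case NOT_USED: if (e == ALLOC)        return USED;
                   if (e == TRIM)         return FREE;
                   break;
    case FREE:     if (e == ALLOC_WRITE)  return USED;
                   break;
    }
    return s;   /* no transition defined for this event */
}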


This creates 3 possible strategies:
  • TRIM never used. After each block has been used once, blocks only move between the "used" and "not used" states. Bad for performance and worse for wear levelling, because the SSD (eventually) has far fewer free blocks to work with.
  • TRIM always used on dealloc. Blocks only ever move between the "used" and "free" states. Not great for performance because of the overhead of a TRIM on every dealloc.
  • TRIM postponed somehow. In this case you avoid the "SSD has no free blocks left" problem, and also avoid the overhead of a TRIM whenever a block goes straight from "not used" back to "used". As far as I know all modern OSs use a variation of this (a sketch follows below).
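
For the third strategy, the bookkeeping can be as simple as this (an untested sketch; all names are made up, and a real file system would track extents rather than single blocks):

Code:
/* Postponed TRIM: freed blocks are parked in a "pending" set;
   reallocation removes them (skipping the TRIM entirely) and an
   idle-time task drains whatever is left. */
#include <stdint.h>
#include <stdbool.h>

#define MAX_BLOCKS 12000

static bool pending_trim[MAX_BLOCKS];  /* "not used", not TRIMed yet */

void block_freed(uint32_t lba)
{
    pending_trim[lba] = true;          /* defer the TRIM */
}

void block_reallocated(uint32_t lba)
{
    pending_trim[lba] = false;         /* reused: TRIM skipped entirely */
}

/* Called when the disk queue is idle: TRIM up to 'budget' pending
   blocks, so the "not used" set can't grow without bound. */
void idle_trim(uint32_t budget, void (*send_trim)(uint32_t lba))
{
    for (uint32_t lba = 0; lba < MAX_BLOCKS && budget > 0; lba++) {
        if (pending_trim[lba]) {
            send_trim(lba);
            pending_trim[lba] = false;
            budget--;
        }
    }
}

The win is in block_reallocated(): every block that gets reused before the idle task reaches it costs zero TRIM commands.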

LtG wrote:
As for the "it's _impossible_ for the SSD to know which blocks aren't in use" part, that's not true. The SSD doesn't know which _LBAs_ are in use, but physical blocks it can easily track. For instance, the SSD can have 10% of the storage reserved and not exposed (e.g. a 550GiB SSD is labeled and sold as 500GiB); when the OS writes to some block, a new block is taken from the reserved area, so the old block is then _known_ to be free and can be moved into the reserved pool.

Of course, if this feature of the SSD is good enough then it might make TRIM irrelevant, because the SSD can do it internally whenever it pleases, though it does add a bit of extra cost.


What I meant is that it's impossible for the SSD to know the difference between the "used" state and the "not used" state. As far as the SSD is concerned there are only 2 states ("used or not used" and "free").

Now imagine a device with 12345 blocks, that advertises itself as 12000 blocks and uses the remaining 345 blocks for wear levelling. If there are 6000 blocks in the "used" state and 6345 blocks in the "free" state (and no blocks in the "not used" state because the OS always does TRIM immediately), the SSD can spread wear across any of those 6345 blocks. If there are 6000 blocks in the "used" state, 6000 blocks in the "not used" state (because the OS never uses TRIM) and 345 blocks in the "free" state, the SSD can only spread wear across 345 blocks. Of course after a while (as blocks fail) the number of blocks available shrinks further.

This is why I doubt TRIM will ever become obsolete (at least not while wear levelling is still needed) - without TRIM, manufacturers would have to increase the number of spare blocks (and the cost of the device) to achieve the same longevity at the same "advertised capacity".
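
Restating those numbers as a quick sanity check (a hypothetical helper, same figures as above):

Code:
/* The wear-levelling pool is everything the SSD knows to be free. */
#include <stdio.h>

#define PHYSICAL   12345   /* blocks actually present    */
#define ADVERTISED 12000   /* blocks visible to the host */

int main(void)
{
    int used = 6000;

    /* OS TRIMs immediately: everything not "used" is available. */
    int pool_with_trim = PHYSICAL - used;              /* 6345 */

    /* OS never TRIMs: once the whole advertised capacity has been
       written, only the spare blocks remain in the pool.         */
    int pool_without_trim = PHYSICAL - ADVERTISED;     /*  345 */

    printf("wear pool with immediate TRIM: %d\n", pool_with_trim);
    printf("wear pool with no TRIM:        %d\n", pool_without_trim);
    return 0;
}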


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: SAS HDD Drive
PostPosted: Sat Aug 19, 2017 2:32 am 
Offline
Member
Member

Joined: Sun Jun 16, 2013 4:09 am
Posts: 333
Hi all. Almost feels like the topic has gone off course a little. Very funny. LOL

The SATA drive turned up.
Easy to swap in the server.
Set up the RAID controller with one Virtual Drive, using just the newly installed SATA disk.
Reboot the server.
Check the PCI list. Fingers half crossed.
NO SATA, bugger.

Looks like I am going to have to find a SATA controller that adds the drives to the PCI list.

Ali


 Post subject: Re: SAS HDD Drive
PostPosted: Sat Aug 19, 2017 9:57 am 
Offline
Member
Member

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5137
tsdnz wrote:
Looks like I am going to have to find a SATA controller that add's the drives to the PCI list.

Doesn't your server have a built-in SATA controller? That should work fine, as long as it's correctly enabled in the BIOS.

And what the heck is a "PCI list"?


 Post subject: Re: SAS HDD Drive
PostPosted: Sat Aug 19, 2017 10:22 am 
Offline
Member
Member
User avatar

Joined: Sun Jul 14, 2013 6:01 pm
Posts: 442
Octocontrabass wrote:
And what the heck is a "PCI list"?


http://wiki.osdev.org/PCI

Quote:
PCI Device Structure
The PCI Specification defines the organization of the 256-byte Configuration Space registers and imposes a specific template for the space. Figures 2 & 3 show the layout of the 256-byte Configuration Space. All PCI compliant devices must support the Vendor ID, Device ID, Command and Status, Revision ID, Class Code and Header Type fields. Implementation of the other registers is optional, depending upon the device's functionality.


The following field descriptions are common to all Header Types:

Device ID: Identifies the particular device; valid IDs are allocated by the vendor.
Vendor ID: Identifies the manufacturer of the device; valid IDs are allocated by PCI-SIG (the list is here) to ensure uniqueness. 0xFFFF is an invalid value that will be returned on read accesses to Configuration Space registers of non-existent devices.


Type lspci to see the devices on your computer. Typically you will see something like this:

Quote:
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] RS880 Host Bridge
00:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780 PCI to PCI bridge (ext gfx port 0)
00:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 4)
00:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 5)
00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40)
00:12.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
00:12.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
00:13.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 SMBus Controller (rev 42)
00:14.1 IDE interface: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 IDE Controller (rev 40)
00:14.2 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 Azalia (Intel HDA) (rev 40)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
00:14.4 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 PCI to PCI Bridge (rev 40)
00:14.5 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
00:15.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0)
00:16.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
00:16.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor HyperTransport Configuration
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Address Map
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Miscellaneous Control
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Link Control
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Barts XT [Radeon HD 6870]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Barts HDMI Audio [Radeon HD 6800 Series]
02:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
03:00.0 Ethernet controller: Qualcomm Atheros AR8151 v2.0 Gigabit Ethernet (rev c0)
05:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6315 Series Firewire Controller (rev 01)


With this, you can see what hardware you have, and your drivers can handle the supported devices.
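
For what it's worth, finding a SATA controller doesn't depend on any firmware-provided list; you can scan configuration space yourself. A minimal sketch using configuration mechanism #1 (outl()/inl() are assumed to be your kernel's port I/O wrappers; the offsets and class codes are from the PCI spec):

Code:
/* Brute-force PCI scan for a SATA controller (class 0x01, subclass 0x06). */
#include <stdint.h>

extern void     outl(uint16_t port, uint32_t value);  /* your kernel's */
extern uint32_t inl(uint16_t port);                   /* port I/O      */

static uint32_t pci_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11) | ((uint32_t)fn << 8)
                  | (off & 0xFC);
    outl(0xCF8, addr);                 /* CONFIG_ADDRESS */
    return inl(0xCFC);                 /* CONFIG_DATA    */
}

void find_sata(void (*found)(uint8_t bus, uint8_t dev, uint8_t fn))
{
    for (int bus = 0; bus < 256; bus++)
        for (int dev = 0; dev < 32; dev++)
            for (int fn = 0; fn < 8; fn++) {
                if ((pci_read(bus, dev, fn, 0x00) & 0xFFFF) == 0xFFFF)
                    continue;          /* no device at this function */
                uint32_t cc = pci_read(bus, dev, fn, 0x08);
                if ((cc >> 24) == 0x01 && ((cc >> 16) & 0xFF) == 0x06)
                    found(bus, dev, fn);
            }
}

Note that if the drive sits behind a RAID controller, that controller usually reports subclass 0x04 (RAID) rather than 0x06, which may be exactly why no SATA controller shows up in the list.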

_________________
Operating system for SUBLEQ cpu architecture:
http://users.atw.hu/gerigeri/DawnOS/download.html

