Schol-R-LEA wrote:
Shall we start effusively praising ACPI now? 'Cos I would say that ACPI was made for this day.
I'm not a fan of ACPI at all; I'm just trying to point out that it's always easy to cry about things that you don't have to design yourself.
Hi Brendan,
Brendan wrote:
Korona wrote:
Also: How do you determine that your host does not contain legacy devices if there is no discovery mechanism for them?
Nobody is saying "have no static tables" or "have no discovery mechanism". We are only saying "have no AML".
Actually tom9876543 was saying just that: His premises were that interrupt routing should be static (as in "not configurable via software or firmware") and that PCI enumeration ought to be enough for everything. As such, most of my post was responding to that. I agree with most of your technical points; I'm just not sure I trust hardware companies to settle on a small set of generic standards for things like fans and power buttons.
Brendan wrote:
There is only ever one LPC chip and it doesn't make sense to have (e.g.) multiple pairs of PIC chips or multiple PIT chips or... Software has no reason to care where these devices are (you only ever need to care about the legacy IO ports that they used decades ago). The legacy devices do not have any power management, and are so slow and so simple that they consume almost no power and therefore will never need power management. By definition, the legacy devices cannot be changed (if you decided to change a legacy device it would not be a legacy device anymore). ACPI already has a simple present/not present flag for legacy devices and nothing else; so continuing to have a simple present/not present flag for legacy devices and nothing else cannot cause problems that don't already exist.
I'm not so sure that there is only ever one LPC chip in modern systems. I could imagine that e.g. ATA emulation and PIC emulation are configured via different PCI functions. ACPI does define power management (mostly binary on/off PM) for some of the legacy chips, though. You could of course define architectural PM for LPC chips to replace that without AML.
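For reference, the present/not-present information Brendan mentions lives in the FADT's "IA-PC Boot Architecture Flags" field, which an OS can check without touching AML at all. A minimal sketch (bit positions per the ACPI spec; the helper names are mine):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* IA-PC Boot Architecture flags from the FADT (ACPI spec, IAPC_BOOT_ARCH).
 * Bit positions are from the ACPI specification; helper names are made up. */
#define IAPC_LEGACY_DEVICES    (1u << 0) /* board has legacy (LPC/ISA) devices */
#define IAPC_8042              (1u << 1) /* 8042-compatible keyboard controller */
#define IAPC_VGA_NOT_PRESENT   (1u << 2)
#define IAPC_MSI_NOT_SUPPORTED (1u << 3)

static bool has_legacy_devices(uint16_t iapc_boot_arch)
{
    return (iapc_boot_arch & IAPC_LEGACY_DEVICES) != 0;
}

static bool has_8042(uint16_t iapc_boot_arch)
{
    return (iapc_boot_arch & IAPC_8042) != 0;
}
```

Nothing more than bit tests on a static table field, which is exactly the level of complexity Brendan is arguing for.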
Brendan wrote:
Nonsense. Without AML, hardware manufacturers would have had to standardise the hardware, and we'd have a tiny number of simple devices (and some more information in static tables) to deal with. We'd also be able to use some parts (e.g. fan control) without touching or caring about completely unrelated things (e.g. laptop battery levels), which is something that can't be done with ACPI (which is "all or nothing" - either you enable "ACPI mode" or you don't). This alone would make it significantly easier for small OSs, because you'd be able to add support for what you want when you want to (instead of having to deal with many MiB of bloat when the only thing you care about is power button state and nothing else).
That is true if hardware vendors actually settled on a small number of standards for PM controls.
Brendan wrote:
Could you imagine if CPU designers said "LOL, we couldn't be bothered having a standard for paging structures, use AML to setup page tables, etc"? It would be completely retarded (compared to having a small number of "standard paging structures" to support). In exactly the same way; (e.g.) "LOL, use AML for CPU speed control" is completely retarded now (compared to having a small number of "standard CPU speed control MSRs" to support), and (e.g.) "LOL, use AML for fan speed control" is completely retarded now (compared to having a small number of "standard fan controller devices" to support) and (e.g.) "LOL, use AML to turn the computer off" is completely retarded now (compared to having a small number of "standard power supply control devices" to support).
Agreed.
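For the CPU speed case, that standard already half-exists: on Intel CPUs the OS can request a P-state by writing IA32_PERF_CTL (MSR 0x199) directly, rather than invoking AML methods. A sketch of building the value (MSR number and encoding per the Intel SDM; the helper name is mine, and on recent cores the target ratio goes in bits [15:8]):

```c
#include <assert.h>
#include <stdint.h>

/* IA32_PERF_CTL (MSR 0x199) on Intel CPUs: the requested performance state
 * goes in bits [15:0], with the target bus ratio in bits [15:8] on recent
 * cores. Encoding per the Intel SDM; the helper name is made up. */
#define MSR_IA32_PERF_CTL 0x199u

static uint64_t perf_ctl_value(uint8_t target_ratio)
{
    return (uint64_t)target_ratio << 8;
}
/* The result would then be written with wrmsr(MSR_IA32_PERF_CTL, value). */
```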
Let's move on to tom9876543's post:
tom9876543 wrote:
Korona wrote:
MCFG is not sufficient to enable/disable/remap root complexes. Also: MCFG is not part of the PCI configuration space. It's actually an ACPI table, so I don't understand how that proves that ACPI is bloated. If your argument is "only the AML parts of ACPI are bad" then I already agreed with that.
Now you change the question from "how do you enumerate" to "how do you disable/enable/remap".
I would think this could be implemented by memory mapped io on the PCI Host Bridge.
I don't have to point out that this is a non sequitur, right? You didn't explain how enumeration should be done. Enabling/disabling/remapping are just additional problems that you have to solve; of course you have to explain enumeration first. Besides: Letting the host bridge configure itself seems like a bad idea, doesn't it?
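For what it's worth, once the MCFG table has handed you an ECAM base address, locating a function's configuration space is pure arithmetic: each bus gets 1 MiB, each device 32 KiB, each function 4 KiB. A sketch (layout per the PCI Express spec; the function name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* ECAM ("memory-mapped configuration space") address calculation.
 * Offsets per the PCI Express specification; the function name is made up. */
static uint64_t ecam_config_address(uint64_t ecam_base,
                                    uint8_t bus, uint8_t dev, uint8_t func,
                                    uint16_t offset)
{
    return ecam_base
         + ((uint64_t)bus  << 20)   /* 1 MiB per bus      */
         + ((uint64_t)dev  << 15)   /* 32 KiB per device  */
         + ((uint64_t)func << 12)   /* 4 KiB per function */
         + (offset & 0xFFF);
}
```

This is the easy part; the point above stands that enumeration of the root complexes themselves (i.e. where the ECAM windows are) still needs a table like MCFG to bootstrap it.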
tom9876543 wrote:
Korona wrote:
It's much easier to provide a proper configuration mechanism in software (i.e. the firmware/BIOS/EFI/whatever) than to make every hardware behave as a PCI device when it isn't actually a PCI device.
Is it "much easier"?? You haven't provided any evidence.
If it requires 100,000 extra transistors to make VT-d a hardware implementation with standard PCI discovery and simple memory mapped io, I would suggest that is a very beneficial tradeoff.
WTF? I'm not sure the entire VT-d implementation even needs 100,000 transistors in total. You certainly don't want to increase hardware complexity because some random OS developer is too lazy to support a proper discovery/configuration mechanism.
tom9876543 wrote:
Korona wrote:
Also: How do you handle memory or CPU hotplug? Do you pretend that memory DIMMs and CPUs are part of the PCI?
As I stated in a previous message, DIMMs have the SPD standard for identification. I'm not familiar with how exactly SPD works.
CPU hotplug should be managed by the PCI Host Bridge. The PCI Host Bridge can raise an interrupt when a CPU is removed / added.
PCI should have absolutely nothing to do with CPU hotplug. There is no reason to pretend that CPUs or other devices are PCI devices when they are not. PCI is a bus, not a generic device discovery/configuration mechanism. It does not make sense from a technical point of view, and it makes even less sense from a logical point of view.
tom9876543 wrote:
Korona wrote:
Also: How do you determine that your host does not contain legacy devices if there is no discovery mechanism for them?
The IBM PC/AT already has a discovery mechanism - it is the BIOS Data Area (BDA). The BDA (and/or EBDA) should be extended to provide a very simple table for legacy device discovery. I'm not sure how UEFI works, but it could have a "Retrieve Legacy BDA / EBDA" option for discovering legacy devices.
The EBDA is a joke as a configuration mechanism. You certainly need some sort of table that allows generic identifiers and resource records (e.g. decoded address and I/O space ranges) unless you want to be locked to PCI forever (until you replace your entire system bus device discovery/enumeration standard). Additionally you want some ability to relocate devices on the system bus, e.g. in case the BIOS fucks up and does not reserve enough address space for a bridge. And remember that the BIOS fucking up is not a corner case; in fact it's the norm.
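To make "generic identifiers plus resource records" concrete, one possible shape for such a table entry might look like the following. This is entirely hypothetical - no such standard exists - it just illustrates how little is needed to describe a legacy device without AML:

```c
#include <assert.h>
#include <stdint.h>

/* A hypothetical static-table entry for legacy device discovery.
 * Nothing like this is standardised; it is only an illustration. */
enum resource_type { RES_IO_PORT = 0, RES_MMIO = 1, RES_IRQ = 2 };

struct resource_record {
    uint8_t  type;   /* enum resource_type */
    uint64_t base;   /* port base / physical address / IRQ number */
    uint64_t length; /* length of the decoded range (0 for IRQs) */
};

struct legacy_device_entry {
    uint32_t device_id;      /* generic identifier, e.g. an ASCII tag */
    uint8_t  present;        /* 1 if the device exists on this board */
    uint8_t  resource_count;
    struct resource_record resources[4];
};

/* Example: an 8254 PIT at I/O ports 0x40-0x43 on IRQ 0. */
static const struct legacy_device_entry example_pit = {
    .device_id = 0x20544950u, /* "PIT " */
    .present = 1,
    .resource_count = 2,
    .resources = {
        { RES_IO_PORT, 0x40, 4 },
        { RES_IRQ,     0,    0 },
    },
};
```

A table of such entries, plus a writable "relocate" field where the hardware allows it, would cover the discovery and relocation concerns above without any interpreted bytecode.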
tom9876543 wrote:
I agree with Brendan's statements. In an ideal world almost everything would be managed by a standardised PCI interface.
Except that Brendan did not actually say that: Brendan was suggesting a sane configuration mechanism via extensible ACPI-like tables. Brendan wanted to get rid of AML, not of enumeration/configuration tables in general. And he didn't suggest that everything (processors, memory, whatever) should be on the PCI bus and be enumerated/configured via the PCI configuration mechanism.