OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: PCI Configuration Process
PostPosted: Thu Sep 09, 2021 6:37 am 

Joined: Tue Apr 03, 2018 2:44 am
Posts: 401
bloodline wrote:
During PCI Bus Enumeration I can find the gfx board which Qemu emulates, and I'm going to use that to test writing PCI based driver code. I've found the Tech Doc for the QEmu graphics chip (the Cirrus Logic GD5446), http://www.vgamuseum.info/index.php/cpu ... 42814695bc

On page 67, this helpfully (though vaguely) lists all the available hardware regs, but I'm confused as to how to access them. The table gives I/O Port addresses, some of which appear to match up with the PCI Configuration register address/offsets. I'm having trouble picturing how this works as I'm used to hardware registers just sitting in the normal address space.

-Edit- I might have jumped the gun a bit here... So BAR0 is the framebuffer address. BAR1 is 0, but BAR2 is an address 32megs above the framebuffer... Is it possible the hardware registers are mapped here?


Looking at the spec, and based on what you're seeing, I suspect what's being emulated is the GD5446 revision B.

This maps BAR0 as the linear framebuffer, but based on section 9.2.7.3, this is mapped as 4MByte apertures into the framebuffer (the chip only supports 4MB of VRAM.)

* 0-4MB - No byte swapping.
* 4-8MB - 16-bit word byte swapping.
* 8-12MB - 32-bit word byte swapping.
* 12-16MB - Video aperture.

Each aperture can map the MMIO interface to the bitblt engine in the last 256 bytes of fb memory.

So your driver interface, on x86, would use the first aperture (no byte swapping), and reference the bitblt engine either through the I/O region (BAR2?) or by enabling the MMIO into the FB.
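
In driver terms, that layout comes out roughly like this. It's only a sketch to illustrate the offsets described above (I'm assuming BAR0 has already been read and mapped, and the names are made up):

Code:
/* Sketch of the GD5446 BAR0 aperture layout described above.
 * Assumes bar0_base is the (already mapped) address read from BAR0. */
#include <stdint.h>

#define APERTURE_SIZE   0x400000u   /* 4 MB per aperture              */
#define AP0_NO_SWAP     0x000000u   /* 0-4 MB: no byte swapping       */
#define AP1_SWAP16      0x400000u   /* 4-8 MB: 16-bit byte swapping   */
#define AP2_SWAP32      0x800000u   /* 8-12 MB: 32-bit byte swapping  */
#define AP3_VIDEO       0xC00000u   /* 12-16 MB: video aperture       */

/* On little-endian x86 you would normally draw through the no-swap aperture. */
static volatile uint8_t *fb_no_swap(uintptr_t bar0_base)
{
    return (volatile uint8_t *)(bar0_base + AP0_NO_SWAP);
}

/* If MMIO-into-framebuffer is enabled, the bitblt registers appear in the
 * last 256 bytes of framebuffer memory inside the aperture. */
static volatile uint8_t *blt_mmio(uintptr_t bar0_base, uint32_t vram_size)
{
    return (volatile uint8_t *)(bar0_base + AP0_NO_SWAP + vram_size - 256);
}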

All that said, why not enable a better display option in QEMU? You're unlikely to be running this on real GD5446 hardware, so why write a driver for it? And virtio-vga also provides better capabilities.


 Post subject: Re: PCI Configuration Process
PostPosted: Thu Sep 09, 2021 7:24 am 

Joined: Tue Sep 15, 2020 8:07 am
Posts: 264
Location: London, UK
thewrongchristian wrote:
bloodline wrote:
During PCI Bus Enumeration I can find the gfx board which Qemu emulates, and I'm going to use that to test writing PCI based driver code. I've found the Tech Doc for the QEmu graphics chip (the Cirrus Logic GD5446), http://www.vgamuseum.info/index.php/cpu ... 42814695bc

On page 67, this helpfully (though vaguely) lists all the available hardware regs, but I'm confused as to how to access them. The table gives I/O Port addresses, some of which appear to match up with the PCI Configuration register address/offsets. I'm having trouble picturing how this works as I'm used to hardware registers just sitting in the normal address space.

-Edit- I might have jumped the gun a bit here... So BAR0 is the framebuffer address. BAR1 is 0, but BAR2 is an address 32megs above the framebuffer... Is it possible the hardware registers are mapped here?


Looking at the spec, and based on what you're seeing, I suspect what's being emulated is the GD5446 revision B.

This maps BAR0 as the linear framebuffer, but based on section 9.2.7.3, this is mapped as 4MByte apertures into the framebuffer (the chip only supports 4MB of VRAM.)

* 0-4MB - No byte swapping.
* 4-8MB - 16-bit word byte swapping.
* 8-12MB - 32-bit word byte swapping.
* 12-16MB - Video aperture.

Each aperture can map the MMIO interface to the bitblt engine in the last 256 bytes of fb memory.

So your driver interface, on x86, would use the first aperture (no byte swapping), and reference the bitblt engine either through the I/O region (BAR2?) or by enabling the MMIO into the FB.

All that said, why not enable a better display option in QEMU? You're unlikely to be running this on real GD5446 hardware, so why write a driver for it? And virtio-vga also provides better capabilities.


I fully agree. I'm not wedded to using the Cirrus chip; I just want to learn how to access devices on the PCI bus. A graphics card seems like the best candidate for this, as you can literally see the results of your actions as you do them. I'm totally happy to code for any display adaptor QEMU can emulate if I can find decent technical documents for it :) I've spent the afternoon trying to find out more about the Bochs Graphics Adaptor... but documentation is painfully sparse!

Ultimately I'm doing this so I can replace my PS/2 keyboard and mouse with USB drivers (I wouldn't mind also being able to use a blitter and having a real VBL interrupt too... a boy can dream).

I can use the PCI Config I/O Ports to read the configuration space; now I want to understand how I can actually read and write the registers, which must be mapped somewhere in the PCI space...
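
For reference, the config space access I'm doing is just the legacy 0xCF8/0xCFC mechanism, roughly like this (minimal sketch, my own wrapper names, nothing clever):

Code:
/* Sketch: legacy PCI configuration mechanism #1 (ports 0xCF8/0xCFC).
 * Assumes x86 port I/O; wrapper names are placeholders. */
#include <stdint.h>

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

static uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t offset)
{
    uint32_t address = (1u << 31)              /* enable bit                  */
                     | ((uint32_t)bus << 16)
                     | ((uint32_t)(dev & 0x1F) << 11)
                     | ((uint32_t)(fn & 0x07) << 8)
                     | (offset & 0xFC);        /* dword-aligned register offset */
    outl(0xCF8, address);
    return inl(0xCFC);
}

/* Example: read BAR0 (config offset 0x10) of device 00:02.0
 *   uint32_t bar0 = pci_config_read32(0, 2, 0, 0x10);
 */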

_________________
CuriOS: A single address space GUI based operating system built upon a fairly pure Microkernel/Nanokernel. Download latest bootable x86 Disk Image: https://github.com/h5n1xp/CuriOS/blob/main/disk.img.zip
Discord:https://discord.gg/zn2vV2Su


Last edited by bloodline on Thu Sep 09, 2021 9:54 am, edited 2 times in total.

 Post subject: Re: PCI Configuration Process
PostPosted: Thu Sep 09, 2021 7:36 am 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Ethin wrote:
You can do a lot with the information PCI provides you. Essentially, the PCI configuration process goes something like this:
  1. PCI device enumerator scans all PCI devices (including those behind bridges and such).
  2. A driver (say, a GFX adapter driver) loads and wants to access an Intel GPU in particular. Let's assume that this Intel GPU is a Skylake GPU. The driver would do the following:
    1. The driver would begin scanning each discovered PCI device in the system.
    2. The driver would check the base class code (BCC), sub-class code (SCC), and program interface (PI) to determine the type of device. In this instance, for the BCC the driver would check for 03h or 04h, meaning a display controller or multimedia device, respectively. For the SCC, the driver would check for either 00h or 80h, meaning a VGA-compatible device or other multimedia device, respectively. And for the PI, the driver would check for 00h.
    3. The driver could perform other checks, such as scanning the vendor and device IDs as well.
    4. Once the driver has determined that it can handle the device in question, it would scan the BARs and map them. It might also enable bus mastering, interrupts, etc.
  3. Now that the driver has determined it can handle the device and has performed any other necessary PCI configuration steps that it needs, it can access the device's registers and they will be automatically translated into PCI configuration accesses.
So, in sum, any driver can just:
  1. Enumerate PCI devices
  2. Scan the base class code, sub-class code, and program interface, if applicable
  3. Scan other PCI device properties if applicable
  4. Perform any PCI configuration required
  5. Map the BAR ranges
  6. Ready to drive the device
HTH


I don't think this is a useful method. You would first probe all PCI devices and add everything you find to a cache. Then you provide functions to return them in sequence, by type, or whatever your drivers might want. When you load a driver, it will use this interface to see if there is a device that it supports. Alternatively, you register drivers in a "database" by PCI IDs and load them as you find particular devices.
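
Roughly the shape I have in mind, just as a sketch (this isn't my actual code; the names are made up):

Code:
/* Sketch of the "probe once, cache, let drivers query" approach described above.
 * Structure and function names are illustrative, not from any real kernel. */
#include <stdint.h>
#include <stddef.h>

struct pci_device {
    uint8_t  bus, dev, fn;
    uint16_t vendor_id, device_id;
    uint8_t  class_code, subclass, prog_if;
    uint32_t bar[6];
};

#define MAX_PCI_DEVICES 64
static struct pci_device pci_cache[MAX_PCI_DEVICES];
static size_t pci_cache_count;   /* filled in by the one-time bus scan */

/* Drivers ask the cache for a device they can handle, e.g. by class code
 * (0x03 = display controller) or by vendor/device ID. */
struct pci_device *pci_find_by_class(uint8_t class_code, uint8_t subclass)
{
    for (size_t i = 0; i < pci_cache_count; i++) {
        if (pci_cache[i].class_code == class_code &&
            pci_cache[i].subclass   == subclass)
            return &pci_cache[i];
    }
    return NULL;
}

struct pci_device *pci_find_by_id(uint16_t vendor, uint16_t device)
{
    for (size_t i = 0; i < pci_cache_count; i++) {
        if (pci_cache[i].vendor_id == vendor && pci_cache[i].device_id == device)
            return &pci_cache[i];
    }
    return NULL;
}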


 Post subject: Re: PCI Configuration Process
PostPosted: Thu Sep 09, 2021 11:43 am 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5099
bloodline wrote:
On page 67, this helpfully (though vaguely) lists all the available hardware regs, but I'm confused as to how to access them. The table gives I/O Port addresses, some of which appear to match up with the PCI Configuration register address/offsets. I'm having trouble picturing how this works as I'm used to hardware registers just sitting in the normal address space.

Yeah, that table is pretty awful. The PCI configuration registers are sitting in the PCI configuration address space, and everything else is sitting in ordinary I/O space accessible at the legacy VGA I/O port addresses. Additionally, those legacy I/O ports can be mapped a second time according to BAR1, and a handful of registers accessible using legacy VGA I/O ports can be mapped a third time also according to BAR1.

bloodline wrote:
-Edit- I might have jumped the gun a bit here... So BAR0 is the framebuffer address. BAR1 is 0, but BAR2 is an address 32megs above the framebuffer... Is it possible the hardware registers are mapped here?

Unfortunately, the manual (and the QEMU code) says the registers you're looking for would be mapped with BAR1. Since BAR1 isn't populated, you'll have to fall back to using VGA I/O ports. (Or figure out why BAR1 isn't enabled, enable it, and assign a reasonable address to it.)
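
If you do end up on the I/O port path, the access pattern for everything behind the index/data pairs is the usual VGA one, something like this (sketch only; which index does what is in the datasheet):

Code:
/* Sketch: indexed VGA register access via the legacy I/O ports.
 * 0x3C4/0x3C5 = sequencer index/data, 0x3D4/0x3D5 = CRTC index/data (color mode).
 * The Cirrus extension registers sit behind these same index/data pairs;
 * the specific indices are chip-dependent and come from the GD5446 datasheet. */
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

static uint8_t vga_seq_read(uint8_t index)
{
    outb(0x3C4, index);      /* select the register */
    return inb(0x3C5);       /* read its value      */
}

static void vga_seq_write(uint8_t index, uint8_t value)
{
    outb(0x3C4, index);
    outb(0x3C5, value);
}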


 Post subject: Re: PCI Configuration Process
PostPosted: Thu Sep 09, 2021 5:57 pm 

Joined: Sun Jun 23, 2019 5:36 pm
Posts: 618
Location: North Dakota, United States
rdos wrote:
Ethin wrote:
You can do a lot with the information PCI provides you. Essentially, the PCI configuration process goes something like this:
  1. PCI device enumerator scans all PCI devices (including those behind bridges and such).
  2. A driver (say, a GFX adapter driver) loads and wants to access an Intel GPU in particular. Let's assume that this Intel GPU is a Skylake GPU. The driver would do the following:
    1. The driver would begin scanning each discovered PCI device in the system.
    2. The driver would check the base class code (BCC), sub-class code (SCC), and program interface (PI) to determine the type of device. In this instance, for the BCC the driver would check for 03h or 04h, meaning a display controller or multimedia device, respectively. For the SCC, the driver would check for either 00h or 80h, meaning a VGA-compatible device or other multimedia device, respectively. And for the PI, the driver would check for 00h.
    3. The driver could perform other checks, such as scanning the vendor and device IDs as well.
    4. Once the driver has determined that it can handle the device in question, it would scan the BARs and map them. It might also enable bus mastering, interrupts, etc.
  3. Now that the driver has determined it can handle the device and has performed any other necessary PCI configuration steps that it needs, it can access the device's registers and they will be automatically translated into PCI configuration accesses.
So, in sum, any driver can just:
  1. Enumerate PCI devices
  2. Scan the base class code, sub-class code, and program interface, if applicable
  3. Scan other PCI device properties if applicable
  4. Perform any PCI configuration required
  5. Map the BAR ranges
  6. Ready to drive the device
HTH


I don't think this is a useful method. You would first probe all PCI devices and add everything you find to a cache. Then you provide functions to return them in sequence, by type, or whatever your drivers might want. When you load a driver, it will use this interface to see if there is a device that it supports. Alternatively, you register drivers in a "database" by PCI IDs and load them as you find particular devices.

This is a useful method if you're just wanting to get a driver working. It's also useful for monolithic (non-modular) kernels. And this is only one way to do it. There are lots of different ways of implementing PCIe -- there isn't a "right" and "wrong" way necessarily, other than the enumeration of devices.


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 3:17 am 

Joined: Tue Sep 15, 2020 8:07 am
Posts: 264
Location: London, UK
Octocontrabass wrote:
bloodline wrote:
On page 67, this helpfully (though vaguely) lists all the available hardware regs, but I'm confused as to how to access them. The table gives I/O Port addresses, some of which appear to match up with the PCI Configuration register address/offsets. I'm having trouble picturing how this works as I'm used to hardware registers just sitting in the normal address space.

Yeah, that table is pretty awful. The PCI configuration registers are sitting in the PCI configuration address space, and everything else is sitting in ordinary I/O space accessible at the legacy VGA I/O port addresses. Additionally, those legacy I/O ports can be mapped a second time according to BAR1, and a handful of registers accessible using legacy VGA I/O ports can be mapped a third time also according to BAR1.


OK, I get it, so that was all about using x86 I/O ports to access the PCI Configuration "registers" as well as the VGA I/O ports. I'm not keen on using the old x86 I/O ports, for no other reason than that I'm just not used to them; I have to make a few more mental leaps than I'm comfortable with. I guess I'm going to have to look into ACPI at some point :(

Octocontrabass wrote:

bloodline wrote:
-Edit- I might have jumped the gun a bit here... So BAR0 is the framebuffer address. BAR1 is 0, but BAR2 is an address 32megs above the framebuffer... Is it possible the hardware registers are mapped here?

Unfortunately, the manual (and the QEMU code) says the registers you're looking for would be mapped with BAR1. Since BAR1 isn't populated, you'll have to fall back to using VGA I/O ports. (Or figure out why BAR1 isn't enabled, enable it, and assign a reasonable address to it.)


Ahhh yes, I see that now! PCI Config offset 14 (BAR1) can be configured via registers CF8, CF4, and CF3... which is oddly cryptic, as such "registers" don't appear to be located anywhere... Anyway, I might give up on the old Cirrus chip and follow @thewrongchristian 's advice and try to find some documentation for QEMU's virtio display adaptor...

_________________
CuriOS: A single address space GUI based operating system built upon a fairly pure Microkernel/Nanokernel. Download latest bootable x86 Disk Image: https://github.com/h5n1xp/CuriOS/blob/main/disk.img.zip
Discord:https://discord.gg/zn2vV2Su


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 4:53 am 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Octocontrabass wrote:
bloodline wrote:
I'm still fuzzy as to how exactly the BARs work. As my PCI driver builds a database of what it finds on the PCI bus, I notice that some devices have their BAR registers filled with values, while some just have zeros. Am I able to just use whatever value they have set, or should I write my own values there? I assume the documentation for the devices will explain what each BAR points to!?

During boot, PC firmware enumerates PCI and assigns reasonable default values to the BARs. You can use those values. You don't need to write your own values unless you're working with PCI hotplug or certain non-PC hardware.

The type (memory or I/O) and amount of the resources assigned to each BAR are fixed in hardware. For example, a device that needs 1 MiB of MMIO will have a BAR that only allows memory addresses aligned to 1 MiB boundaries. This is how the firmware (and you) can assign resources without knowing anything about the device.

When you're writing drivers for PCI devices, the documentation for each device should explain the purpose of that device's BARs.


That's my impression too. In basically all PCs I've tested on (perhaps excluding one), the BARs are fixed by the BIOS, and so there is no use in trying to find your own areas or memory blocks for them. Besides, I even assume that the memory map takes into account that the BIOS has assigned certain memory areas to PCI devices, so if you manipulate those, you might lose memory.

OTOH, it's sometimes necessary to enable bus mastering and similar stuff, but I'd say the BIOS does most of the work on this and there is no need to redo it.
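
For what it's worth, the sizing rule quoted above is also how the BIOS (or you) can probe a BAR without knowing the device: write all-ones, read it back, restore it, and the bits that stay zero give the size. A sketch for 32-bit memory BARs, assuming the usual config space helpers:

Code:
/* Sketch: decode and size a 32-bit memory BAR.
 * pci_config_read32()/pci_config_write32() are assumed to be the usual
 * 0xCF8/0xCFC helpers; BARn lives at config offset 0x10 + 4*n. */
#include <stdint.h>

uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void     pci_config_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint32_t val);

static uint32_t pci_bar_size(uint8_t bus, uint8_t dev, uint8_t fn, int bar)
{
    uint8_t  off  = 0x10 + 4 * bar;
    uint32_t orig = pci_config_read32(bus, dev, fn, off);

    if (orig & 1)
        return 0;   /* bit 0 set means an I/O BAR; not handled in this sketch */

    /* Strictly you'd disable memory decode in the command register while
     * doing this, since the BAR briefly holds a bogus value. */
    pci_config_write32(bus, dev, fn, off, 0xFFFFFFFF);
    uint32_t probe = pci_config_read32(bus, dev, fn, off);
    pci_config_write32(bus, dev, fn, off, orig);     /* restore the BIOS value */

    probe &= ~0xFu;                   /* mask off the type/prefetchable bits */
    return probe ? (~probe + 1) : 0;  /* size = two's complement of the mask */
}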


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 5:13 am 

Joined: Thu May 17, 2007 1:27 pm
Posts: 999
rdos wrote:
In basically all PCs I've tested on (perhaps excluding one), the BARs are fixed by the BIOS, and so there is no use in trying to find your own areas or memory blocks for them.

That is true on x86, but on basically all other architectures, the OS is responsible for setting up the BARs (e.g., aarch64, RISC-V). And yes, you generally cannot use all address ranges for BARs, only certain address ranges are routed by the chipset to the PCIe root complex.
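
Where the OS does have to assign them, the gist is: take an address out of the host bridge's MMIO window (as described by the device tree or ACPI, not an arbitrary one), write it into the BAR, then set memory-space enable in the command register. A rough sketch for a 32-bit BAR, with the config helpers assumed:

Code:
/* Sketch: assign an address from a host-bridge window to a 32-bit memory BAR
 * and enable memory decode. 64-bit BARs and error handling omitted. */
#include <stdint.h>

uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void     pci_config_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint32_t val);

static void pci_assign_bar(uint8_t bus, uint8_t dev, uint8_t fn,
                           int bar, uint32_t addr_in_window)
{
    /* addr_in_window must lie inside a range the bridge actually routes to
     * PCI and be aligned to the BAR's size. */
    pci_config_write32(bus, dev, fn, 0x10 + 4 * bar, addr_in_window);

    /* Command register is the low 16 bits of the dword at offset 0x04;
     * bit 1 = memory space enable. Mask off the status half (RW1C bits). */
    uint32_t cmd = pci_config_read32(bus, dev, fn, 0x04) & 0xFFFF;
    pci_config_write32(bus, dev, fn, 0x04, cmd | (1u << 1));
}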

_________________
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 5:50 am 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Korona wrote:
And yes, you generally cannot use all address ranges for BARs, only certain address ranges are routed by the chipset to the PCIe root complex.


I don't understand that. I'd assume that the bus mastering function must work on the whole address space, and BARs are just address ranges that the PCI device (rather than memory) should respond to.


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 6:11 am 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
thewrongchristian wrote:
bloodline wrote:
During PCI Bus Enumeration I can find the gfx board which Qemu emulates, and I'm going to use that to test writing PCI based driver code. I've found the Tech Doc for the QEmu graphics chip (the Cirrus Logic GD5446), http://www.vgamuseum.info/index.php/cpu ... 42814695bc

On page 67, this helpfully (though vaguely) lists all the available hardware regs, but I'm confused as to how to access them. The table gives I/O Port addresses, some of which appear to match up with the PCI Configuration register address/offsets. I'm having trouble picturing how this works as I'm used to hardware registers just sitting in the normal address space.

-Edit- I might have jumped the gun a bit here... So BAR0 is the framebuffer address. BAR1 is 0, but BAR2 is an address 32megs above the framebuffer... Is it possible the hardware registers are mapped here?


Looking at the spec, and based on what you're seeing, I suspect what's being emulated is the GD5446 revision B.

This maps BAR0 as the linear framebuffer, but based on section 9.2.7.3, this is mapped as 4MByte apertures into the framebuffer (the chip only supports 4MB of VRAM.)

* 0-4MB - No byte swapping.
* 4-8MB - 16-bit word byte swapping.
* 8-12MB - 32-bit word byte swapping.
* 12-16MB - Video aperture.

Each aperture can map the MMIO interface to the bitblt engine in the last 256 bytes of fb memory.

So your driver interface, on x86, would use the first aperture (no byte swapping), and reference the bitblt engine either through the I/O region (BAR2?) or by enabling the MMIO into the FB.

All that said, why not enable a better display option in QEMU? You're unlikely to be running this on real GD5446 hardware, so why write a driver for it? And virtio-vga also provides better capabilities.


Why do video cards map the LFB to BARs? Based on my own experience with programming PCIe devices, the BARs are horribly slow to access, mostly because the host processor cannot create long requests and instead does 32-bit or 64-bit operations, which scale poorly over PCIe. To get real speed, you must create large transfers over the PCIe bus, and this can be done by creating memory schedules. Actually, every other modern PCIe device will only have its configuration in BARs and then depend on memory schedules. So, why don't video cards do this? Legacy reasons? Is this why Intel graphics cards are so hopelessly slow?


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 6:40 am 

Joined: Thu May 17, 2007 1:27 pm
Posts: 999
rdos wrote:
Korona wrote:
And yes, you generally cannot use all address ranges for BARs, only certain address ranges are routed by the chipset to the PCIe root complex.


I don't understand that. I'd assume that the bus mastering function must work on the whole address space, and BARs are just address ranges that the PCI device (rather than memory) should respond to.

Yes, but host bridges might not (= don't, on some hardware) implement address decoding for the full 64-bit address space for the purpose of config space access, even if they do implement it for DMA (= bus mastering). For example, hardware might choose to do that because config space accesses have different memory ordering semantics than bus mastering (i.e., config space writes must not be posted; in ARM terms, they are nGnRnE (no gather, no reordering, no early write acknowledgement) and not nGnRE like normal MMIO).

On non-x86 platforms, it's also common that PCIe devices see RAM at different addresses than the CPU, i.e., inbound and outbound windows of the host bridge are not identical.

_________________
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 7:18 am 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Korona wrote:
rdos wrote:
Korona wrote:
And yes, you generally cannot use all address ranges for BARs, only certain address ranges are routed by the chipset to the PCIe root complex.


I don't understand that. I'd assume that the bus mastering function must work on the whole address space, and BARs are just address ranges that the PCI device (rather than memory) should respond to.

Yes, but host bridges might not (= don't, on some hardware) implement address decoding for the full 64-bit address space for the purpose of config space access, even if they do implement it for DMA (= bus mastering). For example, hardware might choose to do that because config space accesses have different memory ordering semantics than bus mastering (i.e., config space writes must not be posted; in ARM terms, they are nGnRnE (no gather, no reordering, no early write acknowledgement) and not nGnRE like normal MMIO).

On non-x86 platforms, it's also common that PCIe devices see RAM at different addresses than the CPU, i.e., inbound and outbound windows of the host bridge are not identical.


OK. I also assume this is related to the BIOS remapping of ordinary RAM to other addresses so it's not lost. This remapping must be restricted in some ways, and if certain areas have no RAM it would be easier & faster to always involve the PCI address decoder directly.


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 12:41 pm 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5099
bloodline wrote:
PCI Config offset 14 (BAR1) can be configured via registers CF8, CF4, and CF3... which is oddly cryptic, as such "registers" don't appear to be located anywhere...

I took another look at the datasheet and it's in there, section 3.2, but it's not something you can manipulate in software: it's controlled by whether resistors are wired up to specific lines of the memory data bus.

But, QEMU is supposed to emulate a card with BAR1 enabled. Are you sure the BARs you're looking at belong to the Cirrus card?
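
A quick sanity check is to match the IDs before trusting the BARs, something like this (the IDs are from memory -- 0x1013 is Cirrus Logic and 0x00B8 should be the 5446 -- so verify them against your own enumeration dump):

Code:
/* Sketch: confirm the function really is the Cirrus card before using its BARs.
 * Vendor/device IDs are from memory; double-check against your own dump. */
#include <stdint.h>
#include <stdbool.h>

uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

static bool is_cirrus_5446(uint8_t bus, uint8_t dev, uint8_t fn)
{
    uint32_t id = pci_config_read32(bus, dev, fn, 0x00);  /* device<<16 | vendor */
    return (id & 0xFFFF) == 0x1013 && (id >> 16) == 0x00B8;
}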

bloodline wrote:
Anyway, I might give-up with the old Cirrus chip and follow @thewrongchristian 's advice and try to find some documentation for QEMU's virtio display adaptor...

The VIRTIO specification should help.

rdos wrote:
Why do video cards map the LFB to BARs?

Probably for firmware. Actual GPU drivers use DMA to move things around, they usually don't access the framebuffer directly. (Does "memory schedules" refer to a type of DMA?)


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 2:23 pm 

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Octocontrabass wrote:
Probably for firmware. Actual GPU drivers use DMA to move things around, they usually don't access the framebuffer directly. (Does "memory schedules" refer to a type of DMA?)


Yes. The kernel driver will construct a memory schedule of work to be done, and then the PCIe device will read & write the schedule with DMA (bus mastering).
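
The generic shape is just a descriptor ring in system RAM that the device is pointed at once and then fetches on its own, roughly like this (illustrative only, not any particular device's layout):

Code:
/* Sketch: the generic shape of a DMA descriptor ring ("memory schedule").
 * The device is told the ring's physical address once (via a BAR register),
 * then fetches descriptors itself with bus-mastered reads.
 * Field layout is illustrative only; real devices define their own. */
#include <stdint.h>

struct dma_descriptor {
    uint64_t buffer_phys;   /* physical address of the data buffer  */
    uint32_t length;        /* bytes to transfer                    */
    uint32_t flags;         /* e.g. direction, interrupt-on-done    */
};

#define RING_ENTRIES 256

struct dma_ring {
    struct dma_descriptor desc[RING_ENTRIES];
    volatile uint32_t head;   /* next slot the driver fills          */
    volatile uint32_t tail;   /* last slot the device has completed  */
};

/* Driver side: queue one transfer, then (elsewhere) ring the device's
 * doorbell register so it starts fetching. */
static int ring_submit(struct dma_ring *ring, uint64_t buf_phys,
                       uint32_t len, uint32_t flags)
{
    uint32_t next = (ring->head + 1) % RING_ENTRIES;
    if (next == ring->tail)
        return -1;                       /* ring is full */
    ring->desc[ring->head] = (struct dma_descriptor){ buf_phys, len, flags };
    ring->head = next;
    return 0;
}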

Anyway, this seems to explain why performance with the LFB is slower than using the GPU interface, something that appears a bit illogical at first. However, BARs never have the same performance as bus mastering. It also explains why the LFB should only be written and not read: when reading through a BAR, the CPU needs to wait for the PCIe device to fetch the contents from its local RAM and send them back as a PCIe transaction, whereas with the correct caching settings and a decently implemented PCIe device, the CPU shouldn't need to wait for the device to handle a write.
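
Which is the usual argument for a shadow buffer: compose in ordinary RAM, never read the LFB back, and push whole scanlines out in one go so write combining can turn them into bursts. A sketch, assuming a 32bpp linear framebuffer that's already mapped:

Code:
/* Sketch: draw into a RAM back buffer and only ever write the LFB.
 * Assumes a 32bpp linear framebuffer already mapped at 'lfb'; pitch in bytes. */
#include <stdint.h>
#include <string.h>

static void present(volatile uint32_t *lfb, const uint32_t *backbuf,
                    int width, int height, int pitch_bytes)
{
    for (int y = 0; y < height; y++) {
        /* One large sequential write per scanline; with write combining
         * enabled this becomes bursts rather than single dword writes. */
        memcpy((void *)((uintptr_t)lfb + (size_t)y * pitch_bytes),
               backbuf + (size_t)y * width,
               (size_t)width * sizeof(uint32_t));
    }
}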


 Post subject: Re: PCI Configuration Process
PostPosted: Fri Sep 10, 2021 3:40 pm 

Joined: Tue Apr 03, 2018 2:44 am
Posts: 401
rdos wrote:
Octocontrabass wrote:
Probably for firmware. Actual GPU drivers use DMA to move things around, they usually don't access the framebuffer directly. (Does "memory schedules" refer to a type of DMA?)


Yes. The kernel driver will construct a memory schedule of work to be done, and then the PCIe device will read & write the schedule with DMA (bus mastering).

Anyway, this seems to explain why performance with the LFB is slower than using the GPU interface, something that appears a bit illogical at first. However, BARs never have the same performance as bus mastering. It also explains why the LFB should only be written and not read: when reading through a BAR, the CPU needs to wait for the PCIe device to fetch the contents from its local RAM and send them back as a PCIe transaction, whereas with the correct caching settings and a decently implemented PCIe device, the CPU shouldn't need to wait for the device to handle a write.


I think you're overthinking this.

This is the CL GD5446 we're talking about, a value PCI GFX chipset from the 1990s. Its "GPU" was a simple blitter, all the VRAM was on the device side of the PCI bus, and it relied on write combining for burst performance when writing to the FB.

