Actually, it (pt 2, not 1) sounds an awful lot like
Xen. Xen can either use paravirtualization (i.e. the guest OS generally requires modifications), or take advantage of VT (and AMD's equivalent, AMD-V) on newer processors to do hardware-assisted virtualization.
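For what it's worth, figuring out whether the CPU supports hardware-assisted virtualization at all is just a CPUID check (a hypervisor still has to look at the BIOS/MSR lock bits before it can actually enable it). A rough user-space sketch in C:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: ECX bit 5 = VMX, i.e. Intel VT(-x) */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            puts("Intel VT (VMX) supported");

        /* CPUID leaf 0x80000001: ECX bit 2 = SVM, i.e. AMD-V */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            puts("AMD-V (SVM) supported");

        return 0;
    }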
Xen is actually quite fast (far faster than Bochs, or even normal QEMU... probably similar to KQEMU), and has pretty good support in the Linux world (it was the Big Thing until KVM basically stole most of its thunder). IIRC, Xen's architecture relies on a
privileged operating system running in 'Domain 0' (aka dom0), generally Linux, which is given special permission to control the other virtual machines (dubbed 'Domain U' or 'domU'), as well as direct access to the hardware. The domU machines normally aren't given direct access to the hardware (though Xen at least used to support PCI pass-through, so if you had, say, a second network adapter, you could basically hand it directly to one of the VMs).

Memory could be allocated to the VM up front, when it starts booting (à la QEMU/Bochs). Another option would be memory hotplugging (if the guest OS supports it), or a custom driver that reserves most of the guest's memory for itself at boot and tells the hypervisor it can take those pages back; then, as the guest needs more memory, the driver asks the hypervisor for chunks and 'frees' them back to the guest OS to use.
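That last trick is basically a memory 'balloon' driver. Here's a rough, self-contained sketch of the idea in C; the hv_* "hypercalls" and the use of malloc/free as the guest page allocator are made-up stand-ins just to show the flow, not any real Xen/KVM interface:

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096

    /* Made-up "hypercalls": in a real guest these would trap into the
     * hypervisor; here they just count pages so the sketch compiles. */
    static unsigned long hv_pages;                /* pages the hypervisor currently holds */
    static int hv_return_page(void *page) { (void)page; hv_pages++; return 0; }
    static int hv_request_page(void)      { if (!hv_pages) return -1; hv_pages--; return 0; }

    /* The balloon: pages the driver has taken away from the guest OS. */
    static void  *balloon[1024];
    static size_t balloon_size;

    /* Inflate: grab pages from the guest's allocator and hand them to the
     * hypervisor, so the guest effectively sees less memory than it booted with. */
    static void balloon_inflate(size_t pages)
    {
        while (pages-- && balloon_size < 1024) {
            void *page = malloc(PAGE_SIZE);       /* stand-in for the guest page allocator */
            if (!page || hv_return_page(page))
                break;
            balloon[balloon_size++] = page;
        }
    }

    /* Deflate: the guest needs memory back, so ask the hypervisor for pages
     * and 'free' them back into the guest's allocator. */
    static void balloon_deflate(size_t pages)
    {
        while (pages-- && balloon_size) {
            if (hv_request_page())
                break;
            free(balloon[--balloon_size]);        /* page is usable by the guest again */
        }
    }

    int main(void)
    {
        balloon_inflate(256);
        printf("ballooned out %zu pages\n", balloon_size);
        balloon_deflate(128);
        printf("now holding %zu pages\n", balloon_size);
        return 0;
    }

Inflating steals pages from the guest so the hypervisor can give them to someone else; deflating hands them back when the guest is under memory pressure.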
Architecturally, though, the OP's idea sounds a bit more like
KVM (not to be confused with a KVM switch), since the virtual machine monitor sounds like it would be integrated directly into the operating system rather than being a stand-alone piece of code that depends on a 'dom0' to control the devices.
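To illustrate the difference: with KVM the kernel itself is the hypervisor, exposed to user space as /dev/kvm, and a program like QEMU just drives it with ioctls. A minimal sketch (Linux-only, error handling mostly omitted):

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }

        /* Real KVM ioctls: the host kernel is the VMM here. */
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vm   = ioctl(kvm, KVM_CREATE_VM, 0UL);   /* an empty virtual machine  */
        int vcpu = ioctl(vm,  KVM_CREATE_VCPU, 0UL); /* one virtual CPU inside it */

        /* From here the user-space side maps guest memory with
         * KVM_SET_USER_MEMORY_REGION and loops on KVM_RUN. */
        (void)vcpu;
        return 0;
    }

There's no dom0 in that picture; the host kernel and its normal drivers play the role of the privileged OS.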
Edit: I wrote 'VT-d' by mistake. VT-d is Intel's version of an IOMMU (which makes it possible to virtualize DMA), while 'VT' (strictly, VT-x) is their CPU virtualization extension, which is what really made hardware-assisted virtualization possible on x86 (among other things, it effectively added a "Ring -1").