Huh, I assumed you'd typoed and meant 6800 instead of 68000, but I checked and the QL did indeed use the 68K.
The 6809 was in the 6800 family and was an 8-bit CPU, but it was closer to a 16-bit design in several ways. Still, I suppose the use a chip is put to is the key factor.
Motorola didn't have a true 16-bit processor, at least not one that was used in any home computer I know of. With the 68K, Motorola decided to leapfrog past 16-bit straight to a 32-bit architecture, but they needed to be able to interface with existing 16-bit hardware, so they gave it a 16-bit external data bus. They even made an 8-bit-bus version, the 68008, because they'd underestimated how long it would take the rest of the industry to move to 16-bit. Their estimates hadn't taken the growing base of home computer users into account because, well, none of the chip makers (except perhaps MOS Technology) thought home computers would be anything more than a brief fad; they expected (correctly, though not to the degree they assumed) that their core business would be in embedded microcontrollers.
The 6800 drew heavily on the design of the PDP-11, so when the time came to design the 68K, Motorola followed the same path from the 6800 that Digital had taken going from the PDP-11 to the VAX: a similar design expanded in every direction, with what was at the time a large set of general-purpose registers for a microprocessor (eight data registers, with no fixed accumulator or index registers; there were also eight address registers, with A7 serving as the stack pointer in separate user and supervisor copies, plus a status register and the program counter). The instruction set was also vastly enriched, though nowhere near to the extent that the VAX's was.
Intel had the same plan even earlier (and made the same underestimate Motorola did), but for logistical reasons slapped together the 8086 as a stopgap while they put the finishing touches on the i432. The 8086 was a 16-bit extended version of the 8080A - they couldn't quite make it source compatible, but by using segmented memory they made it easy to adapt 8080 and Z80 code to it, so long as the coder stuck to what would later be called the 'tiny model'.
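To make the segmented-memory trick concrete, here's a minimal sketch of real-mode 8086 address arithmetic (illustrative only; the function name is mine, not any toolchain's). A 20-bit physical address is formed as segment * 16 + offset, and when all the segment registers point at the same place - the tiny model - the program sees one flat 64 KiB space, which is exactly the 8080's view of memory:

```python
def physical_address(segment: int, offset: int) -> int:
    """Real-mode 8086: 20-bit physical address = segment * 16 + offset.

    Both inputs are 16-bit values; the result wraps at 1 MiB, as it did
    on the original 8086 (which had only 20 address pins).
    """
    return ((segment << 4) + offset) & 0xFFFFF

# Tiny model: CS = DS = SS, so every 16-bit pointer lands in the same
# 64 KiB window - just like an 8080 program's address space.
CS = DS = SS = 0x2000
print(hex(physical_address(CS, 0x0100)))  # 0x20100
print(hex(physical_address(DS, 0xFFFF)))  # 0x2ffff
```

Note that many different segment:offset pairs map to the same physical byte (0x2000:0x0100 and 0x2010:0x0000 both give 0x20100), which is part of what made the scheme both flexible and notorious.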
Problem is, the i432 was far too ambitious for the chip manufacturing techniques of the late 1970s: a super-CISC design meant to have hardware support for abstract data types (or OOP, depending on who you ask - not that most people see a difference these days), capability-based addressing, value-return parameter passing, and garbage collection. When it became clear that a) most of the peripheral hardware out there was still 8-bit, leading to sluggish sales for the 8086, and b) the i432 project was running out of control, they added the 8088 - an 8086 with an 8-bit external bus - to squeeze what they assumed would be an extra year or two out of the design, hoping that the problems Motorola, Zilog, and National Semiconductor were having with their 32-bit designs would give Intel a chance to finish the i432 before the competition got theirs out. Oh, the irony.