Keep in mind that the classifications are mainly a way to talk about and compare different designs - it is a pattern language, not a set of checkboxes.
It's that same problem one gets whenever someone tries to use a descriptive language as a prescriptive one. It comes up all the time when talking about design patterns (which, keep in mind, were mainly meant for understanding
existing code when refactoring, not for building new software).
Hell, I've seen it come up in homebrewing when discussing beer styles, where you get arguments over whether a given beer 'is a' New England IPA versus an English IPA versus a Northwestern IPA (or whatever style it is closest to and/or meant to fit into). The important part is what it actually is - categorization is mostly to make it easier to discuss with others.
As for this particular question, the matter might seem to mostly come down to 'do the low-level drivers run in Supervisor/System privilege, or in User privilege?', but it is a lot more complex than that.
If there are no runtime (as opposed to boot-up) drivers other than those needed to multiplex the CPU, interrupts, and memory, and userland processes can access the hardware directly when allowed to by the kernel, then it is definitely an exokernel. Strictly speaking, exokernels don't have drivers at all; a given process is free to access the hardware with any sort of device code it chooses, so long as it follows the system's rules about device and memory sharing. Processes can use shared driver libraries, but they don't need to; there can be unique driver code that is specific to a given program. The idea here is that each application can optimize the device-driving code to its exact needs, rather than having generalized, abstracted device drivers that aren't optimal for any one program but must be used by all.
One can view a
type-I hypervisor as a specialized exokernel which a) runs in a separate 'hypervisor' mode, and b) can multiplex entire operating systems (with their own supervisor mode), rather than just individual userland processes.
If the OS has true drivers which (aside from any boot-up drivers) all run as separate userland processes, and uses a message-passing system for both driver calls and system calls, it is definitely a microkernel. Indeed, the use of message-passing for IPC and synchronization - and
only message-passing, even if other abstractions are built on top of that - is more definitive of micros than userspace drivers are, as the original micro concept was intended for systems without separate privilege levels. Equally implicit in this is that the drivers are themselves Actors (in the Actor Model sense), and do not share memory with the kernel or other userland processes. The whole question of where the driver runs is secondary in a microkernel as the term was initially defined; most of the early ones had no kernel space at all. If a system doesn't exclusively use message-passing for the underlying IPC and mutex mechanisms, and the drivers aren't separate processes which the caller sends messages to and receives messages from, then it isn't a 'real' microkernel.
If, conversely, the drivers are linked into the kernel when the kernel is built, and all run in kernel space, it is definitely a monolithic kernel - if the drivers aren't part of the kernel binary itself, it can't truly be a monolithic kernel. Early Unix systems were monolithic, but later ones moved away from that.
If the drivers can be loaded at boot time, or dynamically during runtime, but run in kernel space (or a separate driver space, which some hardware still supports but which has fallen out of favor since the 1980s with the rise of Unix), then it is a modular kernel. This includes systems such as Linux, later versions of Unix, and Multics, as well as any system that uses a separate driver privilege level. That was actually the classic kernel model: while some very early experimental OSes were monolithic, once hardware memory protection with separate privilege levels was introduced, modular kernels became the dominant design, until Unix - with its strictly two-level privilege model - came along and pushed them aside for unrelated reasons.
If the device-driving code is generated dynamically by the kernel at runtime from a collection of driver templates, with optimizations applied to the specific circumstances of the call, then it is a synthesizing kernel. So far, there is only one notable example of this - Henry Massalin's Synthesis kernel - and even that wasn't a 'pure' system; it kinda-sorta virtualized each process (the so-called 's-machines'), so that the 'supervisor mode' was specialized on a per-process basis. A modern interpretation would probably use a containerizing hypervisor for the primary kernel, to get a result somewhere between an exokernel and a containerizing system with very minimal 'rump' OSes for each process.
If some drivers run in kernel/driver space, and others in user space, or if the drivers are split between a low-level kernel section and a high-level userspace section (as seems to be the case here), then it could reasonably be called a hybrid kernel. However, that term is a lot 'fuzzier' than most of the others, being more a term of inclusion than of exclusion.
But again, these are just ways of describing different design philosophies; reality is usually a lot messier. Very few OS kernels actually fit neatly into any one of these pigeonholes.