Are microkernels more difficult to develop?
The micro-kernel itself is a lot easier to develop than a monolithic kernel; however, you shouldn't let that give you a false sense of optimism.
For an equivalent amount of functionality (i.e. the micro-kernel itself, plus the drivers, services, etc. in user-space, plus the communication protocols needed to make it all work), a micro-kernel-based OS is harder to develop than a monolithic kernel.
I can see lots of issues, for example: the difficulty of debugging the interaction between several different programs; needing to get shared library support working to avoid wasting lots of memory; and needing a careful design of how the different servers communicate with each other to avoid big redesigns later on down the line.
Debugging is actually easier: because drivers, etc. are in user-space you can use normal user-space debugging tools; because everything is isolated, bugs are more likely to be immediately visible rather than showing up as hard-to-diagnose random corruption of something unrelated; and because everything is isolated you don't need to care about the other pieces while debugging one piece.
Typically (not always) shared libraries waste memory because they're "shared" by a small number of processes that each use only part of the library (in other words, you waste RAM on parts of the shared library that aren't actually used by any process). In addition to wasting memory, they add overhead, because compilers and link-time optimisers can't inline and/or optimise the library's functions; that includes optimisations that reduce memory consumption, like constant folding and dead code elimination. However, shared libraries are not really any worse for micro-kernels than they are for monolithic kernels; they're just always bad, except for special cases like a programming language's system library when a lot of processes use that language (e.g. "libC").
Needing a careful design of the communication between pieces (drivers, services, etc.) is the real problem. However, careful design of the communication is beneficial even for monolithic kernels (where it's "communication between pieces in the kernel"), because it avoids long-term churn, where other pieces break and have to be updated/modified/rewritten because you changed something else. Note that for a hobbyist monolithic kernel there's very little chance of long-term churn, because there's very little risk of "long term" in the first place (e.g. never any real need to maintain backward compatibility with previous versions, etc.).
Are there many real advantages to the microkernel design? (I know about the stability/security benefits of having drivers/servers run in user mode)
Security/stability, debugging, and making the importance of careful design more obvious are all benefits of micro-kernels. These also have secondary benefits (making it possible to trust third-party drivers and services, making it faster/easier for people to write drivers and services, etc).
Beyond that, it depends on how the OS is designed. If you want to put in some extra effort, it's easier to do fault tolerance with a micro-kernel, easier to do a distributed system with a micro-kernel, easier to do real-time tasks with a micro-kernel, easier to do "high availability" (e.g. updating drivers, etc. without rebooting) with a micro-kernel, and so on.
Can they be made quick enough to be practical? (L4 seems to be quick, but implementing a POSIX API over it would probably punish it)
POSIX is extremely bad for micro-kernels: it's not designed to avoid or mitigate the additional cost of the inter-process communication that an OS designed around a micro-kernel requires. Sadly, lots of micro-kernels (in the past) have implemented POSIX, and this has given micro-kernels a far worse reputation for performance than they deserve. Note: don't get me wrong here; a micro-kernel must sacrifice some performance to gain other benefits (security, etc.), it's just that the amount of performance sacrificed is exacerbated by POSIX compliance.