Quote:
On the contrary, treating mouse axes as keys simplifies things. In my window manager I will need to parse keyboard events anyways so I can interpret Super + any other key as a command (e.g. on my Linux desktop Super + C switches focus to the next window, which I will also implement in my WM).
Although a window manager processes both mouse and keyboard inputs, their processing should have little in common. With key and character input, there is typically a concept of a "current" window that will receive the input, whereas for mouse input, movement controls a cursor and button presses are sent to whatever is under the cursor. Yes, I've used systems where this wasn't the case, which wasn't much fun.
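The two routing rules can be sketched in a few lines. This is a minimal illustration, not from any real window manager; all class and method names here are hypothetical:

```python
# Sketch of the two routing rules: key input goes to the focused
# ("current") window, button presses go to whatever is under the cursor.
# All names are hypothetical, not from any real WM.

class Window:
    def __init__(self, name, rect):
        self.name = name
        self.rect = rect  # (x, y, width, height)

    def contains(self, x, y):
        wx, wy, ww, wh = self.rect
        return wx <= x < wx + ww and wy <= y < wy + wh


class WindowManager:
    def __init__(self, windows):
        self.windows = windows     # topmost first
        self.focused = windows[0]  # "current" window for key input
        self.cursor = (0, 0)

    def on_key(self, key):
        # Key/character input always goes to the focused window,
        # regardless of where the cursor is.
        return (self.focused.name, key)

    def on_mouse_move(self, dx, dy):
        # Mouse movement only moves the cursor; no window is involved yet.
        x, y = self.cursor
        self.cursor = (x + dx, y + dy)

    def on_mouse_button(self, button):
        # Button presses are delivered to the topmost window
        # under the cursor.
        x, y = self.cursor
        for w in self.windows:
            if w.contains(x, y):
                return (w.name, button)
        return (None, button)
```

Note that `on_key` never consults the cursor and `on_mouse_button` never consults the focus, which is exactly why the two paths share so little.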
Quote:
Mouse{X,Y} means a movement happened along a certain axis.
Except they don't, when those are remapped joystick axes. Then they represent the current absolute position of the stick. When using a joystick to control a cursor, the position is sampled at regular intervals, converted to a velocity and integrated over time. With a mouse, each report represents a definite distance that the mouse has moved since the last report. So if you treat the two as interchangeable, you end up making each mouse driver simulate a joystick, or vice versa.
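The difference between the two models can be shown in a short sketch. The function names, the speed constant and the sampling interval are all made up for illustration:

```python
# Sketch of the two input models described above (hypothetical names):
# a mouse reports relative deltas, while a joystick reports an absolute
# stick position that must be sampled and integrated over time.

def mouse_to_cursor(cursor, dx, dy):
    # Each mouse report is a definite distance moved since the last report,
    # so it is simply added to the cursor position.
    return (cursor[0] + dx, cursor[1] + dy)


def joystick_to_cursor(cursor, stick_x, stick_y, speed, dt):
    # The stick position (-1.0 .. 1.0 per axis) is sampled every dt
    # seconds, scaled to a velocity, and integrated into the position.
    return (cursor[0] + stick_x * speed * dt,
            cursor[1] + stick_y * speed * dt)
```

Holding the stick fully to the right for one second at speed 200 moves the cursor 200 units, while a mouse covers that distance only by actually reporting 200 units of motion. Collapsing both into one "axis" event type forces one side to fake the other's semantics.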
In games, using a joystick for aiming simulates the challenge of handling a heavy firearm in real life, or it is done simply because it is convenient to have all game controls accessible without shifting your hand. Clicking buttons in a UI is not supposed to be challenging, which is why no one expects to operate a computer with a joystick. In the rare case where you do want this, it is better to have a separate component that translates joystick input into cursor input, thereby isolating the UI from the idiosyncrasies of joysticks.