Hi,
onlyonemac wrote:
Brendan wrote:
Choose any type of application you like and any output device you like. Then choose any 2 combinations of one or more input devices (e.g. speech alone vs. keyboard+mouse). Finally; describe in detail how that front end (for your one chosen application and your one chosen output device) must be different depending on which of 2 and only 2 combinations of input devices are being used. I expect you to anticipate my reply and try your best to find case/s that are irrefutable (including attempting to find work-arounds to any problems yourself). Choose the application and devices wisely. Assume that you have one and only one chance to prove beyond any doubt that you are not a liar.
Output device:
- Screen
Input devices:
- Keyboard and mouse
- Handwriting recognition
Application:
- Word processor
For keyboard and mouse input:
Frontend will take text input from the keyboard and add it to the document. Frontend will also implement keyboard shortcuts (let's ignore the details of whether these are handled in the OS, the GUI toolkit, or the application itself) to allow the user to quickly access common commands (change formatting, insert a picture, save the document, etc.). Frontend will allow the user to move the insertion point by clicking the mouse at the desired location, and to select text by dragging through it. Frontend will display menus and/or toolbars with all available commands (change formatting, insert a picture, save the document, etc.) which may be selected with the mouse or navigated with the keyboard (alt-f to open the file menu, for example).
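As a rough sketch, the shortcut handling in this frontend could be little more than a lookup table from key chords to commands; everything below (command names, key bindings, function names) is invented purely for illustration:

Code:
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical editor commands this frontend understands. */
typedef enum { CMD_NONE, CMD_SAVE, CMD_INSERT_IMAGE, CMD_BOLD } command_t;

/* One keyboard shortcut: a modifier flag plus a letter. */
typedef struct { bool ctrl; char key; command_t cmd; } shortcut_t;

static const shortcut_t shortcuts[] = {
    { true, 's', CMD_SAVE },
    { true, 'i', CMD_INSERT_IMAGE },
    { true, 'b', CMD_BOLD },
};

/* Map a key press to a command; CMD_NONE means "just insert the character". */
static command_t lookup_shortcut(bool ctrl, char key)
{
    for (size_t i = 0; i < sizeof(shortcuts) / sizeof(shortcuts[0]); i++) {
        if (shortcuts[i].ctrl == ctrl && shortcuts[i].key == key)
            return shortcuts[i].cmd;
    }
    return CMD_NONE;
}

int main(void)
{
    /* "ctrl+s" pressed: the frontend performs the save command. */
    printf("ctrl+s -> command %d\n", lookup_shortcut(true, 's'));
    /* plain "s" pressed: no shortcut matches, so the character goes into the document. */
    printf("s      -> command %d\n", lookup_shortcut(false, 's'));
    return 0;
}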
For handwriting recognition input:
Frontend will take text input from the handwriting recognition system (implemented in either software or hardware) and add it to the document. Frontend will also implement one or more gestures to enter a "command" mode, whereafter the user can write the name of a command (with the available commands probably correlating to those that appear in the menus of the keyboard and mouse frontend) to have it performed. Frontend may implement a degree of "intelligence" to determine the intended command from a slightly incorrect name (e.g. if the user writes "add picture" instead of the correct command "insert image", the "insert image" command will nevertheless be performed). Frontend will use a similar system of gestures and/or commands to allow the user to move the insertion point and to select text.
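The "slightly incorrect name" handling could, at its simplest, be an alias table mapping things the user might write to the canonical command; again, a rough sketch with made-up names:

Code:
#include <stdio.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical mapping from things a user might write to the real command. */
typedef struct { const char *written; const char *command; } alias_t;

static const alias_t aliases[] = {
    { "insert image", "insert image" },
    { "add picture",  "insert image" },   /* tolerated variant */
    { "add image",    "insert image" },   /* tolerated variant */
    { "save",         "save document" },
};

/* Return the canonical command for whatever the user wrote, or NULL. */
static const char *resolve_command(const char *written)
{
    for (size_t i = 0; i < sizeof(aliases) / sizeof(aliases[0]); i++) {
        if (strcmp(aliases[i].written, written) == 0)
            return aliases[i].command;
    }
    return NULL;  /* unknown command; the frontend must handle this case */
}

int main(void)
{
    printf("\"add picture\" -> %s\n", resolve_command("add picture"));
    printf("\"frobnicate\"  -> %s\n",
           resolve_command("frobnicate") ? "known" : "not a valid command");
    return 0;
}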
The differences:
The first frontend must display menus and/or toolbars on the screen, whereas the second frontend can use the entire screen for the document. The first frontend lists the available commands in a menu and receives a "click" event on the command selected by the user, whereas the second frontend receives the name of the command entered by the user directly (which, furthermore, may not be a valid command). The first frontend accepts input in multiple "modes" (i.e. "edit" mode and "command" mode) simultaneously, in the sense that the user can type on the keyboard and select a menu option with the mouse without having to explicitly change modes; whereas the second frontend can accept input in only one mode at a time, because the same process of writing text on the handwriting recognition device is used both to enter text into the document and to enter commands.
Why the frontends need to be different:
To navigate the first frontend with the handwriting recognition system would require the use of gestures to navigate the menus in the same way that they are navigated with the keyboard (especially if we are going to abstract the difference between the keyboard and the handwriting recognition system). So, for example, to save the document the user would have to draw one gesture to represent the "alt" key, then write the letter "f", then write the letter "s" to jump to the "save" option. Or they might have to draw gestures to navigate up and down if there isn't a shortcut key for the desired option (much as they would navigate around the menus with the arrow keys on a keyboard). Or they would have to draw one gesture to represent the "ctrl" key, then write the letter "s" to input the keyboard shortcut for the "save" option. In short, trying to abstract input devices by e.g. treating a handwriting recognition system the same as a keyboard (i.e. something that sends "keypress" events or "navigate up"/"navigate down" events) would, in a case like this, make the interface more difficult to use for the handwriting recognition user than it needs to be.
Alternatively, you might try to make the keyboard and mouse frontend the same as the handwriting recognition frontend. If we ignore the mouse for a moment, this leaves us with keyboard input taking the place of the handwriting recognition system. So now for the user to navigate the interface with the keyboard, they would be required to first press a particular key combination to enter "command" mode (to represent the enter-command-mode gestures on the handwriting recognition system) and then type the name of the command that they want, instead of being able to see a nice list of all the commands and navigate straight to them by pressing a single letter or a few arrow keys.
Note that most of these differences may be abstracted by the OS and/or the device drivers, but in such a setup the application would have to say "these are my available commands and these are the input modes that I use" to the OS, and then the OS determines the best way to present the interface to the user (e.g. if there's a mouse present it can display a set of menus, if there's a keyboard present it can allow the use of keyboard shortcuts, if there's a handwriting recognition system present it can allow the user to write the name of the command in a "command" mode). While this is a perfectly plausible way of structuring the interface subsystem of the OS, it restricts the flexibility for applications to design their own interfaces beyond choosing what commands to present to the user; furthermore, I believe you are structuring your events along the lines of "navigate this way" or "enter this letter" rather than "select this command", which, as I have explained in the previous two paragraphs, is going to make one or more interfaces ("frontends") less efficient than they should be.
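For illustration, the "application tells the OS its commands and input modes" idea might amount to handing the OS a description like the following sketch; none of this is a real API, and the structures and fields are assumptions:

Code:
#include <stdio.h>

/* Hypothetical description an application might hand to the OS/UI layer. */
typedef struct {
    const char *name;        /* human-readable command name               */
    int         message;     /* message sent back when the command fires  */
} app_command_t;

typedef struct {
    const app_command_t *commands;
    int                  command_count;
    int                  wants_text_input;   /* "edit" mode               */
    int                  wants_command_mode; /* explicit "command" mode   */
} app_interface_t;

int main(void)
{
    static const app_command_t cmds[] = {
        { "save document", 1 },
        { "insert image",  2 },
    };
    app_interface_t iface = { cmds, 2, 1, 1 };

    /* A real system would pass 'iface' to the OS; here we just print it. */
    for (int i = 0; i < iface.command_count; i++)
        printf("command \"%s\" -> message %d\n",
               iface.commands[i].name, iface.commands[i].message);
    return 0;
}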
First, the hand-writing system is typically a monochrome LCD touch screen designed for use with a stylus; which means it'd be trivial to have a "touch pad mode" that can be used for navigating menus, etc. It would also be trivial to add (e.g.) buttons and/or scroll bars around the edges of the hand-writing system that can be used without a special mode (including buttons for up/down/left/right events).
For your hand-writing system there's no way for the user to discover which commands do what (which is hideously broken), and a menu system is required for that purpose. The menu system (that is required for both keyboard and handwriting, even if it's just so the user can discover commands and/or keyboard shortcuts) would do what most/all menu systems do and have things like "_O_pen File", "_S_ave", etc.; where the underscore tells the user the keyboard shortcut or command. Essentially, you see "_O_pen File" in the menu and know that "alt + O" is the keyboard shortcut to open a file, and know that writing "O" in the handwriting system is the command to open a file. When a user presses "alt + Z" the keyboard driver sends the "z command" without knowing/caring if the front end supports the command. When the user draws "Z" the handwriting system sends the "z command" without knowing/caring if the front end supports the command.
Note that the menu system is part of a widget service (and implemented in a separate process). When the menu system is started it loads a file describing the menus for the current locale (for internationalisation) - e.g. for an English user it might load the file "yourapp/menu.en" and for a Russian user it might load the file "yourapp/menu.ru". The front end just forwards "user commands" (whether they're key presses or drawn by handwriting) to the menu system, and when the menu system is open the front end forwards mouse clicks and "handwriting system in touchpad mode double taps" (which are the same event as mouse clicks) to the menu system. The menu system sends back whatever the file (for the user's locale) says to send back.
For English, for the "S command" (from the "alt+S" keyboard shortcut or from drawing "S") the file might say to send back a "save file message", and clicking on "_S_ave File" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the exact same "save file message". For Russian, for the "C" command the file might say to send back a "save file message", and clicking on "сохранить файл" in the menu (with either mouse or "handwriting system in touchpad mode") would also cause the menu system to send back the same "save file message". The front end just receives the "save file message" that was defined/created by the front-end's own developer (e.g. using an "enum"). The front end doesn't care what the input devices are, and also doesn't care what the current locale is (and doesn't care that the Russian word for "save" is "сохранить", where it makes no sense at all to use "S" for Russian).
Of course internationalisation for everything else uses a similar scheme based on one file per locale/language. You don't print "Hello world", you ask a service to find "string #1 for the user's locale/language" and print that. This means that front-ends can be translated to different languages just by editing files; which is important because very few programmers know every language. It also means that (with a suitable search order) the end user can override these files. E.g. I could create a "home/bcos/yourapp/menu.en" file (and a "home/bcos/yourapp/strings.en" file) that would be used instead of the files that came with the front end, and I could completely change the menu system to suit myself; and if I want to change "_O_pen File" to "_L_oad File" then that's perfectly fine - pressing "alt+L" on the keyboard or drawing "L" on the handwriting system will cause the same "open file message" to be sent from the menu system to the front-end.
Brendan wrote:
I expect you to anticipate my reply and try your best to find case/s that are irrefutable (including attempting to find work-arounds to any problems yourself).
I don't think you've found a case that is irrefutable. If anything, I'd be tempted to say you've done bizarre things (like not having menus for the handwriting system); in addition to relying on assumptions (like "the application would have to say "these are my available commands and these are the input modes that I use" to the OS") that aren't necessary (and can't work for other reasons - like proper internationalisation); specifically to create problems that should not have existed.
Try again.
Cheers,
Brendan